CN111341437B - Digestive tract disease judgment auxiliary system based on tongue image - Google Patents


Info

Publication number
CN111341437B
Authority
CN
China
Prior art keywords: tongue, digestive tract, image, network, tongue image
Prior art date
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number
CN202010108365.1A
Other languages: Chinese (zh)
Other versions: CN111341437A
Inventor
左秀丽
周嘉伟
冯健
李延青
李真
邵学军
季锐
杨晓云
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list):
Qingdao Medcare Digital Engineering Co ltd
Qilu Hospital of Shandong University
Application filed by Qingdao Medcare Digital Engineering Co ltd and Qilu Hospital of Shandong University
Priority claimed from application CN202010108365.1A
Published as CN111341437A; application granted and published as CN111341437B

Classifications

    • G16H 50/20: ICT specially adapted for medical diagnosis; computer-aided diagnosis, e.g. based on medical expert systems
    • G06F 16/353: Information retrieval of unstructured textual data; clustering; classification into predefined classes
    • G06F 16/434: Information retrieval of multimedia data; query formulation using image data, e.g. images, photos, pictures taken by a user
    • G06F 16/483: Information retrieval of multimedia data; retrieval using metadata automatically derived from the content
    • G06T 7/0012: Image analysis; biomedical image inspection
    • G06V 30/40: Document-oriented image-based pattern recognition
    • G06V 30/10: Character recognition
    • G06T 2207/20081: Indexing scheme for image analysis; training; learning
    • G06T 2207/20084: Indexing scheme for image analysis; artificial neural networks [ANN]


Abstract

The invention belongs to the field of digestive tract disease judgment assistance and provides a digestive tract disease judgment assistance system based on a tongue image. The system comprises a tongue image acquisition part for acquiring a complete tongue image; a tongue feature processing part for extracting tongue features from the complete tongue image and generating a tongue feature text description, generating a bag of words for the current tongue feature text description based on a loaded lexicon, and calculating the TF-IDF vector of the tongue feature text description corresponding to the tongue image; and a digestive tract disease judging part for calculating the distances between the TF-IDF vector of the tongue feature text description corresponding to the current complete tongue image and the tongue image description feature vectors corresponding to the various digestive tract diseases, sorting the digestive tract disease types in ascending order of distance, and screening out and recommending the top k digestive tract diseases. The system can automatically judge digestive tract diseases from the tongue features of a tongue image, is not limited by time or space, and can improve the accuracy of digestive tract disease judgment.

Description

Digestive tract disease judgment auxiliary system based on tongue image
Technical Field
The invention belongs to the field of digestive tract disease judgment assistance, and particularly relates to a digestive tract disease judgment assistance system based on a tongue image.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
The tongue is one of the important components of the human digestive tract. It consists of many criss-crossing striated muscles covered by a special mucous membrane. In traditional Chinese medicine (TCM), the tongue is closely related to the zang-fu organs through the meridians and collaterals, especially to the spleen and stomach, so it is often used to infer diseases of the digestive tract. Tongue diagnosis is part of the inspection in TCM: changes and characteristics of the tongue, such as the tongue texture and tongue coating, are closely related to diseases of the digestive tract, are often used as references for diagnosis and treatment, and play an important role in TCM clinical practice. When a digestive tract disease occurs, tongue characteristics such as texture, fur and tooth marks often show regular changes, which can be captured and analyzed by the naked eye or from images.
The inventors found the following problems in the judgment of digestive tract diseases: 1) the existing auxiliary means for detecting digestive tract diseases are numerous, including blood sample testing, barium meal imaging, digestive endoscopy and the like, but many of these procedures cause some harm or discomfort to patients and are greatly limited in when and where they can be used; 2) different types of digestive tract diseases show specific tongue characteristics, but at present judging a digestive tract disease from tongue characteristics relies on on-site summarization and conjecture based on the experience and knowledge of TCM doctors, is likewise limited in time and space, and its accuracy can be affected by human error.
Disclosure of Invention
In order to solve the above problems, the present invention provides a tongue-image-based digestive tract disease judgment assistance system. Based on the correspondence between the tongue features of tongue images and digestive tract diseases, it automatically judges the type of digestive tract disease by calculating the distance between the TF-IDF vector of the tongue feature text description corresponding to the current complete tongue image and the tongue image description feature vectors corresponding to the various digestive tract diseases. The system is not limited by time or space, improves the convenience of digestive tract disease judgment, and assists doctors in improving diagnostic accuracy.
In order to achieve the purpose, the invention adopts the following technical scheme:
a first aspect of the present invention provides a digestive tract disease determination assisting system based on a tongue image, including:
a tongue image acquisition section for acquiring a complete tongue image;
the tongue feature processing part is used for extracting tongue features from the complete tongue image and generating a tongue feature text description, generating the bag of words of the current tongue feature text description based on a loaded lexicon, and calculating the TF-IDF vector of the tongue feature text description corresponding to the tongue image;
and the digestive tract disease judging part is used for calculating the distances between the TF-IDF vector of the tongue feature text description corresponding to the current complete tongue image and the tongue image description feature vectors corresponding to the various digestive tract diseases, sorting the digestive tract disease types in ascending order of distance, and screening out and recommending the top k digestive tract diseases, wherein k is a positive integer greater than or equal to 1.
A second aspect of the invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
receiving a complete tongue image;
extracting tongue features from the complete tongue image and generating a tongue feature text description, generating the bag of words of the current tongue feature text description based on a loaded lexicon, and calculating the TF-IDF vector of the tongue feature text description corresponding to the tongue image;
and calculating the distance between the TF-IDF vector described by the tongue feature text corresponding to the current complete tongue image and the tongue image description feature vector corresponding to various digestive tract diseases, sorting the types of the digestive tract diseases according to the ascending order of the distance, screening the top k digestive tract diseases, and recommending, wherein k is a positive integer greater than or equal to 1.
A third aspect of the invention provides a computer apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the program:
receiving a complete tongue image;
extracting tongue features from the complete tongue image and generating a tongue feature text description, generating the bag of words of the current tongue feature text description based on a loaded lexicon, and calculating the TF-IDF vector of the tongue feature text description corresponding to the tongue image;
and calculating the distance between the TF-IDF vector described by the tongue feature text corresponding to the current complete tongue image and the tongue image description feature vector corresponding to various digestive tract diseases, sorting the types of the digestive tract diseases according to the ascending order of the distance, screening the top k digestive tract diseases, and recommending, wherein k is a positive integer greater than or equal to 1.
The invention has the beneficial effects that:
(1) The system processes the complete tongue image, providing an accurate data basis for assisting the judgment of the digestive tract disease type and ensuring the accuracy of the system's digestive tract disease judgment;
(2) The system calculates the distances between the TF-IDF vector of the tongue feature text description corresponding to the current complete tongue image and the tongue image description feature vectors corresponding to the various digestive tract diseases, sorts the digestive tract disease types in ascending order of distance, and screens out and recommends the top k digestive tract diseases. It is not limited by time or space, automatically judges the digestive tract disease type from the correspondence between the tongue features of the tongue image and the various digestive tract diseases, improves the convenience of digestive tract disease judgment, and assists doctors in improving diagnostic accuracy.
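The ranking described above can be sketched as follows. This is a minimal illustration with hypothetical disease names and feature vectors, and it assumes Euclidean distance; the embodiment only specifies a vector distance, not a particular metric.

```python
import math

def top_k_diseases(query_vec, disease_vecs, k=3):
    """Rank candidate diseases by ascending Euclidean distance between
    the query TF-IDF vector and each disease's tongue-description
    feature vector, then keep the top k recommendations."""
    dists = []
    for name, vec in disease_vecs.items():
        d = math.sqrt(sum((q - v) ** 2 for q, v in zip(query_vec, vec)))
        dists.append((d, name))
    dists.sort()                       # ascending order of distance
    return [name for _, name in dists[:k]]

# Hypothetical per-disease feature vectors, for illustration only.
diseases = {
    "gastritis": [0.9, 0.1, 0.0],
    "ulcer":     [0.2, 0.8, 0.1],
    "reflux":    [0.1, 0.2, 0.9],
}
print(top_k_diseases([0.85, 0.15, 0.05], diseases, k=2))
```

The query vector here sits closest to the "gastritis" centroid, so that disease is recommended first.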
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
Fig. 1 is a schematic diagram of a digestive tract disease judgment assisting system based on tongue images according to an embodiment of the present invention;
FIG. 2 is a diagram of key points of a tongue image according to an embodiment of the present invention;
FIG. 3 is a flowchart of tongue keypoint detection and tongue detection using soft-MTCNN according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a PNet network according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an RNet network according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an ONet network according to an embodiment of the present invention;
FIG. 7 is a flowchart of a convolutional network and multi-layer LSTM based sentence generator generation tongue feature text description of an embodiment of the present invention;
FIG. 8 is a schematic diagram of a model structure of FastText according to an embodiment of the present invention;
fig. 9 is a diagram of the specific process of obtaining the digestive tract disease type corresponding to a digestive tract disease endoscopic diagnosis report according to an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples.
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the invention. As used herein, the singular forms are intended to include the plural forms unless the context clearly indicates otherwise; it should be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of the stated features, steps, operations, devices, components, and/or combinations thereof.
Interpretation of terms:
TF-IDF: term frequency-inverse document frequency;
the main idea of TF-IDF is: if a certain word or phrase appears frequently in one tongue image description, TF, and rarely appears in other tongue image descriptions, the word or phrase is considered to have good category distinguishing capability and is suitable for classification. TF-IDF is actually TF × IDF, where TF (term frequency) indicates the frequency of occurrence of the entry in the tongue image description; IDF (inverse Document frequency), the main idea is that if the tongue image description including the word is less, the discrimination of the word is larger, that is, the IDF is larger, k words with larger TF-IDF values are taken as feature vectors corresponding to the keywords described by the tongue image, and then TF-IDF vectors corresponding to the tongue image are obtained.
Example one
Fig. 1 shows the principle of the digestive tract disease judgment assisting system based on tongue image of the present embodiment, and as shown in fig. 1, the digestive tract disease judgment assisting system based on tongue image of the present embodiment includes:
(1) a tongue image acquisition section for acquiring a complete tongue image.
In order to ensure that the acquired tongue image is complete, this embodiment performs tongue key point detection and tongue detection with an improved MTCNN network called soft-MTCNN. As shown in fig. 2, the embodiment labels 5 tongue key points: the tongue root (2 points), the tongue waist (2 points) and the tongue tip (1 point). A complete tongue image is detected accurately by detecting these 5 key points; a qualified tongue image is one in which the 5 tongue key points and the tongue detection frame are clearly visible. All five tongue key points lie on the tongue contour line.
The soft-MTCNN model consists of a PNet, an RNet and an ONet connected in series. The input of the PNet is the tongue image, and its output is the probability that the image contains a tongue together with all candidate tongue bounding boxes. The input of the RNet is the output of the PNet, and its output is the bounding boxes that actually contain a tongue. The input of the ONet is the output of the RNet, and its output is the key points and their position information.
As shown in fig. 3, the implementation flow of tongue key point detection and tongue detection by soft-MTCNN in this embodiment is as follows:
step 101: obtaining a candidate frame through a PNet network;
As shown in fig. 4, the input tongue image is scaled to 12 × 12 × 3. During training, the network head has 3 branches: tongue classification (whether the prediction box contains a tongue), tongue bounding box regression, and tongue key point localization. During testing, the output of this step is only the 4 coordinate values and the score of each of the N bounding boxes; the 4 coordinate values are corrected by the output of the regression branch, and the score can be regarded as the classification probability that the box contains a tongue.
Step 102: deleting the candidate frames without tongues through the RNet network;
the input to this step is a P-Net generated bounding box cropped image, each bounding box having a size resize of 24 x 3, as shown in fig. 5. Similarly, the output of this step is only 4 coordinate information and score of the M bounding boxes during the test, and the 4 coordinate information is also corrected by the output of the regression branch.
Step 103: adjusting the result with the ONet network to output the predicted tongue bounding box and the positions of the 5 key points, and automatically photographing based on these two pieces of information;
as shown in fig. 6, the input information of the ONet network is a bounding box output by R-Net, the size of the bounding box is adjusted to 48 × 3, 4 pieces of coordinate information including P bounding boxes, score and key point information are output, and if the collected image includes the bounding boxes and the key point information of 5 tongues, the image collecting device (for example, a mobile terminal camera device) is started to automatically take a picture.
(2) The tongue feature processing part is used for extracting tongue features from the complete tongue image and generating a tongue feature text description, generating the bag of words of the current tongue feature text description based on the loaded lexicon, and calculating the TF-IDF vector of the tongue feature text description corresponding to the tongue image.
As shown in fig. 7, the deep-learning-based generation of a text description of tongue quality, tongue coating and their changes from a tongue image is implemented with the Image Caption technique, which automatically generates descriptive sentences from a picture, indicating not only the objects it contains but also their interrelations, their attributes and the activities they jointly participate in. This embodiment adopts end-to-end training, unifying image feature extraction and text generation into a single end-to-end model, and innovatively combines EfficientNet-B7 with a multi-layer LSTM to perform the Image Caption task on tongue images.
After the tongue image automatically captured by the mobile terminal is resized to 224 × 224 × 3, image features are extracted with EfficientNet-B7, then flattened and passed through a linear layer to obtain the tongue image feature vector. The tongue image features serve as the input of the multi-layer LSTM, which predicts a text description such as: the tongue is pale white, the tongue body is old and tender, the coating is thick, dry-moist, white and partially gathered. The specific structure of EfficientNet follows the paper "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks", which proposes a novel model scaling method that uses a simple but efficient compound coefficient to scale up CNNs in a more structured manner; EfficientNet-B7 is the network obtained after scaling the baseline model.
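As a rough illustration of the compound-coefficient scaling idea that EfficientNet is built on: depth, width and resolution are scaled jointly by a single coefficient phi. The constants below are the grid-searched values reported in the EfficientNet paper, not parameters disclosed in this patent.

```python
# Compound scaling: for a compound coefficient phi,
#   depth  scales as alpha**phi
#   width  scales as beta**phi
#   resolution scales as gamma**phi
# subject to alpha * beta**2 * gamma**2 ~= 2, so each increment of phi
# roughly doubles the FLOPs. Values from the EfficientNet paper.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

d, w, r = compound_scale(2)
# e.g. phi = 2 deepens the network by about 1.44x
```

EfficientNet-B7 corresponds to a large phi applied to the B0 baseline; the exact per-variant settings are in the paper, not here.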
The tongue image features include the tongue papillae, tongue color, tongue shape and tongue coating.
Tongue papillae: the surface of the tongue mucosa bears many small protrusions, the tongue papillae, which are divided into filiform, fungiform, foliate and circumvallate papillae; they differ in size but are similar in shape, and the average volume, density, distribution and color of the papillae change under different disease conditions;
Tongue color: the tongue body shows different colors under different health conditions, including pale white, red, magenta and bluish purple, which reflect different disease states;
Tongue shape: the overall tongue shape can be classified as old or tender, fat or thin, tooth-marked, with tongue sores and the like, and mainly reflects changes in the tongue structure;
Tongue proper: comprises the tongue papillae, tongue color and tongue shape described above;
Tongue coating: thickness (identified from the outer contour), dryness (identified from the degree of reflection), corrosion (identified from reflection continuity and papilla size);
Tongue coating color: white coating, yellow coating, black-grey coating;
Coating type: the distribution of the tongue coating can be identified as covering the whole tongue, gathered locally, or partially peeled.
(3) The digestive tract disease judging part is used for calculating the distances between the TF-IDF vector of the tongue feature text description corresponding to the current complete tongue image and the tongue image description feature vectors corresponding to the various digestive tract diseases, sorting the digestive tract disease types in ascending order of distance, and screening out and recommending the top k digestive tract diseases, wherein k is a positive integer greater than or equal to 1.
In a specific implementation, in the digestive tract disease judging part, the tongue image description feature vector corresponding to each digestive tract disease is computed as follows:
obtain the tongue feature text descriptions corresponding to the various digestive tract diseases; generate the bag of words of each description based on the loaded lexicon; compute the TF-IDF vectors of all tongue feature text descriptions to form a document vector matrix; and, after dimensionality reduction, cluster the document vector matrix to obtain the tongue image description feature vector corresponding to each digestive tract disease.
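The per-disease feature vector computation can be illustrated with a simple centroid average as a stand-in for the clustering step; the labels and vectors below are hypothetical.

```python
def disease_centroids(labeled_vecs):
    """Average the TF-IDF vectors of all descriptions that share a
    disease label, giving one description feature vector per disease.
    This is a simple stand-in for the clustering step in the text."""
    sums, counts = {}, {}
    for label, vec in labeled_vecs:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lb: [s / counts[lb] for s in acc] for lb, acc in sums.items()}

centroids = disease_centroids([
    ("gastritis", [1.0, 0.0]),
    ("gastritis", [0.8, 0.2]),
    ("ulcer",     [0.1, 0.9]),
])
# centroids["gastritis"] is the mean of its two description vectors
```

A real implementation would first reduce the dimensionality of the document vector matrix before clustering, as the text describes.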
In a specific implementation, FastText is used as the text classification model for diagnostic reports. It combines successful practices from natural language processing (NLP) and machine learning, including representing sentences with a bag of words and n-grams, using subword information, and sharing information across classes through hidden representations together with a hierarchical softmax. The model structure of FastText is shown in fig. 8.
Step 301: input of FastText (input)
As shown in fig. 8, x1, x2, ..., xN-1, xN represent the n-gram vectors in a text, and each feature is an average of word vectors. Since every word has a word vector, when using an n-gram the vectors of its n words are averaged; for example, the vectors of the three characters 'Huan' (欢), 'Ying' (迎) and 'Ni' (你) are averaged to obtain the word vector xk of the phrase '欢迎你' ('welcome you'), which serves as an input of FastText.
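The averaging of constituent word vectors into a single FastText input feature can be shown directly; the 3-dimensional embeddings below are hypothetical.

```python
def average_vectors(vectors):
    """FastText input features: average the embedding vectors of an
    n-gram's constituent tokens into one input vector."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# Hypothetical 3-d embeddings for the three characters of 欢迎你.
huan, ying, ni = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]
xk = average_vectors([huan, ying, ni])  # input vector for the n-gram
```

Each component of xk is simply the mean of the corresponding components of the token vectors.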
Step 302: hidden layer
The hidden layer averages all the obtained n-gram word vectors into a final vector, similar to CBOW. This vector then passes through a softmax layer; the difference from CBOW is that FastText does not restrict itself to the ordinary softmax, as described in step 303.
Step 303: the output layer adopts layered softmax;
for data sets containing a large number of classes, FastText uses a hierarchical classifier (rather than a flat structure, the traditional softmax). Different categories are integrated into a tree structure, a FastText model uses a hierarchical softmax skill, the hierarchical softmax skill is established on the basis of Huffman coding, labels are coded, the number of model prediction targets can be greatly reduced, Fastext also utilizes the fact that the categories are unbalanced (the occurrence frequency of some categories is more than that of other categories), and the tree structure for representing the categories is established by using a Huffman algorithm, so that the depth of the tree structure with frequently appearing categories is smaller than the depth of the attribute structure with not frequently appearing categories, and the calculation efficiency is further improved.
In the digestive tract disease judging part, the process of acquiring tongue characteristic text descriptions corresponding to various digestive tract diseases comprises the following steps:
retrieving digestive tract disease endoscopic diagnosis reports from a database, wherein each digestive tract disease endoscopic diagnosis report is associated with a tongue characteristic text description;
and obtaining the types of the digestive tract diseases corresponding to the endoscopic diagnosis reports of the corresponding digestive tract diseases by using the text classification model of the diagnosis reports, and further obtaining tongue characteristic text descriptions corresponding to various digestive tract diseases.
In a specific implementation, the digestive tract disease endoscopic diagnosis reports are uploaded as images, and the text in each report must be extracted with OCR. As shown in fig. 9, this embodiment adopts an end-to-end recognition method that integrates the text detection and text recognition of OCR into a single network. Text detection uses a YOLO v2 framework with the fully connected layer removed, fused with an RPN, to obtain candidate text regions; bilinear sampling then uniformly maps text regions of different sizes into feature sequences of uniform height and variable width. Text recognition uses an RNN + CTC structure to obtain the recognized character string. Finally, NMS (non-maximum suppression) guided by the recognition scores is applied to the detected bounding boxes to obtain accurate detection boxes.
The YOLO v2 network used for text detection keeps the first 18 convolutional layers and 5 max-pooling layers of YOLO v2 and removes the final fully connected layer; the final output feature map is (W/32) × (H/32) × 1024, where W and H are the width and height of the input image.
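The output shape of the truncated backbone follows from the five 2x downsampling pool layers (2**5 = 32); a tiny helper makes the arithmetic explicit.

```python
def yolo_v2_feature_shape(w, h):
    """The truncated YOLO v2 backbone (18 conv layers, 5 max-pool
    layers) downsamples by 2**5 = 32, giving a (W/32, H/32, 1024)
    feature map for a W x H input image."""
    assert w % 32 == 0 and h % 32 == 0, "input sides should be multiples of 32"
    return (w // 32, h // 32, 1024)

print(yolo_v2_feature_shape(416, 416))  # (13, 13, 1024)
```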
The specific process for obtaining the type of the digestive tract disease corresponding to the corresponding digestive tract disease endoscope diagnosis report comprises the following steps:
step 401: acquiring a candidate detection frame;
The candidate boxes (region proposals) are obtained with an anchor mechanism similar to the RPN in Faster R-CNN. There are 5 regression parameters in this process: besides the x, y, w and h values of a normal RPN regression, an angle value θ is regressed for the text. Each anchor position contains 14 anchor boxes, whose sizes are obtained from the training set by K-means clustering. The selection criterion for positive and negative samples is that the anchor box with the maximum IoU against the ground truth of a sample is taken as a positive sample, and the rest are negative samples. Finally, NMS is used as post-processing on the candidate detection boxes.
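Clustering anchor-box sizes with K-means is commonly done with an IoU-based distance (1 − IoU) rather than Euclidean distance, so that large and small boxes are treated fairly. A minimal numpy sketch with synthetic box sizes (the patent uses 14 anchors; fewer are used here for brevity, and the details are an assumption, not the patented procedure):

```python
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between (w, h) pairs, assuming all boxes share a top-left corner."""
    inter = np.minimum(boxes[:, None, 0], centroids[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """K-means over ground-truth (w, h) sizes using 1 - IoU as the distance."""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Maximizing IoU is equivalent to minimizing the 1 - IoU distance.
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids

# Synthetic ground-truth text-box sizes from a hypothetical training set.
boxes = np.array([[10., 30.], [12., 28.], [50., 20.], [55., 22.], [100., 15.]])
anchors = kmeans_anchors(boxes, k=3)
assert anchors.shape == (3, 2)
```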
Step 402: bilinear sampling;
Bilinear sampling (Bilinear Sampling) aims to resample the feature maps of the region proposals produced by text detection, which have different sizes, into feature maps of a fixed, consistent height. The fixed height is required because the feature maps are fed into the recognition RNN; the width remains variable so that the features are not excessively deformed during resampling.
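The fixed-height, proportional-width resampling can be sketched with a plain bilinear interpolation in numpy (an illustration on single-channel maps; the actual embodiment operates on multi-channel network features):

```python
import numpy as np

def bilinear_resample(feat, target_h):
    """Resample a (H, W) feature map to a fixed height, scaling the width
    proportionally so the content is not excessively deformed."""
    h, w = feat.shape
    target_w = max(1, round(w * target_h / h))
    ys = np.linspace(0, h - 1, target_h)
    xs = np.linspace(0, w - 1, target_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Interpolate along x on the two neighboring rows, then along y.
    top = feat[y0][:, x0] * (1 - wx) + feat[y0][:, x1] * wx
    bot = feat[y1][:, x0] * (1 - wx) + feat[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# Region proposals of different sizes all map to height 8; width varies.
a = bilinear_resample(np.random.rand(16, 64), target_h=8)
b = bilinear_resample(np.random.rand(32, 64), target_h=8)
assert a.shape == (8, 32) and b.shape == (8, 16)
```

The variable output width is what lets the subsequent RNN consume the map as a sequence of per-column feature vectors.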
Step 403, text recognition;
The feature map obtained in step 402 is decoded by the RNN and CTC to produce the OCR-recognized text. In the embodiment of the present invention, the RNN structure adopts a convolution + pooling + recurrent convolution + Batch Norm arrangement, followed by a final Softmax, and the loss function is the CTC loss.
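At inference time, CTC turns the RNN's per-frame predictions into a character string by merging repeated labels and dropping blanks. A minimal greedy-decoding sketch in pure Python (independent of any particular network; label 0 is taken here as the CTC blank, an assumption for illustration):

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse a per-frame best-path label sequence the CTC way:
    merge consecutive repeats, then drop the blank symbol."""
    out, prev = [], None
    for lbl in frame_labels:
        # A label is emitted only when it differs from the previous frame
        # and is not the blank; blanks reset the repeat detection.
        if lbl != prev and lbl != blank:
            out.append(lbl)
        prev = lbl
    return out

# Frames [1,1,0,1,2,2,0]: repeats merge to 1,0,1,2,0; blanks drop out.
assert ctc_greedy_decode([1, 1, 0, 1, 2, 2, 0]) == [1, 1, 2]
```

Note how the blank between the two 1s is what allows the decoded string to contain a genuine double character.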
In this embodiment, the distances between the TF-IDF vector of the tongue feature text description of the current complete tongue image and the tongue image description feature vectors corresponding to the various digestive tract diseases are calculated; the digestive tract disease types are sorted in ascending order of distance, and the top k diseases are screened out and recommended. The judgment is not limited by time or place: the digestive tract disease type is determined automatically from the correspondence between the tongue features in the tongue image and the various digestive tract diseases, which improves the accuracy of judging the disease type.
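The TF-IDF ranking step above can be sketched end to end in pure Python. The disease names and tongue-feature descriptions below are hypothetical placeholders, and the plain Euclidean distance and log TF-IDF weighting are illustrative assumptions, not the patented formulas:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Bag-of-words TF-IDF vectors over a vocabulary shared by all documents."""
    tokenized = [d.split() for d in docs]
    vocab = sorted({w for toks in tokenized for w in toks})
    n = len(docs)
    df = Counter(w for toks in tokenized for w in set(toks))
    return [[Counter(toks)[w] / len(toks) * math.log(n / df[w]) for w in vocab]
            for toks in tokenized]

def recommend(query_vec, disease_vecs, k):
    """Sort diseases by ascending Euclidean distance to the query, keep top k."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(disease_vecs, key=lambda item: dist(query_vec, item[1]))
    return [name for name, _ in ranked[:k]]

# Hypothetical tongue-feature descriptions per disease, plus a query image's
# generated description.
corpus = {
    "chronic gastritis": "thick yellow coating red tongue",
    "gastric ulcer": "thin white coating pale tongue",
    "reflux esophagitis": "red tongue little coating",
}
query = "red tongue thick yellow coating"
vecs = tfidf_vectors(list(corpus.values()) + [query])
top = recommend(vecs[-1], list(zip(corpus.keys(), vecs[:-1])), k=2)
assert top[0] == "chronic gastritis"
```

Because the query shares every weighted term with the first description, its distance to that disease vector is smallest and it is recommended first.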
Example two
The present embodiment provides a computer-readable storage medium, on which a computer program is stored which, when executed by a processor, performs the steps of:
receiving a complete tongue image;
extracting tongue features of the complete tongue image and generating a tongue feature text description; generating a bag of words of the current tongue feature text description based on the loaded corpus, and calculating the TF-IDF vector of the tongue feature text description corresponding to the tongue image;
calculating the distances between the TF-IDF vector of the tongue feature text description of the current complete tongue image and the tongue image description feature vectors corresponding to the various digestive tract diseases, sorting the digestive tract disease types in ascending order of distance, and screening out and recommending the top k digestive tract diseases, where k is a positive integer greater than or equal to 1.
In specific implementation, the calculation process of tongue image description feature vectors corresponding to various digestive tract diseases is as follows:
obtaining tongue feature text descriptions corresponding to the various digestive tract diseases; generating bags of words of these tongue feature text descriptions based on the loaded corpus; calculating the TF-IDF vectors corresponding to all the tongue feature text descriptions to generate a document vector matrix; and clustering the document vector matrix after dimensionality reduction to obtain the tongue image description feature vectors corresponding to the various digestive tract diseases.
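The "dimensionality reduction then clustering" step on the document vector matrix can be sketched with a truncated SVD (LSA-style) followed by a plain k-means, here in numpy. The toy matrix and cluster count are hypothetical, and the concrete reduction/clustering algorithms are assumptions since the patent does not name them:

```python
import numpy as np

def reduce_and_cluster(doc_matrix, dims, k, iters=50, seed=0):
    """Project the document-vector matrix onto its top singular directions
    (LSA-style reduction), then run a simple k-means on the reduced vectors."""
    # Dimensionality reduction via truncated SVD of the centered matrix.
    u, s, _ = np.linalg.svd(doc_matrix - doc_matrix.mean(axis=0),
                            full_matrices=False)
    reduced = u[:, :dims] * s[:dims]
    # Plain k-means with Euclidean distance.
    rng = np.random.default_rng(seed)
    centers = reduced[rng.choice(len(reduced), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((reduced[:, None] - centers[None]) ** 2).sum(-1),
                           axis=1)
        centers = np.array([reduced[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

# Hypothetical TF-IDF document matrix with two obvious groups of reports.
m = np.array([[1., 0., 0.], [0.9, 0.1, 0.], [0., 0., 1.], [0., 0.1, 0.9]])
labels, centers = reduce_and_cluster(m, dims=2, k=2)
assert labels[0] == labels[1] and labels[2] == labels[3]
```

The resulting cluster centers play the role of the per-disease tongue image description feature vectors.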
The process of obtaining tongue characteristic text descriptions corresponding to various digestive tract diseases comprises the following steps:
retrieving digestive tract disease endoscopic diagnosis reports from a database, wherein each digestive tract disease endoscopic diagnosis report is associated with a tongue characteristic text description;
obtaining the digestive tract disease type corresponding to each endoscopic diagnosis report by using the diagnosis-report text classification model, and thereby obtaining the tongue feature text descriptions corresponding to the various digestive tract diseases.
In this embodiment, the distances between the TF-IDF vector of the tongue feature text description of the current complete tongue image and the tongue image description feature vectors corresponding to the various digestive tract diseases are calculated; the digestive tract disease types are sorted in ascending order of distance, and the top k diseases are screened out and recommended. The judgment is not limited by time or place: the digestive tract disease type is determined automatically from the correspondence between the tongue features in the tongue image and the various digestive tract diseases, which improves the accuracy of judging the disease type.
EXAMPLE III
The embodiment provides a computer device, which includes a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the following steps:
receiving a complete tongue image;
extracting tongue features of the complete tongue image and generating a tongue feature text description; generating a bag of words of the current tongue feature text description based on the loaded corpus, and calculating the TF-IDF vector of the tongue feature text description corresponding to the tongue image;
calculating the distances between the TF-IDF vector of the tongue feature text description of the current complete tongue image and the tongue image description feature vectors corresponding to the various digestive tract diseases, sorting the digestive tract disease types in ascending order of distance, and screening out and recommending the top k digestive tract diseases, where k is a positive integer greater than or equal to 1.
In specific implementation, the calculation process of tongue image description feature vectors corresponding to various digestive tract diseases is as follows:
obtaining tongue feature text descriptions corresponding to the various digestive tract diseases; generating bags of words of these tongue feature text descriptions based on the loaded corpus; calculating the TF-IDF vectors corresponding to all the tongue feature text descriptions to generate a document vector matrix; and clustering the document vector matrix after dimensionality reduction to obtain the tongue image description feature vectors corresponding to the various digestive tract diseases.
The process of obtaining tongue characteristic text descriptions corresponding to various digestive tract diseases comprises the following steps:
retrieving digestive tract disease endoscopic diagnosis reports from a database, wherein each digestive tract disease endoscopic diagnosis report is associated with a tongue characteristic text description;
obtaining the digestive tract disease type corresponding to each endoscopic diagnosis report by using the diagnosis-report text classification model, and thereby obtaining the tongue feature text descriptions corresponding to the various digestive tract diseases.
In this embodiment, the distances between the TF-IDF vector of the tongue feature text description of the current complete tongue image and the tongue image description feature vectors corresponding to the various digestive tract diseases are calculated; the digestive tract disease types are sorted in ascending order of distance, and the top k diseases are screened out and recommended. The judgment is not limited by time or place: the digestive tract disease type is determined automatically from the correspondence between the tongue features in the tongue image and the various digestive tract diseases, which improves the accuracy of judging the disease type.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. A digestive tract disease judgment assistance system based on a tongue image, comprising:
a tongue image acquisition section for acquiring a complete tongue image;
the tongue feature processing part is used for extracting tongue features of the complete tongue image and generating a tongue feature text description, generating a bag of words of the current tongue feature text description based on the loaded corpus, and calculating the TF-IDF vector of the tongue feature text description corresponding to the tongue image; the tongue features include papillae, tongue color, tongue shape, coating, texture, and the color and type of the coating;
the digestive tract disease judging part is used for calculating the distance between a TF-IDF vector described by a tongue feature text corresponding to the current complete tongue image and tongue image description feature vectors corresponding to various digestive tract diseases, sorting the types of the digestive tract diseases according to the ascending order of the distance, screening the top k digestive tract diseases and recommending the digestive tract diseases, wherein k is a positive integer greater than or equal to 1;
in the digestive tract disease judging part, the calculation process of the tongue image description feature vectors corresponding to various digestive tract diseases is as follows:
retrieving digestive tract disease endoscopic diagnosis reports from a database, wherein each digestive tract disease endoscopic diagnosis report is associated with a tongue feature text description; obtaining the digestive tract disease type corresponding to each endoscopic diagnosis report by using the diagnosis-report text classification model, and thereby obtaining the tongue feature text descriptions corresponding to the various digestive tract diseases; generating bags of words of these tongue feature text descriptions based on the loaded corpus, calculating the TF-IDF vectors corresponding to all the tongue feature text descriptions to generate a document vector matrix, and clustering the document vector matrix after dimensionality reduction to obtain the tongue image description feature vectors corresponding to the various digestive tract diseases;
in the tongue image acquisition part, detecting tongue key points in the tongue image by adopting a soft-MTCNN network model so as to ensure the integrity of the tongue image;
the soft-MTCNN network model is composed of a PNet network, an RNet network and an ONet network connected in series; the input of the PNet network is the tongue image, and its output is the probability that the image contains a tongue together with all possible tongue bounding boxes; the input of the RNet network is the output of the PNet network, and its output is the bounding boxes that actually contain a tongue; the input of the ONet network is the output of the RNet network, and its output is the tongue key points and their position information;
the model that generates the tongue feature text description from the tongue image is constructed based on deep learning, and the process is as follows: after a Resize operation, the tongue image passes through EfficientNet-B7 for feature extraction; the extracted features, after a flattening operation and a linear operation, serve as the input of a multi-layer LSTM, which outputs the text description of the tongue image features.
2. The tongue image-based digestive tract disease assessment support system according to claim 1, wherein the digestive tract disease endoscopic diagnosis report is stored in the database in the form of an image.
3. The tongue image-based digestive tract disease assessment support system according to claim 1, wherein the number of the tongue key points is five, each of which is located on the tongue contour line, and the five tongue key points are composed of two tongue root key points, two tongue waist key points and one tongue tip key point.
4. A computer-readable storage medium, on which a computer program is stored, which program, when executed by a processor, carries out the steps of:
receiving a complete tongue image;
extracting tongue features of the complete tongue image and generating a tongue feature text description; generating a bag of words of the current tongue feature text description based on the loaded corpus, and calculating the TF-IDF vector of the tongue feature text description corresponding to the tongue image; the tongue features include papillae, tongue color, tongue shape, coating, texture, and the color and type of the coating;
calculating the distance between a TF-IDF vector described by a tongue feature text corresponding to the current complete tongue image and tongue image description feature vectors corresponding to various digestive tract diseases, sorting the types of the digestive tract diseases according to the ascending order of the distance, screening and recommending the top k digestive tract diseases, wherein k is a positive integer greater than or equal to 1;
the calculation process of tongue image description feature vectors corresponding to various digestive tract diseases comprises the following steps:
retrieving digestive tract disease endoscopic diagnosis reports from a database, wherein each digestive tract disease endoscopic diagnosis report is associated with a tongue feature text description; obtaining the digestive tract disease type corresponding to each endoscopic diagnosis report by using the diagnosis-report text classification model, and thereby obtaining the tongue feature text descriptions corresponding to the various digestive tract diseases; generating bags of words of these tongue feature text descriptions based on the loaded corpus, calculating the TF-IDF vectors corresponding to all the tongue feature text descriptions to generate a document vector matrix, and clustering the document vector matrix after dimensionality reduction to obtain the tongue image description feature vectors corresponding to the various digestive tract diseases;
detecting tongue key points in the tongue image by adopting a soft-MTCNN network model to ensure the integrity of the tongue image;
the soft-MTCNN network model is composed of a PNet network, an RNet network and an ONet network connected in series; the input of the PNet network is the tongue image, and its output is the probability that the image contains a tongue together with all possible tongue bounding boxes; the input of the RNet network is the output of the PNet network, and its output is the bounding boxes that actually contain a tongue; the input of the ONet network is the output of the RNet network, and its output is the tongue key points and their position information;
the model that generates the tongue feature text description from the tongue image is constructed based on deep learning, and the process is as follows: after a Resize operation, the tongue image passes through EfficientNet-B7 for feature extraction; the extracted features, after a flattening operation and a linear operation, serve as the input of a multi-layer LSTM, which outputs the text description of the tongue image features.
5. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program performs the steps of:
receiving a complete tongue image;
extracting tongue features of the complete tongue image and generating a tongue feature text description; generating a bag of words of the current tongue feature text description based on the loaded corpus, and calculating the TF-IDF vector of the tongue feature text description corresponding to the tongue image; the tongue features include papillae, tongue color, tongue shape, coating, texture, and the color and type of the coating;
calculating the distance between a TF-IDF vector described by a tongue feature text corresponding to the current complete tongue image and tongue image description feature vectors corresponding to various digestive tract diseases, sorting the types of the digestive tract diseases according to the ascending order of the distance, screening and recommending the top k digestive tract diseases, wherein k is a positive integer greater than or equal to 1;
the calculation process of tongue image description feature vectors corresponding to various digestive tract diseases comprises the following steps:
retrieving digestive tract disease endoscopic diagnosis reports from a database, wherein each digestive tract disease endoscopic diagnosis report is associated with a tongue feature text description; obtaining the digestive tract disease type corresponding to each endoscopic diagnosis report by using the diagnosis-report text classification model, and thereby obtaining the tongue feature text descriptions corresponding to the various digestive tract diseases; generating bags of words of these tongue feature text descriptions based on the loaded corpus, calculating the TF-IDF vectors corresponding to all the tongue feature text descriptions to generate a document vector matrix, and clustering the document vector matrix after dimensionality reduction to obtain the tongue image description feature vectors corresponding to the various digestive tract diseases;
detecting tongue key points in the tongue image by adopting a soft-MTCNN network model to ensure the integrity of the tongue image;
the soft-MTCNN network model is composed of a PNet network, an RNet network and an ONet network connected in series; the input of the PNet network is the tongue image, and its output is the probability that the image contains a tongue together with all possible tongue bounding boxes; the input of the RNet network is the output of the PNet network, and its output is the bounding boxes that actually contain a tongue; the input of the ONet network is the output of the RNet network, and its output is the tongue key points and their position information;
the model that generates the tongue feature text description from the tongue image is constructed based on deep learning, and the process is as follows: after a Resize operation, the tongue image passes through EfficientNet-B7 for feature extraction; the extracted features, after a flattening operation and a linear operation, serve as the input of a multi-layer LSTM, which outputs the text description of the tongue image features.
CN202010108365.1A 2020-02-21 2020-02-21 Digestive tract disease judgment auxiliary system based on tongue image Active CN111341437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010108365.1A CN111341437B (en) 2020-02-21 2020-02-21 Digestive tract disease judgment auxiliary system based on tongue image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010108365.1A CN111341437B (en) 2020-02-21 2020-02-21 Digestive tract disease judgment auxiliary system based on tongue image

Publications (2)

Publication Number Publication Date
CN111341437A CN111341437A (en) 2020-06-26
CN111341437B true CN111341437B (en) 2022-02-11

Family

ID=71185344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010108365.1A Active CN111341437B (en) 2020-02-21 2020-02-21 Digestive tract disease judgment auxiliary system based on tongue image

Country Status (1)

Country Link
CN (1) CN111341437B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112002415B (en) * 2020-08-23 2024-03-01 吾征智能技术(北京)有限公司 Intelligent cognitive disease system based on human excrement
CN112786201A (en) * 2021-01-24 2021-05-11 武汉东湖大数据交易中心股份有限公司 Hand form cognition-based health prediction model construction method and device
CN112949168A (en) * 2021-02-04 2021-06-11 复旦大学附属中山医院 Method for establishing real-time position positioning model of upper digestive tract under endoscope
CN113241184B (en) * 2021-06-24 2022-07-29 华侨大学 Auxiliary diagnosis model for children pneumonia and training method thereof
CN116784827B (en) * 2023-02-14 2024-02-06 安徽省儿童医院 Digestive tract ulcer depth and area measuring and calculating method based on endoscope

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451278A (en) * 2017-08-07 2017-12-08 北京工业大学 Chinese Text Categorization based on more hidden layer extreme learning machines
CN107463786A (en) * 2017-08-17 2017-12-12 王卫鹏 Medical image Knowledge Base based on structured report template
CN108647203A (en) * 2018-04-20 2018-10-12 浙江大学 A kind of computational methods of Chinese medicine state of an illness text similarity
CN108763576A (en) * 2018-05-28 2018-11-06 大连理工大学 A kind of parallel k-means algorithms for higher-dimension text data
CN108986907A (en) * 2018-07-24 2018-12-11 郑州大学第附属医院 A kind of tele-medicine based on KNN algorithm divides the method for examining automatically
CN108986912A (en) * 2018-07-12 2018-12-11 北京三医智慧科技有限公司 Chinese medicine stomach trouble tongue based on deep learning is as information intelligent processing method
CN109299239A (en) * 2018-09-29 2019-02-01 福建弘扬软件股份有限公司 ES-based electronic medical record retrieval method
CN109977422A (en) * 2019-04-18 2019-07-05 中国石油大学(华东) A kind of case history key message extraction model based on participle technique
CN110619319A (en) * 2019-09-27 2019-12-27 北京紫睛科技有限公司 Improved MTCNN model-based face detection method and system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107247774A (en) * 2017-06-08 2017-10-13 西北工业大学 A kind of processing method and system towards gunz multi-modal data
CN107392238B (en) * 2017-07-12 2021-05-04 华中师范大学 Outdoor plant knowledge expansion learning system based on mobile visual search
CN107633156B (en) * 2017-10-13 2018-09-18 合肥工业大学 Endoscopy intelligent decision support system for minimally-invasive treatment
CN108171243B (en) * 2017-12-18 2021-07-30 广州七乐康药业连锁有限公司 Medical image information identification method and system based on deep neural network
CN108710690A (en) * 2018-05-22 2018-10-26 长春师范大学 Medical image search method based on geometric verification
US10678830B2 (en) * 2018-05-31 2020-06-09 Fmr Llc Automated computer text classification and routing using artificial intelligence transfer learning
CN109637669B (en) * 2018-11-22 2023-07-18 中山大学 Deep learning-based treatment scheme generation method, device and storage medium
CN110399798B (en) * 2019-06-25 2021-07-20 朱跃飞 Discrete picture file information extraction system and method based on deep learning


Also Published As

Publication number Publication date
CN111341437A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN111341437B (en) Digestive tract disease judgment auxiliary system based on tongue image
CN108364006B (en) Medical image classification device based on multi-mode deep learning and construction method thereof
CN110600122B (en) Digestive tract image processing method and device and medical system
Guo et al. Classification of thyroid ultrasound standard plane images using ResNet-18 networks
CN109858540B (en) Medical image recognition system and method based on multi-mode fusion
CN110335241B (en) Method for automatically scoring intestinal tract preparation after enteroscopy
CN111275118B (en) Chest film multi-label classification method based on self-correction type label generation network
CN117274270B (en) Digestive endoscope real-time auxiliary system and method based on artificial intelligence
CN110796670A (en) Dissection method and device for dissecting artery
CN111430025B (en) Disease diagnosis model training method based on medical image data augmentation
CN115578783B (en) Device and method for identifying eye diseases based on eye images and related products
Sun et al. A novel gastric ulcer differentiation system using convolutional neural networks
CN113763360A (en) Digestive endoscopy simulator inspection quality assessment method and system
CN111462082A (en) Focus picture recognition device, method and equipment and readable storage medium
CN111798408A (en) Endoscope interference image detection and grading system and method
CN117218127B (en) Ultrasonic endoscope auxiliary monitoring system and method
CN112863699B (en) ESD preoperative discussion system based on mobile terminal
CN110491519A (en) A kind of method of inspection of medical data
CN109711306B (en) Method and equipment for obtaining facial features based on deep convolutional neural network
CN112668668B (en) Postoperative medical image evaluation method and device, computer equipment and storage medium
CN115170492A (en) Intelligent prediction and evaluation system for postoperative vision of cataract patient based on AI (artificial intelligence) technology
CN114612381A (en) Medical image focus detection algorithm with scale enhancement and attention fusion
CN113256625A (en) Electronic equipment and recognition device
Wen et al. FLeak-Seg: Automated fundus fluorescein leakage segmentation via cross-modal attention learning
Obukhova et al. Two-stage method for polyps segmentation in endoscopic images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zuo Xiuli

Inventor after: Zhou Jiawei

Inventor after: Feng Jian

Inventor after: Li Yanqing

Inventor after: Li Zhen

Inventor after: Shao Xuejun

Inventor after: Ji Rui

Inventor after: Yang Xiaoyun

Inventor before: Zuo Xiuli

Inventor before: Zhou Jiawei

Inventor before: Feng Jian

Inventor before: Li Yanqing

Inventor before: Li Zhen

Inventor before: Shao Xuejun

Inventor before: Ji Rui

Inventor before: Yang Xiaoyun

GR01 Patent grant