CN113705685A - Disease feature recognition model training method, disease feature recognition device and disease feature recognition equipment - Google Patents


Info

Publication number
CN113705685A
CN113705685A
Authority
CN
China
Prior art keywords
feature
disease
prediction
local
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111003735.6A
Other languages
Chinese (zh)
Other versions
CN113705685B (English)
Inventor
刘海伦 (Liu Hailun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN202111003735.6A
Publication of CN113705685A
Application granted
Publication of CN113705685B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of artificial intelligence and is applied to the field of intelligent medical treatment in order to promote the construction of smart cities. Disclosed are a disease feature recognition model training method, a disease feature recognition method, an apparatus, computer equipment, and a storage medium. In the method, a sample face image is input into a preset recognition model to obtain a predicted global feature label, a predicted local feature label, and a predicted supervised feature label. A total loss value of the preset recognition model is determined from the determined first, second, and third prediction loss values together with the acquired first, second, and third prediction weights. When the total loss value does not reach a preset convergence condition, the initial parameters of the preset recognition model are updated iteratively until the total loss value reaches the convergence condition, and the converged preset recognition model is recorded as the disease feature recognition model. The invention improves the efficiency and accuracy of model training and the accuracy of feature recognition.

Description

Disease feature recognition model training method, disease feature recognition device and disease feature recognition equipment
Technical Field
The invention relates to the technical field of classification models, and in particular to a disease feature recognition model training method, a disease feature recognition method and device, computer equipment, and a storage medium.
Background
With advances in science and technology, medical technology has improved, and different medical symptoms can be examined with different medical instruments. At present, some diseases such as hyperthyroidism and Down syndrome present obvious signs on the body (for example, on the face, neck, or skin), so these signs can provide early warning to a user and prevent the disease from worsening.
In the prior art, whether such signs appear is usually checked manually, but this approach has drawbacks: feature recognition efficiency is low, and recognition must be performed by professional personnel, otherwise recognition errors easily occur; labor costs are high, which makes large-scale feature identification and detection impractical.
Disclosure of Invention
Embodiments of the invention provide a disease feature recognition model training method, a disease feature recognition method and device, computer equipment, and a storage medium, aiming to solve the problems of low feature recognition efficiency and high error rate.
A disease feature recognition model training method comprises the following steps:
acquiring a preset sample face data set, wherein the preset sample face data set comprises at least one sample face image, and each sample face image is associated with a target disease feature label;
inputting the sample face image into a preset identification model containing initial parameters, and carrying out disease feature identification on the sample face image through the preset identification model to obtain a predicted global feature tag, a predicted local feature tag and a predicted supervision feature tag corresponding to the sample face image;
determining a first prediction loss value of the preset identification model according to the prediction global feature label and the target disease feature label; determining a second prediction loss value of the preset identification model according to the predicted local feature tag and the target disease feature tag; determining a third prediction loss value of the preset recognition model according to the prediction supervision characteristic label and the target disease characteristic label;
acquiring a first prediction weight corresponding to the prediction global feature tag, a second prediction weight corresponding to the prediction local feature tag and a third prediction weight corresponding to the prediction supervision feature tag;
determining a total loss value of the preset identification model according to the first prediction loss value, the first prediction weight, the second prediction loss value, the second prediction weight, the third prediction loss value and the third prediction weight;
and when the total loss value does not reach a preset convergence condition, iteratively updating the initial parameters in the preset identification model until the total loss value reaches the convergence condition, and recording the preset identification model after convergence as a disease feature identification model.
A method of disease feature identification, comprising:
acquiring an image to be identified;
inputting the image to be recognized into a disease feature recognition model, and performing disease feature recognition on the image to be recognized through the disease feature recognition model to obtain a global disease classification result, a local disease classification result and a supervised disease classification result corresponding to the image to be recognized; the disease feature recognition model is obtained according to the disease feature recognition model training method;
and determining a disease feature identification result corresponding to the image to be identified according to the global disease classification result, the local disease classification result and the supervised disease classification result.
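The patent leaves the fusion rule for the three classification results unspecified. As an illustration only, a weighted combination in the spirit of the embodiment's prediction weights (where the first two weights sum to 1 and the supervision weight is 0.1) might look like the following sketch; the weights, threshold, and function names are assumptions, not taken from the patent:

```python
def combine_results(global_p, local_p, supervised_p,
                    w_global=0.6, w_local=0.4, w_supervised=0.1,
                    threshold=0.5):
    """Fuse the three branch probabilities into one disease feature result.

    Hypothetical fusion rule: the patent only states that the final
    result is determined from the global, local, and supervised disease
    classification results.
    """
    score = (w_global * global_p + w_local * local_p
             + w_supervised * supervised_p) / (w_global + w_local + w_supervised)
    return "disease feature present" if score >= threshold else "no disease feature"
```

A normalized weighted average is just one plausible choice; a majority vote or taking the global result with local/supervised tie-breaking would fit the text equally well.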
A disease feature recognition model training device comprises:
the sample face image acquisition module is used for acquiring a preset sample face data set, wherein the preset sample face data set comprises at least one sample face image, and each sample face image is associated with a target disease feature label;
the disease feature prediction module is used for inputting the sample face image into a preset recognition model containing initial parameters so as to perform disease feature recognition on the sample face image through the preset recognition model to obtain a predicted global feature tag, a predicted local feature tag and a predicted supervision feature tag corresponding to the sample face image;
a loss value determination module, configured to determine a first predicted loss value of the preset identification model according to the predicted global feature tag and the target disease feature tag; determining a second prediction loss value of the preset identification model according to the predicted local feature tag and the target disease feature tag; determining a third prediction loss value of the preset recognition model according to the prediction supervision characteristic label and the target disease characteristic label;
the prediction weight acquisition module is used for acquiring a first prediction weight corresponding to the prediction global feature tag, a second prediction weight corresponding to the prediction local feature tag and a third prediction weight corresponding to the prediction supervision feature tag;
a total loss value obtaining module, configured to determine a total loss value of the preset identification model according to the first prediction loss value, the first prediction weight, the second prediction loss value, the second prediction weight, the third prediction loss value, and the third prediction weight;
and the recognition model training module is used for iteratively updating the initial parameters in the preset recognition model when the total loss value does not reach a preset convergence condition, until the total loss value reaches the convergence condition, and for recording the converged preset recognition model as a disease feature recognition model.
A disease feature identification apparatus comprising:
the image to be recognized acquisition module is used for acquiring an image to be recognized;
the disease feature identification module is used for inputting the image to be identified into a disease feature identification model so as to perform disease feature identification on the image to be identified through the disease feature identification model, and obtain a global disease classification result, a local disease classification result and a supervision disease classification result corresponding to the image to be identified; the disease feature recognition model is obtained according to the disease feature recognition model training method;
and the recognition result determining module is used for determining a disease feature recognition result corresponding to the image to be recognized according to the global disease classification result, the local disease classification result and the supervised disease classification result.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the above-mentioned disease feature recognition model training method when executing the computer program, or the processor implementing the above-mentioned disease feature recognition method when executing the computer program.
A computer-readable storage medium, which stores a computer program that when executed by a processor implements the above-described disease feature recognition model training method, or that when executed by a processor implements the above-described disease feature recognition method.
In the method, three feature recognition and discrimination networks (corresponding to the predicted global feature label, the predicted local feature label, and the predicted supervised feature label) are arranged in the preset recognition model. The generated predicted local feature label overcomes the low accuracy of training with the predicted global feature label alone and better attends to specific feature information (such as eye feature information in hyperthyroidism). Furthermore, a predicted supervised feature label is introduced to supervise the predicted local feature label, improving its accuracy and thereby the efficiency and accuracy of model training, so that the accuracy of subsequent disease feature recognition on a face image by the trained disease feature recognition model is significantly improved.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without inventive labor.
FIG. 1 is a schematic diagram of an application environment of a disease feature recognition model training method or a disease feature recognition method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a disease feature recognition model training method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a disease feature recognition model training method according to an embodiment of the present invention;
FIG. 4 is a flowchart of a disease feature identification method according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a training apparatus for a disease feature recognition model according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of a disease signature recognition apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a computer device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The disease feature recognition model training method provided by the embodiment of the invention can be applied in the application environment shown in Fig. 1. Specifically, the method is applied in a disease feature recognition model training system comprising the client and server shown in Fig. 1, which communicate over a network, and which is used to solve the problems of low feature recognition efficiency and high error rate. The client, also called the user side, is a program that corresponds to the server and provides local services to the user. The client may be installed on, but is not limited to, personal computers, laptops, smartphones, tablets, and portable wearable devices. The server may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In an embodiment, as shown in Fig. 2, a disease feature recognition model training method is provided. Taking its application to the server in Fig. 1 as an example, the method includes the following steps:
s10: acquiring a preset sample face data set; the preset sample face data set comprises at least one sample face image; one of the sample face images is associated with a target disease feature label.
It is understood that the preset sample face data set can be obtained by crawling different websites or a face image database using crawler technology. The sample face images are face images of different individuals, or face images of the same individual at different times (such as a normal face image and a pathological face image). The target disease feature label represents the disease feature of its associated sample face image; for example, it may be a hyperthyroidism disease feature label, an eye blindness disease feature label, a leprosy disease feature label, or a no-disease feature label (i.e., a normal face label). The target disease feature label can be obtained in advance by manual annotation, for example by a doctor.
S20: inputting the sample face image into a preset identification model containing initial parameters, and carrying out disease feature identification on the sample face image through the preset identification model to obtain a predicted global feature tag, a predicted local feature tag and a predicted supervision feature tag corresponding to the sample face image.
It can be understood that the preset recognition model performs disease feature recognition on the sample face image and comprises three feature recognition and discrimination networks: a global convolutional network, a local convolutional network, and a segmented pooling network. The global convolutional network extracts all feature information in the sample face image (such as eye feature information, cheek feature information, and lip feature information) and generates the predicted global feature label from it. The local convolutional network extracts specific feature information in the sample face image (for example, when judging whether the image contains hyperthyroidism features, it extracts the eye feature information) and generates the predicted local feature label from it. The segmented pooling network supervises the specific feature information extracted by the local convolutional network, so that the predicted local feature label is supervised through the generated predicted supervised feature label, which further improves the accuracy of the local convolutional network's feature extraction and label generation.
Further, the predicted global feature label is generated by extracting all feature information of the sample face image through the global convolutional network, and it represents the disease feature of the sample face image (for example, when judging whether the sample face image contains hyperthyroidism features, the predicted global feature label may indicate either that it does or that it does not). Similarly, the predicted local feature label is generated by extracting specific feature information of the sample face image through the local convolutional network and also represents the disease feature of the sample face image; the predicted supervised feature label is generated by supervising the specific feature information extracted by the local convolutional network and likewise represents the disease feature of the sample face image.
S30: determining a first prediction loss value of the preset identification model according to the prediction global feature label and the target disease feature label; determining a second prediction loss value of the preset identification model according to the predicted local feature tag and the target disease feature tag; and determining a third prediction loss value of the preset recognition model according to the prediction supervision characteristic label and the target disease characteristic label.
Since the target disease feature label is obtained by manual annotation in advance, while the preset recognition model still needs to be trained on the sample face images, the predicted global feature label, predicted local feature label, and predicted supervised feature label output by the preset recognition model may be inaccurate. Therefore, a first prediction loss value of the preset recognition model is determined from the predicted global feature label and the target disease feature label; a second prediction loss value is determined from the predicted local feature label and the target disease feature label; and a third prediction loss value is determined from the predicted supervised feature label and the target disease feature label.
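The patent does not name the loss function used for the three prediction loss values; binary cross-entropy is a common choice for a present/absent disease feature label, and a minimal sketch under that assumption is:

```python
import math

def binary_cross_entropy(predicted_prob, target):
    """Loss between one branch's predicted label probability and the target
    disease feature label (1 = disease feature present, 0 = absent).

    Binary cross-entropy is an illustrative assumption; the patent does
    not specify the loss function.
    """
    eps = 1e-12  # clamp to avoid log(0)
    p = min(max(predicted_prob, eps), 1 - eps)
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

# One loss value per branch, each computed against the same target label:
target = 1
first_loss = binary_cross_entropy(0.8, target)   # predicted global feature label
second_loss = binary_cross_entropy(0.6, target)  # predicted local feature label
third_loss = binary_cross_entropy(0.7, target)   # predicted supervised feature label
```

The branch whose probability is closest to the target label receives the smallest loss, which is what drives each of the three networks toward the manual annotation.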
S40: and acquiring a first prediction weight corresponding to the prediction global feature tag, a second prediction weight corresponding to the prediction local feature tag and a third prediction weight corresponding to the prediction supervision feature tag.
Optionally, in this embodiment, the sum of the first prediction weight and the second prediction weight is 1, and the third prediction weight is set to 0.1. The first, second, and third prediction weights are pre-assigned to the corresponding labels, so that the weights of the predicted global feature label, the predicted local feature label, and the predicted supervised feature label differ, which can improve the accuracy of model training.
S50: and determining the total loss value of the preset identification model according to the first prediction loss value, the first prediction weight, the second prediction loss value, the second prediction weight, the third prediction loss value and the third prediction weight.
Specifically, after a first prediction weight corresponding to the predicted global feature label, a second prediction weight corresponding to the predicted local feature label, and a third prediction weight corresponding to the predicted supervised feature label are acquired, the product of the first prediction loss value and the first prediction weight is determined as a global loss value; determining a product of the second prediction loss value and the second prediction weight as a local loss value; determining a product of the third prediction loss value and the third prediction weight as a supervised loss value; recording the sum of the global loss value, the local loss value, and the supervisory loss value as the total loss value.
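The products and sum described in step S50 can be sketched directly (the numeric loss and weight values are illustrative only):

```python
def total_loss(first_loss, first_weight, second_loss, second_weight,
               third_loss, third_weight):
    """Total loss of the preset recognition model as the weighted sum
    of the three branch losses, following step S50."""
    global_loss = first_loss * first_weight       # global loss value
    local_loss = second_loss * second_weight      # local loss value
    supervised_loss = third_loss * third_weight   # supervisory loss value
    return global_loss + local_loss + supervised_loss

# With the embodiment's weight scheme (w1 + w2 = 1, w3 = 0.1):
loss = total_loss(0.4, 0.6, 0.5, 0.4, 0.3, 0.1)  # 0.24 + 0.20 + 0.03
```

Because the third weight (0.1) is small relative to the other two, the supervision branch nudges training without dominating the global and local objectives.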
S60: and when the total loss value does not reach a preset convergence condition, iteratively updating the initial parameters in the preset identification model until the total loss value reaches the convergence condition, and recording the preset identification model after convergence as a disease feature identification model.
It is understood that the convergence condition may be that the total loss value falls below a set threshold, i.e., training stops when the total loss value is smaller than the set threshold. Alternatively, the convergence condition may be that the total loss value is small and no longer decreases after 10,000 iterations; when this holds, training stops and the converged preset recognition model is recorded as the disease feature recognition model.
Further, after the total loss value of the preset recognition model is determined from the first prediction loss value, the first prediction weight, the second prediction loss value, the second prediction weight, the third prediction loss value, and the third prediction weight, if the total loss value does not reach the preset convergence condition, the initial parameters of the preset recognition model are adjusted according to the total loss value, and the sample face image is input again into the adjusted model. Once the total loss value for that sample face image reaches the preset convergence condition, another sample face image is selected from the preset sample face data set, and steps S20 to S50 are executed to obtain its total loss value; if that total loss value does not reach the preset convergence condition, the initial parameters are adjusted again according to it, until the total loss value for that sample face image also reaches the preset convergence condition.
In this way, after the preset recognition model has been trained on all sample face images in the preset sample face data set, its output is continuously drawn toward the accurate result and recognition accuracy grows higher and higher; when the total loss values of all sample face images reach the preset convergence condition, the converged preset recognition model is recorded as the disease feature recognition model.
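The iterative update of step S60 with the two convergence conditions described above can be sketched as follows. The functions compute_total_loss and update_parameters are hypothetical stand-ins for the preset recognition model's forward pass and parameter adjustment, which the patent does not detail:

```python
def train_until_convergence(compute_total_loss, update_parameters,
                            threshold=1e-3, patience=10000, max_iters=10**6):
    """Iterate parameter updates until a convergence condition holds:
    either the total loss value falls below the set threshold, or it has
    not decreased for `patience` consecutive iterations."""
    best = float("inf")
    stale = 0  # iterations since the total loss value last decreased
    for _ in range(max_iters):
        loss = compute_total_loss()
        if loss < threshold:          # condition 1: below the set threshold
            return loss
        if loss < best:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:     # condition 2: no decrease for N iterations
                return best
        update_parameters(loss)       # adjust initial parameters from the loss
    return best
```

In the patent's scheme this loop runs per sample face image, moving on to the next image once the current one converges.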
In this embodiment, three feature recognition and discrimination networks (corresponding to the predicted global feature label, the predicted local feature label, and the predicted supervised feature label) are set in the preset recognition model. The generated predicted local feature label overcomes the low accuracy of training with the predicted global feature label alone and better attends to specific feature information (such as eye feature information in hyperthyroidism). Furthermore, a predicted supervised feature label is introduced to supervise the predicted local feature label, improving its accuracy and thereby the efficiency and accuracy of model training, so that the accuracy of subsequent disease feature recognition on a face image by the trained disease feature recognition model is significantly improved.
In an embodiment, as shown in Fig. 3, step S20, i.e., inputting the sample face image into a preset recognition model containing initial parameters and performing disease feature recognition on it through the preset recognition model to obtain the predicted global feature label, the predicted local feature label, and the predicted supervised feature label corresponding to the sample face image, includes:
S201: performing convolution processing on the sample face image through a global convolutional network of the preset recognition model to obtain an intermediate convolution feature and the predicted global feature label;
It is understood that the global convolutional network is a convolutional neural network and may adopt a structure such as ResNet-50. The intermediate convolution feature is the feature output by a layer before the last convolutional layer of the global convolutional network; assuming the global convolutional network in this embodiment has five convolutional layers, the feature output by the fourth convolutional layer is the intermediate convolution feature. The predicted global feature label is generated from the features output by the last layer of the global convolutional network. Further, different channels of the intermediate convolution feature contain feature information from different positions in the sample face image.
S202: inputting the intermediate convolution features into a local convolution network in the preset identification model to obtain local convolution features corresponding to the intermediate convolution features and the predicted local feature labels;
it is understood that the local convolutional network in this embodiment includes one layer of convolutional layer with a convolutional kernel of 1 × 1, which is different from the last layer of convolutional layer in the global convolutional network. If the global convolutional network comprises five convolutional layers, accessing a fourth convolutional layer in the global convolutional network into a local convolutional network, so as to realize feature recognition of intermediate convolutional features output by the fourth convolutional layer, obtain local convolutional features, and further generate a predicted local feature label according to the local convolutional features.
S203: and inputting the local convolution characteristics into a segmented pooling network in the preset identification model to obtain the prediction supervision characteristic label.
It can be understood that the segmented pooling network in this embodiment comprises a segmented pooling layer, which performs class supervision on the local convolution features. For example, since the predicted local feature label identified by the local convolutional network may be inaccurate, the segmented pooling network acts as a supervision branch, generating the predicted supervised feature label and performing feature verification on the predicted local feature label.
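The patent does not detail the segmented pooling layer; one plausible reading is to split the feature map into horizontal strips and average-pool each strip so that every segment can be supervised separately. A NumPy sketch under that assumption:

```python
import numpy as np

def segmented_pool(local_feature, n_segments):
    """Split a (C, H, W) feature map into n_segments horizontal strips and
    average-pool each strip, giving one C-dimensional vector per segment.

    The strip-wise layout is an assumption; the patent only states that
    the layer performs class supervision on the local convolution features.
    """
    strips = np.array_split(local_feature, n_segments, axis=1)  # split along H
    return np.stack([s.mean(axis=(1, 2)) for s in strips])      # (n_segments, C)

feature = np.arange(2 * 6 * 3, dtype=float).reshape(2, 6, 3)
pooled = segmented_pool(feature, 3)
```

Each pooled segment vector could then feed its own classifier, whose output forms the predicted supervised feature label used to verify the local branch.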
In an embodiment, in step S201, that is, performing convolution processing on the sample face image through the global convolution network of the preset identification model to obtain an intermediate convolution feature and the global feature tag, includes:
carrying out convolution processing on the sample face image through the global convolution network to obtain the intermediate convolution characteristics output by an intermediate convolution layer of the global convolution network;
it should be understood that, as described above, the intermediate convolution feature is the feature output by a layer before the last convolutional layer of the global convolutional network. If the global convolutional network in this embodiment has five convolutional layers, the feature output by the fourth convolutional layer is the intermediate convolution feature, so the fourth convolutional layer here is the intermediate convolutional layer.
Inputting the intermediate convolution characteristics to an output convolution layer of the global convolution network to obtain global convolution characteristics output by the output convolution layer;
for example, as pointed out above, assuming the global convolutional network in this embodiment has five convolutional layers, the feature output by the fourth convolutional layer is the intermediate convolution feature, so the fourth convolutional layer is the intermediate convolutional layer and the fifth convolutional layer is the output convolutional layer; that is, the output convolutional layer is the last convolutional layer of the global convolutional network. The output convolutional layer adopts a bottleneck structure.
And inputting the global convolution characteristics to a global full-connection layer of the global convolution network to obtain the global characteristic label.
It is understood that the global fully-connected layer in this embodiment includes an activation function layer and a fully-connected layer for classification. The global convolution features are input to the fully-connected layer in the global convolutional network for classification, and the classification result is then input to the activation function layer to obtain the global feature label. Further, the global feature label is the probability that the sample face image contains the disease feature: if the probability is greater than or equal to a preset threshold, the sample face image is judged to contain the disease feature; if the probability is smaller than the preset threshold, the sample face image is judged not to contain the disease feature.
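The activation-plus-threshold step above can be sketched as follows. This is a minimal illustration under assumed details (a sigmoid activation and a 0.5 threshold; the patent does not specify the exact activation function, and `global_label` is a hypothetical helper name):

```python
import math

def sigmoid(x):
    """Map a raw classification score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def global_label(logit, threshold=0.5):
    """Return the disease-feature probability and the thresholded decision."""
    p = sigmoid(logit)
    return p, p >= threshold

print(global_label(2.0)[1])   # True: probability ~0.88 exceeds the threshold
print(global_label(-2.0)[1])  # False: probability ~0.12 is below the threshold
```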
In an embodiment, in step S202, that is, the inputting the intermediate convolution feature into the local convolution network in the preset identification model to obtain the local convolution feature corresponding to the intermediate convolution feature and the predicted local feature tag includes:
performing local feature extraction on the intermediate convolution features through local convolution layers in the local convolution network to obtain local convolution features;
it is to be understood that the local convolutional layer indicated in this embodiment is different from the output convolutional layer in the global convolutional network in the above step, and alternatively, the local convolutional layer may be a convolutional layer using a 1 × 1 convolutional kernel.
And inputting the local convolution characteristics to a local full-connection layer of the local convolution network to obtain the predicted local characteristic label.
The local fully-connected layer in this embodiment also includes an activation function layer and a fully-connected layer for classification.
It will be appreciated that, as described above, different channels of the intermediate convolution feature contain feature information from different positions in the sample face image, and the model must learn to extract the valid information in the intermediate convolution features. In this embodiment, the feature maps of the N channels of the intermediate convolution feature are pooled into an N-dimensional vector by one of the local fully-connected layers of the local convolutional network; disease feature classification is performed on this N-dimensional vector by another fully-connected layer; and the classification result is finally input to the activation function layer to obtain the predicted local feature label.
In an embodiment, the step S203, that is, inputting the local convolution feature into the segmented pooling network in the preset identification model to obtain the predictive supervised feature tag includes:
carrying out average pooling on the local convolution characteristics through a segmented pooling layer in the segmented pooling network to obtain at least one pooling characteristic;
to improve the accuracy of the predicted local feature label, in this embodiment average pooling is performed on the local convolution features through the segmented pooling layer in the segmented pooling network. For example, suppose it is necessary to determine whether the sample face image contains a hyperthyroidism feature; the final predicted global feature label, predicted local feature label, or predicted supervised feature label in this embodiment then corresponds to only two categories: the sample face image contains the hyperthyroidism feature, or it does not. Further, as pointed out above, the local convolution features are feature maps of N channels corresponding to an N-dimensional vector, so every k features in the N-dimensional vector are average-pooled according to the number of disease feature classes C (N = kC; here C = 2, i.e., containing or not containing the hyperthyroidism feature). That is, the N-dimensional vector is divided into C segments of k features each, and average pooling over each segment yields at least one pooled feature.
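The segmented average pooling described above (N = kC: split the N-dimensional vector into C equal segments and average each one) can be sketched in a few lines. This is an illustrative implementation, not the patent's code; the function name is hypothetical:

```python
def segmented_average_pool(vector, num_classes):
    """Split an N-dimensional vector into C equal segments (N = k*C) and
    average-pool each segment, yielding one pooled feature per class."""
    n = len(vector)
    assert n % num_classes == 0, "N must be divisible by the class count C"
    k = n // num_classes
    return [sum(vector[i * k:(i + 1) * k]) / k for i in range(num_classes)]

# N = 6 channel responses, C = 2 classes (contains / does not contain the
# disease feature), so k = 3 features are averaged per segment.
print(segmented_average_pool([1, 2, 3, 10, 20, 30], 2))  # [2.0, 20.0]
```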
And inputting each pooling feature into a supervision full-connection layer in the segmented pooling network to obtain the prediction supervision feature label.
Specifically, after the local convolution features are subjected to average pooling processing through a segmented pooling layer in the segmented pooling network to obtain at least one pooled feature, each pooled feature is input to a supervision full-link layer in the segmented pooling network to obtain the prediction supervision feature tag.
In one embodiment, as shown in fig. 4, a method for identifying disease features is provided, which includes:
s70: acquiring an image to be identified;
optionally, the image to be recognized may be a face image of the user captured by an image capturing device (such as a camera), or a face image uploaded by the user.
S80: inputting the image to be recognized into a disease feature recognition model, and performing disease feature recognition on the image to be recognized through the disease feature recognition model to obtain a global disease classification result, a local disease classification result and a supervised disease classification result corresponding to the image to be recognized; the disease feature recognition model is obtained according to the disease feature recognition model training method;
it can be understood that the above model training process produces the predicted global feature label, predicted local feature label, and predicted supervised feature label; correspondingly, after training is completed, the disease feature recognition model directly outputs the global disease classification result, local disease classification result, and supervised disease classification result for the image to be recognized.
S90: and determining a disease feature identification result corresponding to the image to be identified according to the global disease classification result, the local disease classification result and the supervised disease classification result.
It is understood that during training there are a first prediction weight corresponding to the predicted global feature label, a second prediction weight corresponding to the predicted local feature label, and a third prediction weight corresponding to the predicted supervised feature label. After model training is completed, the disease feature recognition model therefore also has a first classification weight corresponding to the global disease classification result, a second classification weight corresponding to the local disease classification result, and a third classification weight corresponding to the supervised disease classification result. That is, the first classification weight is the first prediction weight after training, the second classification weight is the second prediction weight after training, and the third classification weight is the third prediction weight after training; the weight values may be the same or different. The sum of the first classification weight and the second classification weight is 1.
Specifically, the image to be recognized is input to the disease feature recognition model, and disease feature recognition is performed on it through the model to obtain the global, local, and supervised disease classification results corresponding to the image to be recognized. The disease feature recognition result is then determined from the first classification weight and the global disease classification result, the second classification weight and the local disease classification result, and the third classification weight and the supervised disease classification result; this result indicates whether the image to be recognized contains the corresponding disease feature.
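One natural reading of this fusion step is a weighted sum of the three branch scores followed by a threshold. The sketch below illustrates that reading; the function name, example weights, and threshold are assumptions for illustration, not values stated in the patent:

```python
def fuse_results(global_score, local_score, supervised_score,
                 w1, w2, w3, threshold=0.5):
    """Combine the three branch scores with their classification weights and
    threshold the fused score into a final disease-feature decision."""
    fused = w1 * global_score + w2 * local_score + w3 * supervised_score
    return fused, fused >= threshold

# Hypothetical branch scores and classification weights.
score, present = fuse_results(0.9, 0.8, 0.7, w1=0.5, w2=0.3, w3=0.2)
print(round(score, 2), present)  # 0.83 True
```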
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In an embodiment, a training device for a disease feature recognition model is provided, and the training device for the disease feature recognition model is in one-to-one correspondence with the training method for the disease feature recognition model in the above embodiment. As shown in fig. 5, the disease feature recognition model training apparatus includes a sample face image acquisition module 10, a disease feature prediction module 20, a loss value determination module 30, a prediction weight acquisition module 40, a total loss value acquisition module 50, and a recognition model training module 60. The functional modules are explained in detail as follows:
a sample face image obtaining module 10, configured to obtain a preset sample face data set; the preset sample face data set comprises at least one sample face image; associating a target disease feature label with one of the sample face images;
a disease feature prediction module 20, configured to input the sample face image into a preset identification model containing initial parameters, so as to perform disease feature recognition on the sample face image through the preset identification model, and obtain a predicted global feature tag, a predicted local feature tag, and a predicted supervised feature tag corresponding to the sample face image;
a loss value determining module 30, configured to determine a first predicted loss value of the preset identification model according to the predicted global feature tag and the target disease feature tag; determining a second prediction loss value of the preset identification model according to the predicted local feature tag and the target disease feature tag; determining a third prediction loss value of the preset recognition model according to the prediction supervision characteristic label and the target disease characteristic label;
a prediction weight obtaining module 40, configured to obtain a first prediction weight corresponding to the predicted global feature tag, a second prediction weight corresponding to the predicted local feature tag, and a third prediction weight corresponding to the predicted supervised feature tag;
a total loss value obtaining module 50, configured to determine a total loss value of the preset identification model according to the first prediction loss value, the first prediction weight, the second prediction loss value, the second prediction weight, the third prediction loss value, and the third prediction weight;
and the recognition model training module 60 is configured to iteratively update the initial parameters in the preset recognition model when the total loss value does not reach a preset convergence condition, and record the preset recognition model after convergence as a disease feature recognition model until the total loss value reaches the convergence condition.
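The interaction between the total loss value obtaining module 50 and the recognition model training module 60 can be sketched as a weighted loss plus a convergence loop. This is an illustrative toy: the per-step loss decay below is a hypothetical stand-in for real gradient-based parameter updates, and the threshold value is assumed:

```python
def total_loss(l1, w1, l2, w2, l3, w3):
    """Weighted sum of the three branch losses (global, local, supervised)."""
    return w1 * l1 + w2 * l2 + w3 * l3

# Toy training loop: iterate until the total loss reaches a preset
# convergence condition (here, falling below 1e-3).
loss, target, steps = 1.0, 1e-3, 0
while loss > target:      # "total loss does not reach the convergence condition"
    loss *= 0.5           # hypothetical improvement per parameter update
    steps += 1
print(steps)              # 10 iterations, since 0.5**10 <= 1e-3
```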
For the specific definition of the disease feature recognition model training apparatus, reference may be made to the definition of the disease feature recognition model training method above, which is not repeated here. The modules in the disease feature recognition model training apparatus may be implemented wholly or partially in software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a disease feature recognition apparatus is provided, and the disease feature recognition apparatus corresponds to the disease feature recognition method in the above embodiments one to one. As shown in fig. 6, the disease feature recognition apparatus includes an image to be recognized acquisition module 70, a disease feature recognition module 80, and a recognition result determination module 90. The functional modules are explained in detail as follows:
an image to be recognized acquisition module 70, configured to acquire an image to be recognized;
a disease feature identification module 80, configured to input the image to be identified into a disease feature identification model, so as to perform disease feature identification on the image to be identified through the disease feature identification model, and obtain a global disease classification result, a local disease classification result, and a supervised disease classification result corresponding to the image to be identified; the disease feature recognition model is obtained according to the disease feature recognition model training method;
and the recognition result determining module 90 is configured to determine a disease feature recognition result corresponding to the image to be recognized according to the global disease classification result, the local disease classification result, and the supervised disease classification result.
For the specific definition of the disease feature recognition apparatus, reference may be made to the definition of the disease feature recognition method above, which is not repeated here. The modules in the disease feature recognition apparatus may be implemented wholly or partially in software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data used by the disease feature recognition model training method or the disease feature recognition method in the above embodiments. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a disease feature recognition model training method, or the computer program is executed by a processor to implement a disease feature recognition method.
In one embodiment, a computer device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the disease feature recognition model training method in the above embodiments when executing the computer program, or implements the disease feature recognition method in the above embodiments when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the disease feature recognition model training method in the above-described embodiments, or which when executed by a processor implements the disease feature recognition method in the above-described embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A disease feature recognition model training method is characterized by comprising the following steps:
acquiring a preset sample face data set; the preset sample face data set comprises at least one sample face image; associating a target disease feature label with one of the sample face images;
inputting the sample face image into a preset identification model containing initial parameters, and carrying out disease feature identification on the sample face image through the preset identification model to obtain a predicted global feature tag, a predicted local feature tag and a predicted supervision feature tag corresponding to the sample face image;
determining a first prediction loss value of the preset identification model according to the prediction global feature label and the target disease feature label; determining a second prediction loss value of the preset identification model according to the predicted local feature tag and the target disease feature tag; determining a third prediction loss value of the preset recognition model according to the prediction supervision characteristic label and the target disease characteristic label;
acquiring a first prediction weight corresponding to the prediction global feature tag, a second prediction weight corresponding to the prediction local feature tag and a third prediction weight corresponding to the prediction supervision feature tag;
determining a total loss value of the preset identification model according to the first prediction loss value, the first prediction weight, the second prediction loss value, the second prediction weight, the third prediction loss value and the third prediction weight;
and when the total loss value does not reach a preset convergence condition, iteratively updating the initial parameters in the preset identification model until the total loss value reaches the convergence condition, and recording the preset identification model after convergence as a disease feature identification model.
2. The method for training the disease feature recognition model according to claim 1, wherein the inputting the sample face image into a preset recognition model containing initial parameters to perform disease feature recognition on the sample face image through the preset recognition model, so as to obtain a predicted global feature tag, a predicted local feature tag and a predicted supervised feature tag corresponding to the sample face image, comprises:
carrying out convolution processing on the sample face image through a global convolution network of the preset identification model to obtain an intermediate convolution characteristic and the global characteristic label;
inputting the intermediate convolution features into a local convolution network in the preset identification model to obtain local convolution features corresponding to the intermediate convolution features and the predicted local feature labels;
and inputting the local convolution characteristics into a segmented pooling network in the preset identification model to obtain the prediction supervision characteristic label.
3. The method for training a disease feature recognition model according to claim 2, wherein the convolving the sample face image by the global convolutional network of the preset recognition model to obtain an intermediate convolutional feature and the global feature tag comprises:
carrying out convolution processing on the sample face image through the global convolution network to obtain the intermediate convolution characteristics output by an intermediate convolution layer of the global convolution network;
inputting the intermediate convolution characteristics to an output convolution layer of the global convolution network to obtain global convolution characteristics output by the output convolution layer;
and inputting the global convolution characteristics to a global full-connection layer of the global convolution network to obtain the global characteristic label.
4. The method for training the disease feature recognition model according to claim 2, wherein the inputting the intermediate convolution features into the local convolution network in the preset recognition model to obtain the local convolution features corresponding to the intermediate convolution features and the predicted local feature labels comprises:
performing local feature extraction on the intermediate convolution features through local convolution layers in the local convolution network to obtain local convolution features;
and inputting the local convolution characteristics to a local full-connection layer of the local convolution network to obtain the predicted local characteristic label.
5. The method for training the disease feature recognition model according to claim 2, wherein the inputting the local convolution features into the segmented pooling network in the preset recognition model to obtain the predictive supervised feature tag comprises:
carrying out average pooling on the local convolution characteristics through a segmented pooling layer in the segmented pooling network to obtain at least one pooling characteristic;
and inputting each pooling feature into a supervision full-connection layer in the segmented pooling network to obtain the prediction supervision feature label.
6. A method for identifying characteristics of a disease, comprising:
acquiring an image to be identified;
inputting the image to be recognized into a disease feature recognition model, and performing disease feature recognition on the image to be recognized through the disease feature recognition model to obtain a global disease classification result, a local disease classification result and a supervised disease classification result corresponding to the image to be recognized; the disease feature recognition model is obtained according to the disease feature recognition model training method of any one of claims 1 to 5;
and determining a disease feature identification result corresponding to the image to be identified according to the global disease classification result, the local disease classification result and the supervised disease classification result.
7. A disease feature recognition model training device, comprising:
the system comprises a sample face image acquisition module, a face image acquisition module and a face image acquisition module, wherein the sample face image acquisition module is used for acquiring a preset sample face data set; the preset sample face data set comprises at least one sample face image; associating a target disease feature label with one of the sample face images;
the disease feature prediction module is used for inputting the sample face image into a preset recognition model containing initial parameters so as to perform disease feature recognition on the sample face image through the preset recognition model to obtain a predicted global feature tag, a predicted local feature tag and a predicted supervision feature tag corresponding to the sample face image;
a loss value determination module, configured to determine a first predicted loss value of the preset identification model according to the predicted global feature tag and the target disease feature tag; determining a second prediction loss value of the preset identification model according to the predicted local feature tag and the target disease feature tag; determining a third prediction loss value of the preset recognition model according to the prediction supervision characteristic label and the target disease characteristic label;
the prediction weight acquisition module is used for acquiring a first prediction weight corresponding to the prediction global feature tag, a second prediction weight corresponding to the prediction local feature tag and a third prediction weight corresponding to the prediction supervision feature tag;
a total loss value obtaining module, configured to determine a total loss value of the preset identification model according to the first prediction loss value, the first prediction weight, the second prediction loss value, the second prediction weight, the third prediction loss value, and the third prediction weight;
and the recognition model training module is used for iteratively updating the initial parameters in the preset recognition model when the total loss value does not reach a preset convergence condition, and recording the preset recognition model after convergence as a disease feature recognition model until the total loss value reaches the convergence condition.
8. A disease feature identification device, comprising:
the image to be recognized acquisition module is used for acquiring an image to be recognized;
the disease feature identification module is used for inputting the image to be identified into a disease feature identification model so as to perform disease feature identification on the image to be identified through the disease feature identification model, and obtain a global disease classification result, a local disease classification result and a supervision disease classification result corresponding to the image to be identified; the disease feature recognition model is obtained according to the disease feature recognition model training method of any one of claims 1 to 5;
and the recognition result determining module is used for determining a disease feature recognition result corresponding to the image to be recognized according to the global disease classification result, the local disease classification result and the supervised disease classification result.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the disease feature recognition model training method according to any one of claims 1 to 5 when executing the computer program, or the processor implements the disease feature recognition method according to claim 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements a method for training a disease feature recognition model according to any one of claims 1 to 5, and which, when being executed by a processor, implements a method for recognizing a disease feature according to claim 6.
CN202111003735.6A 2021-08-30 2021-08-30 Disease feature recognition model training, disease feature recognition method, device and equipment Active CN113705685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111003735.6A CN113705685B (en) 2021-08-30 2021-08-30 Disease feature recognition model training, disease feature recognition method, device and equipment

Publications (2)

Publication Number Publication Date
CN113705685A true CN113705685A (en) 2021-11-26
CN113705685B CN113705685B (en) 2023-08-01

Family

ID=78656727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111003735.6A Active CN113705685B (en) 2021-08-30 2021-08-30 Disease feature recognition model training, disease feature recognition method, device and equipment

Country Status (1)

Country Link
CN (1) CN113705685B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349147A (en) * 2019-07-11 2019-10-18 Tencent Healthcare (Shenzhen) Co., Ltd. Model training method, fundus macular region lesion recognition method, device and equipment
WO2021012526A1 (en) * 2019-07-22 2021-01-28 Ping An Technology (Shenzhen) Co., Ltd. Face recognition model training method, face recognition method and apparatus, device, and storage medium
CN111368672A (en) * 2020-02-26 2020-07-03 Suzhou Chaoyun Life Intelligence Industry Research Institute Co., Ltd. Construction method and device for genetic disease facial recognition model
CN111582342A (en) * 2020-04-29 2020-08-25 Tencent Technology (Shenzhen) Co., Ltd. Image identification method, device, equipment and readable storage medium
CN111598867A (en) * 2020-05-14 2020-08-28 Institute of Science and Technology, National Health Commission Method, apparatus, and computer-readable storage medium for detecting specific facial syndrome
WO2021120752A1 (en) * 2020-07-28 2021-06-24 Ping An Technology (Shenzhen) Co., Ltd. Region-based self-adaptive model training method and device, image detection method and device, and apparatus and medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114360007A (en) * 2021-12-22 2022-04-15 Zhejiang Dahua Technology Co., Ltd. Face recognition model training method, face recognition device, face recognition equipment and medium
CN114360007B (en) * 2021-12-22 2023-02-07 Zhejiang Dahua Technology Co., Ltd. Face recognition model training method, face recognition device, face recognition equipment and medium
CN115878808A (en) * 2023-03-03 2023-03-31 Youmi Technology Co., Ltd. Training method and device for hierarchical label classification model
CN116703837A (en) * 2023-05-24 2023-09-05 Peking University Third Hospital (Peking University Third Clinical Medical College) MRI image-based rotator cuff injury intelligent identification method and device
CN116703837B (en) * 2023-05-24 2024-02-06 Peking University Third Hospital (Peking University Third Clinical Medical College) MRI image-based rotator cuff injury intelligent identification method and device

Also Published As

Publication number Publication date
CN113705685B (en) 2023-08-01

Similar Documents

Publication Publication Date Title
CN109241903B (en) Sample data cleaning method, device, computer equipment and storage medium
CN110599451B (en) Medical image focus detection and positioning method, device, equipment and storage medium
CN110136103B (en) Medical image interpretation method, device, computer equipment and storage medium
CN111667011A (en) Damage detection model training method, damage detection model training device, damage detection method, damage detection device, damage detection equipment and damage detection medium
CN109063742B (en) Butterfly identification network construction method and device, computer equipment and storage medium
CN113705685A (en) Disease feature recognition model training method, disease feature recognition device and disease feature recognition equipment
CN111950329A (en) Target detection and model training method and device, computer equipment and storage medium
CN111931865B (en) Training method and device of image classification model, computer equipment and storage medium
CN109086711B (en) Face feature analysis method and device, computer equipment and storage medium
CN113239874B (en) Behavior gesture detection method, device, equipment and medium based on video image
CN109472213B (en) Palm print recognition method and device, computer equipment and storage medium
CN111832581B (en) Lung feature recognition method and device, computer equipment and storage medium
CN110197107B (en) Micro-expression recognition method, micro-expression recognition device, computer equipment and storage medium
CN110807491A (en) License plate image definition model training method, definition detection method and device
CN111368672A (en) Construction method and device for genetic disease facial recognition model
CN110046577B (en) Pedestrian attribute prediction method, device, computer equipment and storage medium
CN112016318A (en) Triage information recommendation method, device, equipment and medium based on interpretation model
CN114359787A (en) Target attribute identification method and device, computer equipment and storage medium
CN116863522A (en) Acne grading method, device, equipment and medium
CN112132278A (en) Model compression method and device, computer equipment and storage medium
CN111523479A (en) Biological feature recognition method and device for animal, computer equipment and storage medium
CN113192175A (en) Model training method and device, computer equipment and readable storage medium
CN110163151B (en) Training method and device of face model, computer equipment and storage medium
CN113283368A (en) Model training method, face attribute analysis method, device and medium
CN111985340A (en) Face recognition method and device based on neural network model and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant