CN117854709A - Diabetes six-typing method and system based on depth contrast clustering

Info

Publication number
CN117854709A
Authority
CN
China
Prior art keywords: similarity, value, sample data, minimum, diabetes
Prior art date
Legal status
Granted
Application number
CN202410044406.3A
Other languages
Chinese (zh)
Other versions
CN117854709B (en)
Inventor
贾统
王伟好
郭立新
肖佩
潘琦
李影
马燕华
Current Assignee
Beijing Hospital
Original Assignee
Beijing Hospital
Priority date
Filing date
Publication date
Application filed by Beijing Hospital
Priority to CN202410044406.3A
Publication of CN117854709A
Application granted
Publication of CN117854709B
Status: Active
Anticipated expiration


Classifications

    • G16H50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H50/30: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for calculating health indices; for individual health risk assessment
    • G16H50/70: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for mining of medical data, e.g. analysing previous cases of other patients
    • G06F18/22: Pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06F18/23: Pattern recognition; analysing; clustering techniques
    • G06F18/241: Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N3/045: Neural networks; architecture; combinations of networks
    • G06N3/0895: Neural networks; learning methods; weakly supervised learning, e.g. semi-supervised or self-supervised learning


Abstract

The invention provides a diabetes six-typing method and system based on depth contrast clustering. The six-typing method comprises the following steps: constructing a sample data set of senile diabetes from medical record data; performing self-supervised training of a deep learning network model, which adopts a binary clustering network structure based on a twin network, by obtaining the maximum and minimum pairwise similarity values among the sample data, so as to obtain a self-supervised network model for six-type classification of diabetes; and inputting patient information acquired in real time into the trained six-typing network model, which determines the diabetes type of the current patient. The system comprises modules corresponding to the method steps.

Description

Diabetes six-typing method and system based on depth contrast clustering
Technical Field
The invention provides a diabetes six-typing method and system based on depth contrast clustering, and belongs to the technical field of diabetes typing model construction.
Background
Diabetes is one of the most common chronic metabolic diseases, and its incidence among the elderly rises year by year; attention therefore cannot be limited to blood glucose control but must also cover the prevention of complications and their early detection and treatment. Owing to its strong data-processing capability, machine learning has broad application prospects in precision-medicine tasks for diabetes. Compared with traditional methods, machine learning can analyse more data features and reduces the interference of human factors, thereby improving the accuracy and stability of typing. However, the six-typing clustering models for diabetes in the prior art suffer from a high error accumulation rate, which results in low accuracy.
Disclosure of Invention
The invention provides a depth contrast clustering-based diabetes six-typing method and system, which are used for solving the problem of low accuracy caused by the high error accumulation rate of diabetes six-typing model clustering in the prior art. The adopted technical scheme is as follows:
a depth contrast clustering-based diabetes hexatyping method, the diabetes hexatyping method comprising:
constructing a sample data set of senile diabetes by using the medical record data;
performing self-supervised training of the deep learning network model, which adopts a binary clustering network structure based on a twin network, by obtaining the maximum and minimum pairwise similarity values among the sample data, so as to obtain a network model with a self-supervision function for six-typing of diabetes;
Inputting patient information acquired in real time into the trained network model for the six types of diabetes mellitus, and determining the type of diabetes mellitus of the current patient by utilizing the trained network model for the six types of diabetes mellitus.
Further, constructing a sample dataset of senile diabetes using medical record data, comprising:
the senile diabetes information is called from the medical record data;
according to the feature selection requirement, invoking senile diabetes information with the feature selection requirement from the senile diabetes information as target data;
preprocessing the target data to obtain preprocessed target data;
integrating the preprocessed target data into a sample data set.
Further, the deep learning network model comprises a neural network unit and a twin network unit corresponding to the neural network unit; the network structures of the neural network unit and the twin network unit are the same, the weight sharing is carried out between the neural network unit and the twin network unit, and meanwhile, the neural network unit and the twin network unit both comprise a neural network body and a clustering constraint layer.
Further, the deep learning network model is self-supervised and trained in a mode of obtaining a maximum similarity value and a minimum similarity value between every two data of sample data through the deep learning network model based on the two-class clustering network structure of the twin network, and the network model with a self-supervision function and used for six-class diabetes is obtained, and the method comprises the following steps:
Obtaining a maximum similarity value and a minimum similarity value between every two data of the sample data in the sample data set through a twin network unit of the deep learning network model; the deep learning network model adopts a binary clustering network structure based on a twin network;
training the neural network unit corresponding to the twin network unit by utilizing the maximum similarity value and the minimum similarity value to obtain a trained neural network unit; the trained neural network unit is a network model for six types of diabetes.
Further, obtaining, by the twin network unit of the deep learning network model, a maximum similarity value and a minimum similarity value between every two data of the sample data in the sample data set, including:
step A1, inputting a sample data set to a twin network unit;
a2, the twin network unit calculates and obtains the similarity between every two sample data of each sample data of the sample data set, and obtains a similarity value set which corresponds to the sample data set and contains a plurality of similarity values;
a3, extracting a similarity maximum value and a similarity minimum value from the similarity value set;
Step A4, marking the similarity maximum value and the similarity minimum value with similarity labels;
step A5, inputting a sample data set corresponding to the maximum similarity value and the minimum similarity value with the similarity labels to a neural network unit;
step A6, after the sample data set corresponding to the similarity maximum value and the similarity minimum value with the similarity labels is input to a neural network unit, selecting the similarity maximum value and the similarity minimum value again from the rest similarity values of the similarity value set, and obtaining the similarity maximum value and the similarity minimum value in the similarity value set containing the rest similarity values;
a7, marking a similarity label on the maximum similarity value and the minimum similarity value in the similarity value set containing the rest similarity values;
step A8, inputting a maximum similarity value and a minimum similarity value in the similarity value set containing the rest similarity values with the similarity labels into a corresponding sample data set to a neural network unit;
and A9, repeating the steps A6 to A8 until all similarity values in the similarity value set are labeled with similarity labels, and inputting a sample data set corresponding to the maximum similarity value and the minimum similarity value into a neural network.
Further, training the neural network unit corresponding to the twin network unit by using the maximum similarity value and the minimum similarity value to obtain a trained neural network unit, including:
step B1, the neural network unit receives a group of sample data sets corresponding to the maximum value and the minimum value of the similarity with the similarity labels;
step B2, training the neural network once by using the sample data set corresponding to the maximum similarity value and the minimum similarity value with the similarity label;
step B3, after the neural network training is completed once, monitoring whether a next group of sample data sets corresponding to the maximum value and the minimum value of the similarity with the similarity labels are received in real time;
step B4, when a next group of sample data sets corresponding to the maximum value and the minimum value of the similarity with the similarity labels are received, performing neural network training once by using the sample data sets corresponding to the maximum value and the minimum value of the similarity with the similarity labels;
and B5, repeating the steps B1 to B4 until the training of the neural network is completed, and obtaining the trained neural network unit.
Further, the decoder portion of the deep autoencoder (DAE) in the network model for performing diabetes hexatyping adds a data reconstruction loss, and the objective function corresponding to the data reconstruction loss is as follows:
Wherein one term measures the similarity of the feature vectors of the two input samples output by the twin network (computed as cosine similarity); a contrastive loss compares this similarity with the true relationship label of the two samples; the encoder and the decoder of the autoencoder appear in the data reconstruction term, whose contribution is adjusted by a hyperparameter; and a self-paced penalty term is included: by gradually decreasing this penalty, progressively more samples are selected for training until all samples are used for training.
Further, the obtaining, by the twin network unit of the deep learning network model, a maximum similarity value and a minimum similarity value between every two data of the sample data in the sample data set further includes:
when a similarity value set which corresponds to the sample data set and contains a plurality of similarity values is obtained, extracting the plurality of similarity values, and setting a first similarity threshold according to the similarity values; the first similarity threshold is obtained through the following formula:
wherein S_01 represents the first similarity threshold; S_0 represents the preset initial similarity threshold; n represents the number of similarity values contained in the similarity value set; S_i represents the i-th similarity value in the similarity value set; λ_1 represents the first coefficient; e represents a constant; and S_p represents the mean of the similarity values in the similarity value set;
setting a second similarity threshold using the plurality of similarity values and the first similarity threshold; wherein the second similarity threshold is obtained by the following formula:
wherein S_02 represents the second similarity threshold; S_0 represents the preset initial similarity threshold; n represents the number of similarity values contained in the similarity value set; S_i represents the i-th similarity value in the similarity value set; λ_1, λ_2 and λ_3 represent the first, second and third coefficients; e represents a constant; S_p represents the mean of the similarity values in the similarity value set; and X_1 and X_2 represent the first parameter and the second parameter, respectively;
setting a difference threshold by using the first similarity threshold and the second similarity threshold;
monitoring a selected similarity maximum value and a selected similarity minimum value in the rest similarity values in the similarity value set in real time;
comparing the maximum value of the similarity with the minimum value of the similarity to obtain a difference value between the maximum value of the similarity and the minimum value of the similarity;
when the difference value between the maximum similarity value and the minimum similarity value is lower than a difference threshold value, sample data adjustment is carried out on a sample data set corresponding to the maximum similarity value and the minimum similarity value, wherein the difference value is lower than the difference threshold value;
And, in the process of sample data adjustment of the sample data set corresponding to the similarity maximum value and the similarity minimum value, monitoring in real time the changes of the similarity maximum value and the similarity minimum value of the adjusted sample data set, until the difference between the adjusted similarity maximum value and similarity minimum value is no longer lower than the difference threshold and also differs from the difference values obtained previously.
Further, setting a difference threshold using the first similarity threshold and the second similarity threshold includes:
extracting the first similarity threshold;
sequentially comparing the similarity value in the similarity value set with the first similarity threshold value to obtain a first difference value set;
Extracting the second similarity threshold;
sequentially comparing the similarity value in the similarity value set with the second similarity threshold value to obtain a second difference value set;
and setting a difference threshold by using the difference value parameters in the first difference value set and the second difference value set.
Wherein the difference threshold is obtained by the following formula:
wherein S_cy represents the difference threshold; m represents the number of differences contained in each of the first difference set and the second difference set, with m = n; S^c_i01 represents the difference in the first difference set corresponding to the i-th similarity value; S^c_i02 represents the difference in the second difference set corresponding to the i-th similarity value; S_i represents the i-th similarity value in the similarity value set; S_01 represents the first similarity threshold; S_02 represents the second similarity threshold; and λ_4 represents the fourth coefficient.
A depth contrast clustering-based diabetes hexatyping system, the diabetes hexatyping system comprising:
the data set construction module is used for constructing a sample data set of senile diabetes by using medical record data;
the self-supervision training module is used for carrying out self-supervision training on the deep learning network model in a mode of acquiring a maximum similarity value and a minimum similarity value between every two data of the sample data through the deep learning network model based on the binary clustering network structure of the twin network, so as to obtain a network model with a self-supervision function and used for six-typing of diabetes;
And the typing execution module is used for inputting the patient information acquired in real time into the trained network model for six types of diabetes mellitus, and determining the diabetes mellitus type of the current patient by utilizing the trained network model for six types of diabetes mellitus.
The invention has the beneficial effects that:
the diabetes six-typing method and system based on depth contrast clustering provided by the invention perform cluster-analysis typing of senile diabetes patients (age ≥ 65) and show extremely high verification stability; meanwhile, the method works in an end-to-end manner, which avoids error accumulation, and a high-reliability labeling scheme minimises the inaccuracy of the supervision signals, thereby effectively improving the accuracy of the supervision signals and of the clustering.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic structural diagram of a binary cluster network structure based on a twin network according to the present invention;
FIG. 3 is a schematic diagram of a twin network according to the present invention;
fig. 4 is a system block diagram of the system of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
The embodiment of the invention provides a diabetes six-typing method based on depth contrast clustering, which is shown in figure 1 and comprises the following steps:
s1, constructing a sample data set of senile diabetes by using medical record data;
s2, performing self-supervision training on the deep learning network model in a mode of acquiring a maximum similarity value and a minimum similarity value between every two data of sample data through the deep learning network model of the binary classification clustering network structure based on the twin network, and obtaining a network model with a self-supervision function and used for six types of diabetes; as shown in fig. 2 and fig. 3, the deep learning network model includes a neural network unit and a twin network unit corresponding to the neural network unit; the network structures of the neural network unit and the twin network unit are the same, the weight sharing is carried out between the neural network unit and the twin network unit, and meanwhile, the neural network unit and the twin network unit both comprise a neural network body and a clustering constraint layer. In order to make the model work end-to-end, a clustering constraint layer is added, so that the network outputs a k-dimensional (k is the number of clusters) clustering indication vector, and the clustering indication feature is close to one-hot in an ideal situation, so that the class of the input sample can be obtained by obtaining the maximum response of the vector. The method realizes that the class of the patient is directly obtained according to the output of the neural network, greatly simplifies the clustering process and avoids the accumulation of errors. Wherein the nonlinear activation function and the normalized activation function (Relu & softmax) in fig. 2 and 3 are activation functions applied in the cluster constraint layer.
S3, inputting the patient information acquired in real time into the trained network model for six-typing diabetes mellitus, and determining the diabetes mellitus type of the current patient by utilizing the trained network model for six-typing diabetes mellitus.
Wherein, utilize case history data to construct the sample dataset of senile diabetes, include:
s101, calling senile diabetes information from medical record data;
s102, according to the characteristic selection requirement, invoking senile diabetes information with the characteristic selection requirement from the senile diabetes information as target data;
s103, preprocessing the target data to obtain preprocessed target data;
s104, integrating the preprocessed target data into a sample data set.
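As an illustration of S101 to S104, a minimal pandas sketch follows. The column names (age, bmi, hba1c, fasting_glucose) and the use of z-score normalisation are assumptions made only for this example and are not features prescribed by the patent.

    # Illustrative sketch of building the senile-diabetes sample data set (S101-S104).
    # Column names and preprocessing choices are assumptions, not taken from the patent.
    import pandas as pd

    def build_sample_dataset(records: pd.DataFrame, feature_cols: list[str]) -> pd.DataFrame:
        # S101: retrieve senile-diabetes information from the medical record data (age >= 65).
        senile = records[records["age"] >= 65]
        # S102: keep only the columns required by the feature-selection requirement.
        target = senile[feature_cols].copy()
        # S103: preprocessing, here dropping incomplete rows and z-score normalising.
        target = target.dropna()
        target = (target - target.mean()) / target.std()
        # S104: the preprocessed target data form the sample data set.
        return target

    # Hypothetical usage:
    # dataset = build_sample_dataset(records, ["age", "bmi", "hba1c", "fasting_glucose"])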
The working principle of the technical scheme is as follows: construction of a sample dataset for senile diabetes (S1): first, information related to senile diabetes is extracted from medical record data, which is to be used for constructing a training dataset of a model.
Self-supervision training deep learning network model (S2): a deep learning network model is used, which comprises a neural network unit and a twin network unit, which have the same network structure.
Weights are shared between the neural network element and the twin network element, which helps learn the representation of the data and the similarity measure.
The self-supervision training method is utilized, and the network model learns the maximum similarity value and the minimum similarity value between every two data in the sample data so as to cluster the data under the condition of no supervision, and particularly, the senile diabetes is classified into six different types.
This model also includes neural network ontologies and cluster constraint layers that facilitate learning and data clustering of the model.
Identification of diabetes type (S3): in practice, patient information is collected and entered into a trained model. The model determines the current patient's diabetes type by learned similarity metrics and cluster information.
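A hedged sketch of this inference step is shown below; it assumes the ClusterNet-style model sketched earlier and a patient feature vector preprocessed in the same way as the training data, both of which are illustrative assumptions.

    # Map one patient's preprocessed feature vector to one of the six diabetes types.
    import torch

    @torch.no_grad()
    def predict_diabetes_type(model, patient_features: torch.Tensor) -> int:
        """model: a trained ClusterNet-style network; patient_features: 1-D feature tensor."""
        model.eval()
        _, indicator = model(patient_features.unsqueeze(0))  # k-dimensional cluster indicator
        return int(indicator.argmax(dim=-1).item())          # index of the assigned type (0..5)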
The technical scheme has the effects that: the diabetes six-typing method and the diabetes six-typing system based on depth contrast clustering provided by the embodiment are used for carrying out cluster analysis typing on senile diabetes patients (Age is more than or equal to 65) and have extremely high verification stability; meanwhile, the method can work in an end-to-end mode, error accumulation is avoided, and the inaccuracy of the supervision signals is reduced to the greatest extent by utilizing a high-reliability labeling mode, so that the accuracy of the supervision signals and clustering is effectively improved. Meanwhile, the technical effects of the technical scheme further comprise:
Self-supervision study: by self-supervised training, the model does not require manual labeling, but rather learns how to cluster senile diabetes data by the data itself.
Six types of diabetes: the goal of this model is to divide senile diabetes into six different types, which helps to better understand and handle the differences between different cases to achieve more refined treatment and management.
Real-time patient diabetes type identification: once the model training is complete, it can be used to identify the patient's diabetes type in real time, helping healthcare professionals to better provide personalized treatments and advice to the patient.
In general, the technical scheme utilizes a deep learning and self-supervision learning method to construct a model for senile diabetes classification, and provides an automatic and fine tool for medical care.
In one embodiment of the present invention, a deep learning network model based on a binary cluster network structure of a twin network is used to perform self-supervision training on the deep learning network model in a manner of obtaining a maximum similarity value and a minimum similarity value between every two data of sample data, to obtain a network model for six types of diabetes with a self-supervision function, including:
S201, obtaining a maximum similarity value and a minimum similarity value between every two data of sample data in the sample data set through a twin network unit of a deep learning network model; the deep learning network model adopts a binary clustering network structure based on a twin network;
s202, training the neural network unit corresponding to the twin network unit by utilizing the maximum similarity value and the minimum similarity value to obtain a trained neural network unit; the trained neural network unit is a network model for six types of diabetes.
The specific training mode is as follows: training of the twin network requires the similarity labels r_ij between pairs of samples, which are not present in the data. To address this, the framework adopts a self-paced labeling method that labels pairs gradually, and the final neural network is obtained by training step by step in a self-supervised manner.
The method for obtaining the maximum similarity value and the minimum similarity value between every two data of the sample data in the sample data set through the twin network unit of the deep learning network model comprises the following steps:
step A1, inputting a sample data set to a twin network unit;
a2, the twin network unit calculates and obtains the similarity between every two sample data of each sample data of the sample data set, and obtains a similarity value set which corresponds to the sample data set and contains a plurality of similarity values;
A3, extracting a similarity maximum value and a similarity minimum value from the similarity value set;
step A4, marking the similarity maximum value and the similarity minimum value with similarity labels;
step A5, inputting a sample data set corresponding to the maximum similarity value and the minimum similarity value with the similarity labels to a neural network unit;
step A6, after the sample data set corresponding to the similarity maximum value and the similarity minimum value with the similarity labels is input to a neural network unit, selecting the similarity maximum value and the similarity minimum value again from the rest similarity values of the similarity value set, and obtaining the similarity maximum value and the similarity minimum value in the similarity value set containing the rest similarity values;
a7, marking a similarity label on the maximum similarity value and the minimum similarity value in the similarity value set containing the rest similarity values;
step A8, inputting a maximum similarity value and a minimum similarity value in the similarity value set containing the rest similarity values with the similarity labels into a corresponding sample data set to a neural network unit;
and A9, repeating the steps A6 to A8 until all similarity values in the similarity value set are labeled with similarity labels, and inputting a sample data set corresponding to the maximum similarity value and the minimum similarity value into a neural network.
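Under simplifying assumptions, the selection loop of steps A1 to A9 can be sketched as follows: the pairwise cosine similarities are computed once from the features produced by the shared encoder, and in each round the most similar and least similar unlabeled pairs are given similarity labels (1 for similar, 0 for dissimilar) and handed to the training routine. Here encoder and train_on_pair are hypothetical stand-ins for the twin network unit and the neural-network training of steps B1 to B5.

    # Illustrative sketch of the self-paced pair-labeling loop (steps A1-A9).
    import itertools
    import torch
    import torch.nn.functional as F

    def self_paced_labeling(encoder, samples: torch.Tensor, train_on_pair):
        with torch.no_grad():
            feats = encoder(samples)                             # A1: embed the sample set
        pairs = itertools.combinations(range(len(samples)), 2)
        sims = {(i, j): F.cosine_similarity(feats[i], feats[j], dim=0).item()
                for i, j in pairs}                               # A2: pairwise similarity value set
        remaining = dict(sims)
        while remaining:                                         # A9: repeat until all values are labeled
            hi = max(remaining, key=remaining.get)               # A3/A6: current similarity maximum
            lo = min(remaining, key=remaining.get)               # A3/A6: current similarity minimum
            selected = [(hi, 1.0)] if hi == lo else [(hi, 1.0), (lo, 0.0)]  # A4/A7: similarity labels
            for (i, j), label in selected:
                train_on_pair(samples[i], samples[j], label)     # A5/A8: pass the pair to the neural unit
                remaining.pop((i, j), None)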
Training the neural network unit corresponding to the twin network unit by using the maximum similarity value and the minimum similarity value to obtain a trained neural network unit, wherein the training comprises the following steps:
step B1, the neural network unit receives a group of sample data sets corresponding to the maximum value and the minimum value of the similarity with the similarity labels;
step B2, training the neural network once by using the sample data set corresponding to the maximum similarity value and the minimum similarity value with the similarity label;
step B3, after the neural network training is completed once, monitoring whether a next group of sample data sets corresponding to the maximum value and the minimum value of the similarity with the similarity labels are received in real time;
step B4, when a next group of sample data sets corresponding to the maximum value and the minimum value of the similarity with the similarity labels are received, performing neural network training once by using the sample data sets corresponding to the maximum value and the minimum value of the similarity with the similarity labels;
and B5, repeating the steps B1 to B4 until the training of the neural network is completed, and obtaining the trained neural network unit.
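One training update on a labeled pair (steps B1 to B4) might then look like the sketch below. The margin-based contrastive loss on the cosine similarity is an assumption chosen for illustration; the patent's actual loss (its contrastive term together with the reconstruction and self-paced terms of the objective function) may differ.

    # Hedged sketch of one neural-network training update on a labeled pair (steps B1-B4).
    import torch
    import torch.nn.functional as F

    def train_step(net, optimizer, x1, x2, label: float, margin: float = 0.5):
        """net: shared ClusterNet-style model; label: 1.0 for a similar pair, 0.0 for a dissimilar one."""
        net.train()
        f1, _ = net(x1.unsqueeze(0))
        f2, _ = net(x2.unsqueeze(0))
        sim = F.cosine_similarity(f1, f2, dim=-1)     # similarity of the two feature vectors
        # Pull similar pairs together; push dissimilar pairs below the margin.
        loss = label * (1.0 - sim) ** 2 + (1.0 - label) * torch.clamp(sim - margin, min=0.0) ** 2
        optimizer.zero_grad()
        loss.mean().backward()
        optimizer.step()
        return loss.item()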
Meanwhile, the decoder part of the DAE in the network model for performing diabetes hexatyping adds data reconstruction loss, and an objective function corresponding to the data reconstruction loss is as follows:
Wherein one term measures the similarity of the feature vectors of the two input samples output by the twin network (computed as cosine similarity); a contrastive loss compares this similarity with the true relationship label of the two samples; the encoder and the decoder of the autoencoder appear in the data reconstruction term, whose contribution is adjusted by a hyperparameter; and a self-paced penalty term is included: by gradually decreasing this penalty, progressively more samples are selected for training until all samples are used for training.
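For orientation, the objective can be written in the following sketched form, which is a plausible reconstruction from the symbol descriptions above rather than the patent's exact formula; the notation s_ij (twin-network cosine similarity of samples x_i and x_j), f_e and f_d (encoder and decoder), γ (reconstruction weight), v_ij (pair-selection weights) and λ (self-paced penalty weight) is introduced here only for illustration, while r_ij is the true relationship label between two samples mentioned above.

    % A plausible self-paced contrastive objective with DAE reconstruction (sketch only;
    % the exact formula in the patent may differ).
    \min_{\theta,\,v}\;
      \sum_{i,j} v_{ij}\, L_c\bigl(r_{ij},\, s_{ij}\bigr)
      \;+\; \gamma \sum_{i} \bigl\lVert x_i - f_d\bigl(f_e(x_i)\bigr) \bigr\rVert_2^{2}
      \;+\; \lambda\, \Omega(v),
    \qquad s_{ij} = \cos\bigl(f_e(x_i),\, f_e(x_j)\bigr),\quad v_{ij} \in \{0, 1\},
    % where L_c is the contrastive loss against the true relationship label r_{ij},
    % the middle term is the DAE data reconstruction loss weighted by \gamma, and
    % \Omega(v) is a self-paced regulariser whose weight is annealed so that progressively
    % more sample pairs are selected until all samples are used for training.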
The working principle of the technical scheme is as follows: unlabeled samples are input, the pairwise similarities are computed through the twin network, and each time the most similar and least similar sample pairs are labeled and used to train the current network. Training then continues by selecting the most similar and least similar pairs among the remaining samples until all samples have been selected and training is complete. Because only high-confidence sample pairs are selected to train the current network each time, this highly reliable self-supervised learning effectively alleviates the inaccuracy of the supervision signals. At the same time, degenerate solutions easily occur in self-supervised training: the feature distribution of the original data space is not considered during network optimisation, and the samples are simply mapped to whatever clusters reduce the loss rather than to the desired clusters. To further improve the reliability of the model, a deep autoencoder (DAE) is pre-trained to initialise the network parameters, ensuring high confidence in the similarity calculations at the start of training. To address the above problem, the decoder portion of the DAE is used to add a data reconstruction loss, so that the original distribution of the data is preserved to the greatest extent after feature extraction; the final objective function is the objective function in this embodiment.
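A minimal sketch of this DAE pre-training step is given below, assuming a symmetric encoder/decoder pair trained with a mean-squared reconstruction loss; the layer sizes, optimiser and number of epochs are illustrative assumptions.

    # Illustrative pre-training of a deep autoencoder (DAE) used to initialise the network,
    # so that the similarity calculations are already reliable at the start of training.
    import torch
    import torch.nn as nn

    def pretrain_dae(data: torch.Tensor, in_dim: int, hidden: int = 64, epochs: int = 50):
        encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        decoder = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, in_dim))
        params = list(encoder.parameters()) + list(decoder.parameters())
        opt = torch.optim.Adam(params, lr=1e-3)
        for _ in range(epochs):
            recon = decoder(encoder(data))
            loss = nn.functional.mse_loss(recon, data)   # data reconstruction loss
            opt.zero_grad()
            loss.backward()
            opt.step()
        return encoder, decoder   # the encoder weights initialise the clustering network's body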
The technical scheme has the effects that: the above technical solution of the embodiment adopts a self-supervision learning method, and no external tag data is needed. It uses the similarity maximum and similarity minimum to automatically learn the representation of the data and the similarity measure. Through step a, the model calculates the similarity between the data in the sample data set by using the twin network unit, and then divides the data into similar and dissimilar groups. This helps train the model to capture the inherent similarities and differences of the data. In step B, the trained neural network unit receives the sample data set corresponding to the maximum similarity value and the minimum similarity value with the similarity label, and then performs the neural network training. This allows the neural network to learn how to distinguish between similar and dissimilar data points.
The above technical solution of the present embodiment allows the model to continuously receive new pairs of similarity data, monitor in real time, and then perform iterative training. This helps to continuously improve the performance of the model and adapt to new data. The model generates the similarity label through the automatic label, so that the label is not required to be manually distributed for the data, and the labor cost is reduced.
In summary, the technical effect of the above technical solution of the present embodiment is that the self-supervised learning and twin network structure is used, so that the similarity of data can be effectively learned, and the supervised training labels can be provided for the neural network. This helps to improve the performance of the model, enabling it to better handle both similarity and difference data, and to improve continuously as new data comes in.
In one embodiment of the present invention, the obtaining, by the twin network unit of the deep learning network model, a maximum similarity value and a minimum similarity value between two pairs of sample data in the sample data set further includes:
step 1, when a similarity value set which corresponds to the sample data set and contains a plurality of similarity values is obtained, extracting the plurality of similarity values, and setting a first similarity threshold according to the similarity values; the first similarity threshold is obtained through the following formula:
wherein S_01 represents the first similarity threshold; S_0 represents the preset initial similarity threshold; n represents the number of similarity values contained in the similarity value set; S_i represents the i-th similarity value in the similarity value set; λ_1 represents the first coefficient; e represents a constant; and S_p represents the mean of the similarity values in the similarity value set;
step 2, setting a second similarity threshold by using the plurality of similarity values and the first similarity threshold; wherein the second similarity threshold is obtained by the following formula:
wherein S_02 represents the second similarity threshold; S_0 represents the preset initial similarity threshold; n represents the number of similarity values contained in the similarity value set; S_i represents the i-th similarity value in the similarity value set; λ_1, λ_2 and λ_3 represent the first, second and third coefficients; e represents a constant; S_p represents the mean of the similarity values in the similarity value set; and X_1 and X_2 represent the first parameter and the second parameter, respectively;
step 3, setting a difference threshold by using the first similarity threshold and the second similarity threshold;
step 4, monitoring the selected maximum similarity value and the selected minimum similarity value in the rest similarity values in the similarity value set in real time;
step 5, comparing the maximum value of the similarity with the minimum value of the similarity to obtain a difference value between the maximum value of the similarity and the minimum value of the similarity;
step 6, when the difference value between the maximum value of the similarity and the minimum value of the similarity is lower than a difference threshold value, carrying out sample data adjustment on a sample data set corresponding to the maximum value of the similarity and the minimum value of the similarity, wherein the difference value is lower than the difference threshold value;
and 7, in the process of sample data adjustment of the sample data set corresponding to the similarity maximum value and the similarity minimum value, monitoring in real time the changes of the similarity maximum value and the similarity minimum value of the adjusted sample data set, until the difference between the adjusted similarity maximum value and similarity minimum value is no longer lower than the difference threshold.
The working principle of the technical scheme is as follows: first similarity threshold setting (step 1): and calculating a first similarity threshold according to the similarity data in the similarity value set. This threshold is calculated by a certain formula, which includes the average value of the similarity data and some coefficients. This threshold will be used for the next step of similarity data screening.
Second similarity threshold setting (step 2): a second similarity threshold is calculated based on the first similarity threshold and the similarity data. The calculation of this threshold also includes the average value of the similarity data and some coefficients.
Difference threshold setting (step 3): based on the first two calculations, a difference threshold is set, which will be used to determine whether the similarity data needs to be adjusted.
Similarity data was monitored in real time (step 4): in real-time monitoring, selected similarity maximum and minimum values in the remaining similarity values are checked.
Difference value calculation (step 5): and calculating the difference value between the selected similarity maximum value and the selected similarity minimum value.
Similarity data adjustment (step 6): if the variance value is below the variance threshold, then the corresponding sample data set is adjusted.
Real-time monitoring (step 7): and in the adjustment process, monitoring the maximum value and the minimum value of the similarity in real time until the difference value is not lower than the difference threshold value and is different from the difference value of the historical data.
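To make the monitoring-and-adjustment logic of steps 1 to 7 concrete, a simplified sketch follows. Because the patent defines the two similarity thresholds only through their symbols, the threshold computations below are placeholder assumptions; only the control flow mirrors the steps above, and adjust_samples is a hypothetical callback that modifies the offending sample data and returns updated similarities.

    # Simplified sketch of the threshold-based monitoring and adjustment loop (steps 1-7).
    # The threshold formulas are placeholders; the patent defines them via S_0, S_i, S_p,
    # coefficients and constants whose exact combination is not reproduced here.
    import statistics

    def monitor_and_adjust(similarities: dict, adjust_samples, s0: float = 0.5):
        values = list(similarities.values())
        s_p = statistics.mean(values)                    # mean of the similarity value set
        s_01 = s0 + 0.1 * s_p                            # placeholder first similarity threshold
        s_02 = s_01 + 0.1 * abs(s_p - s0)                # placeholder second similarity threshold
        diff_threshold = abs(s_02 - s_01)                # placeholder difference threshold (step 3)
        seen_diffs = set()
        hi = max(similarities, key=similarities.get)     # step 4: selected similarity maximum
        lo = min(similarities, key=similarities.get)     # step 4: selected similarity minimum
        diff = similarities[hi] - similarities[lo]       # step 5: difference value
        while diff < diff_threshold or diff in seen_diffs:           # step 6: difference too small
            seen_diffs.add(diff)
            similarities = adjust_samples(hi, lo, similarities)      # adjust the sample data
            hi = max(similarities, key=similarities.get)             # step 7: monitor the change
            lo = min(similarities, key=similarities.get)
            diff = similarities[hi] - similarities[lo]
        return similarities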
The technical scheme has the effects that: the above-mentioned technical solution of the present embodiment allows for automatic monitoring and adjustment of the similarity data to ensure that the difference between the maximum and minimum values of the similarity is within a certain range. By adjusting different thresholds (a first similarity threshold, a second similarity threshold and a difference threshold), the screening and adjustment process of the data can be adjusted according to the need, and the flexibility is improved. By means of real-time monitoring, the data can be adjusted according to dynamic changes of the data, and the data is not only subjected to one-time static processing. The above technical solution of the present embodiment can adaptively adjust the similarity data to adapt to different data distribution and variation. This helps to ensure performance stability of the model.
Meanwhile, the method increases the difference between the similarity maximum value and the similarity minimum value in real time, which effectively increases the differences between sample data, improves model training efficiency, and prevents the increased probability of abnormal values during model training that small differences would otherwise cause. At the same time, the first similarity threshold and the second similarity threshold obtained through the formulas ensure, to the greatest extent, that the difference threshold is set reasonably: a difference threshold that is too small would fail to enlarge the difference between the similarity maximum value and the similarity minimum value in the sample data in time, raising the probability of abnormal values during model training, while sample data whose differences are too large would leave the similarity between samples too small for the neural network to be trained effectively, so that subsequent data with low similarity could not be recognised effectively.
In one embodiment of the present invention, setting a difference threshold using the first similarity threshold and the second similarity threshold includes:
step 301, extracting the first similarity threshold;
step 302, sequentially comparing the similarity value in the similarity value set with the first similarity threshold value to obtain a first difference value set;
step 303, extracting the second similarity threshold;
step 304, sequentially comparing the similarity value in the similarity value set with the second similarity threshold value to obtain a second difference value set;
step 305, setting a difference threshold by using the difference value parameters in the first difference value set and the second difference value set.
Wherein the difference threshold is obtained by the following formula:
wherein S_cy represents the difference threshold; m represents the number of differences contained in each of the first difference set and the second difference set, with m = n; S^c_i01 represents the difference in the first difference set corresponding to the i-th similarity value; S^c_i02 represents the difference in the second difference set corresponding to the i-th similarity value; S_i represents the i-th similarity value in the similarity value set; S_01 represents the first similarity threshold; S_02 represents the second similarity threshold; and λ_4 represents the fourth coefficient.
The working principle of the technical scheme is as follows: extracting a first similarity threshold (step 301): first, a value is extracted from the previously calculated first similarity threshold.
Comparing the similarity value to a first similarity threshold (step 302): each similarity value in the set of similarity values is compared in turn with a first similarity threshold to obtain a first set of differences. This difference represents the difference between each similarity value and the first similarity threshold.
Extracting a second similarity threshold (step 303): next, a value is extracted from the second similarity threshold calculated previously.
Comparing the similarity value to a second similarity threshold (step 304): each similarity value in the set of similarity values is compared in turn with a second similarity threshold to obtain a second set of differences. This difference represents the difference between each similarity value and the second similarity threshold.
Setting the difference threshold using the difference value parameters (step 305): finally, a difference threshold is set according to the difference value parameters in the first difference set and the second difference set. This difference threshold will be used to decide whether the difference between the similarity maximum value and the similarity minimum value is small enough to trigger an adjustment of the data.
The technical scheme has the effects that: the above technical solution of the present embodiment compares the similarity values with two preset similarity thresholds and then uses the resulting difference parameters to set the difference threshold. This process determines the difference threshold automatically from the similarity data and the preset threshold parameters, so the setting of the difference threshold is adaptive and can be adjusted according to the distribution and characteristics of the similarity values without manual setting. The difference threshold allows fine control of the difference between the similarity maximum value and the similarity minimum value, ensuring that only sufficiently small differences trigger data adjustment, thereby improving the stability and performance of the model.
Meanwhile, the method increases the difference between the similarity maximum value and the similarity minimum value in real time, which effectively increases the differences between sample data, improves model training efficiency, and prevents the increased probability of abnormal values during model training that small differences would otherwise cause. At the same time, the first similarity threshold and the second similarity threshold obtained through the formulas ensure, to the greatest extent, that the difference threshold is set reasonably: a difference threshold that is too small would fail to enlarge the difference between the similarity maximum value and the similarity minimum value in the sample data in time, raising the probability of abnormal values during model training, while sample data whose differences are too large would leave the similarity between samples too small for the neural network to be trained effectively, so that subsequent data with low similarity could not be recognised effectively.
The embodiment of the invention provides a diabetes six-typing system based on depth contrast clustering, as shown in fig. 4, the system comprising:
the data set construction module is used for constructing a sample data set of senile diabetes by using medical record data;
the self-supervision training module is used for carrying out self-supervision training on the deep learning network model in a mode of acquiring a maximum similarity value and a minimum similarity value between every two data of the sample data through the deep learning network model based on the binary clustering network structure of the twin network, so as to obtain a network model with a self-supervision function and used for six-typing of diabetes;
and the typing execution module is used for inputting the patient information acquired in real time into the trained network model for six types of diabetes mellitus, and determining the diabetes mellitus type of the current patient by utilizing the trained network model for six types of diabetes mellitus.
The working principle of the technical scheme is as follows: first, information related to senile diabetes is extracted from medical record data, which is to be used for constructing a training dataset of a model.
Then, self-supervising training deep learning network model: in particular, a deep learning network model is used, which includes a neural network element and a twin network element, which have the same network structure.
Weights are shared between the neural network element and the twin network element, which helps learn the representation of the data and the similarity measure.
The self-supervision training method is utilized, and the network model learns the maximum similarity value and the minimum similarity value between every two data in the sample data so as to cluster the data under the condition of no supervision, and particularly, the senile diabetes is classified into six different types.
This model also includes neural network ontologies and cluster constraint layers that facilitate learning and data clustering of the model.
Finally, in practice, patient information is collected and entered into a trained model. The model determines the current patient's diabetes type by learned similarity metrics and cluster information.
The technical scheme has the effects that: the diabetes six-typing method and the diabetes six-typing system based on depth contrast clustering provided by the embodiment are used for carrying out cluster analysis typing on senile diabetes patients (Age is more than or equal to 65) and have extremely high verification stability; meanwhile, the method can work in an end-to-end mode, error accumulation is avoided, and the inaccuracy of the supervision signals is reduced to the greatest extent by utilizing a high-reliability labeling mode, so that the accuracy of the supervision signals and clustering is effectively improved. Meanwhile, the technical effects of the technical scheme further comprise:
Self-supervision study: by self-supervised training, the model does not require manual labeling, but rather learns how to cluster senile diabetes data by the data itself.
Six types of diabetes: the goal of this model is to divide senile diabetes into six different types, which helps to better understand and handle the differences between different cases to achieve more refined treatment and management.
Real-time patient diabetes type identification: once the model training is complete, it can be used to identify the patient's diabetes type in real time, helping healthcare professionals to better provide personalized treatments and advice to the patient.
In general, the technical scheme utilizes a deep learning and self-supervision learning method to construct a model for senile diabetes classification, and provides an automatic and fine tool for medical care.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. The diabetes hexatyping method based on depth contrast clustering is characterized by comprising the following steps of:
constructing a sample data set of senile diabetes by using the medical record data;
the method comprises the steps of performing self-supervision training on a deep learning network model by obtaining a maximum similarity value and a minimum similarity value between every two data of sample data through the deep learning network model of a binary cluster network structure based on a twin network, and obtaining a network model with a self-supervision function and used for six-typing diabetes;
inputting patient information acquired in real time into the trained network model for the six types of diabetes mellitus, and determining the type of diabetes mellitus of the current patient by utilizing the trained network model for the six types of diabetes mellitus.
2. The diabetes hexatyping method of claim 1, wherein constructing a sample dataset of senile diabetes using medical record data comprises:
the senile diabetes information is called from the medical record data;
according to the feature selection requirement, invoking senile diabetes information with the feature selection requirement from the senile diabetes information as target data;
preprocessing the target data to obtain preprocessed target data;
Integrating the preprocessed target data into a sample data set.
3. The diabetes hexatyping method of claim 1, wherein the deep learning network model comprises a neural network element and a twin network element corresponding to the neural network element; the network structures of the neural network unit and the twin network unit are the same, the weight sharing is carried out between the neural network unit and the twin network unit, and meanwhile, the neural network unit and the twin network unit both comprise a neural network body and a clustering constraint layer.
4. The diabetes hexatyping method according to claim 1, wherein the self-supervision training is performed on the deep learning network model in a manner of acquiring a similarity maximum value and a similarity minimum value between every two data of the sample data by the deep learning network model based on the two-class clustering network structure of the twin network, to obtain the network model for diabetes hexatyping with the self-supervision function, comprising:
obtaining a maximum similarity value and a minimum similarity value between every two data of the sample data in the sample data set through a twin network unit of the deep learning network model; the deep learning network model adopts a binary clustering network structure based on a twin network;
Training the neural network unit corresponding to the twin network unit by utilizing the maximum similarity value and the minimum similarity value to obtain a trained neural network unit; the trained neural network unit is a network model for six types of diabetes.
5. The diabetes hexatyping method of claim 4, wherein obtaining, through the twin network unit of the deep learning network model, a maximum similarity value and a minimum similarity value between every two pieces of sample data in the sample data set comprises:
step A1, inputting the sample data set to the twin network unit;
step A2, calculating, by the twin network unit, the similarity between every two pieces of sample data in the sample data set, to obtain a similarity value set that corresponds to the sample data set and contains a plurality of similarity values;
step A3, extracting a maximum similarity value and a minimum similarity value from the similarity value set;
step A4, marking the maximum similarity value and the minimum similarity value with similarity labels;
step A5, inputting the sample data corresponding to the labelled maximum and minimum similarity values to the neural network unit;
step A6, after the sample data corresponding to the labelled maximum and minimum similarity values have been input to the neural network unit, selecting a maximum similarity value and a minimum similarity value again from the remaining similarity values in the similarity value set;
step A7, marking the newly selected maximum and minimum similarity values with similarity labels;
step A8, inputting the sample data corresponding to the newly selected, labelled maximum and minimum similarity values to the neural network unit;
step A9, repeating steps A6 to A8 until all similarity values in the similarity value set have been labelled and the corresponding sample data have been input to the neural network unit.
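
For illustration only, the following Python sketch mirrors steps A1 to A9 of claim 5: all pairwise similarities are computed once, and the current maximum and minimum are repeatedly taken out, given similarity labels, and handed over for training. Cosine similarity, the label convention (1 for the most similar pair, 0 for the least similar pair), and the dictionary-based queue are assumptions of the example.

import itertools
import torch
import torch.nn.functional as F

def pairwise_similarities(embeddings: torch.Tensor) -> dict:
    """Step A2: similarity between every two samples, keyed by the index pair."""
    sims = {}
    for i, j in itertools.combinations(range(embeddings.size(0)), 2):
        sims[(i, j)] = F.cosine_similarity(embeddings[i], embeddings[j], dim=0).item()
    return sims

def labelled_extreme_pairs(sims: dict):
    """Steps A3-A9: repeatedly yield the current maximum and minimum pair with a similarity label."""
    remaining = dict(sims)
    while remaining:
        max_pair = max(remaining, key=remaining.get)
        yield max_pair, remaining.pop(max_pair), 1   # label 1: treated as a similar pair
        if not remaining:
            break
        min_pair = min(remaining, key=remaining.get)
        yield min_pair, remaining.pop(min_pair), 0   # label 0: treated as a dissimilar pair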
6. The diabetes hexatyping method of claim 5, wherein training the neural network unit corresponding to the twin network unit using the maximum and minimum similarity values to obtain a trained neural network unit comprises:
step B1, receiving, by the neural network unit, a group of sample data corresponding to labelled maximum and minimum similarity values;
step B2, performing one round of training of the neural network using the sample data corresponding to the labelled maximum and minimum similarity values;
step B3, after the round of training is completed, monitoring in real time whether the next group of sample data corresponding to labelled maximum and minimum similarity values has been received;
step B4, when the next group of sample data corresponding to labelled maximum and minimum similarity values is received, performing another round of training of the neural network using that sample data;
step B5, repeating steps B1 to B4 until the training of the neural network is completed, to obtain the trained neural network unit.
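
A minimal sketch, for illustration only, of one training round in steps B1 to B5 of claim 6, reusing the TwinPair module and the labelled pairs from the earlier sketches. The contrastive-style loss on the pair of cluster assignments is an assumption; the patent only states that each received labelled pair triggers one round of training of the weight-shared network.

import torch
import torch.nn.functional as F

def train_on_pair(model, optimizer, x1: torch.Tensor, x2: torch.Tensor, label: int) -> float:
    """One round of training (steps B2/B4) on a single labelled maximum/minimum pair."""
    optimizer.zero_grad()
    p1, p2 = model(x1.unsqueeze(0), x2.unsqueeze(0))        # TwinPair from the claim 3 sketch
    agreement = (p1 * p2).sum()                              # close to 1 when both land in one cluster
    target = torch.tensor(float(label))                      # 1 = similar pair, 0 = dissimilar pair
    loss = F.binary_cross_entropy(agreement.clamp(1e-6, 1 - 1e-6), target)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example wiring (names refer to the earlier illustrative sketches, not to the patent):
# model = TwinPair(ClusterConstraintNet(in_dim=features.shape[1]))
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# for (i, j), sim, label in labelled_extreme_pairs(pairwise_similarities(features)):
#     train_on_pair(model, optimizer, features[i], features[j], label)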
7. The diabetes hexatyping method of claim 1, wherein a data reconstruction loss is added at the decoder portion of the DAE in the network model for diabetes hexatyping.
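
Claim 7 only states that a data reconstruction loss is added at the decoder portion of the DAE. The sketch below shows one conventional way such a term can be added; reading DAE as a denoising auto-encoder, the noise level, the layer sizes, and the mean-squared-error choice are all assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingAutoEncoder(nn.Module):
    def __init__(self, in_dim: int = 32, code_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, x: torch.Tensor):
        noisy = x + 0.1 * torch.randn_like(x)   # corrupt the input, as in a denoising auto-encoder
        code = self.encoder(noisy)
        return code, self.decoder(code)

def total_loss(cluster_loss: torch.Tensor, x: torch.Tensor, x_hat: torch.Tensor,
               weight: float = 1.0) -> torch.Tensor:
    # Decoder-side reconstruction term added to the clustering objective.
    return cluster_loss + weight * F.mse_loss(x_hat, x)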
8. The diabetes hexatyping method according to claim 4, wherein obtaining, through the twin network unit of the deep learning network model, a maximum similarity value and a minimum similarity value between every two pieces of sample data in the sample data set further comprises:
when the similarity value set that corresponds to the sample data set and contains a plurality of similarity values is obtained, extracting the plurality of similarity values and setting a first similarity threshold according to them;
setting a second similarity threshold using the plurality of similarity values and the first similarity threshold;
setting a difference threshold using the first similarity threshold and the second similarity threshold;
monitoring in real time the maximum similarity value and the minimum similarity value selected from the remaining similarity values in the similarity value set;
comparing the maximum similarity value with the minimum similarity value to obtain a difference value between the maximum similarity value and the minimum similarity value;
when the difference value between the maximum similarity value and the minimum similarity value is lower than the difference threshold, adjusting the sample data corresponding to that maximum similarity value and minimum similarity value;
and, during the sample data adjustment, monitoring in real time the change of the maximum and minimum similarity values corresponding to the adjusted sample data, until the difference value between them is no longer lower than the difference threshold and differs from the difference values obtained in history.
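
One possible, purely illustrative reading of claim 8: when the gap between the selected maximum and minimum similarity values falls below the difference threshold, the corresponding sample data are adjusted and re-measured until the gap is no longer below the threshold and differs from gaps seen before. The adjust_samples callback and the bounded retry loop are assumptions; the patent does not fix the form of the adjustment.

def enforce_similarity_gap(sim_max: float, sim_min: float, diff_threshold: float,
                           history: set, adjust_samples, max_rounds: int = 10):
    """Adjust samples until the max-min gap clears the threshold and is new to the history."""
    gap = sim_max - sim_min
    rounds = 0
    while (gap < diff_threshold or round(gap, 6) in history) and rounds < max_rounds:
        sim_max, sim_min = adjust_samples()   # recompute the extremes on the adjusted samples
        gap = sim_max - sim_min
        rounds += 1
    history.add(round(gap, 6))
    return sim_max, sim_min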
9. The diabetes hexatyping method of claim 8, wherein setting a difference threshold using the first similarity threshold and the second similarity threshold comprises:
extracting the first similarity threshold;
comparing each similarity value in the similarity value set with the first similarity threshold in turn to obtain a first difference value set;
extracting the second similarity threshold;
comparing each similarity value in the similarity value set with the second similarity threshold in turn to obtain a second difference value set;
and setting the difference threshold using the difference value parameters in the first difference value set and the second difference value set.
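
Claim 9 derives the difference threshold from two sets of differences, taken against the first and the second similarity thresholds. How the two sets are combined is not specified; averaging the absolute differences, as below, is an assumption used only to make the step concrete.

from statistics import mean

def set_difference_threshold(similarities, first_threshold: float, second_threshold: float) -> float:
    first_diffs = [abs(s - first_threshold) for s in similarities]    # first difference value set
    second_diffs = [abs(s - second_threshold) for s in similarities]  # second difference value set
    return mean(first_diffs + second_diffs)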
10. A depth contrast clustering-based diabetes hexatyping system, characterized in that the diabetes hexatyping system comprises:
the data set construction module is used for constructing a sample data set of senile diabetes using medical record data;
the self-supervised training module is used for performing self-supervised training on a deep learning network model, which adopts a binary clustering network structure based on a twin network, by obtaining a maximum similarity value and a minimum similarity value between every two pieces of the sample data through the model, to obtain a self-supervised network model for diabetes hexatyping;
and the typing execution module is used for inputting patient information acquired in real time into the trained network model for diabetes hexatyping and determining the diabetes type of the current patient using the trained model.
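
A structural sketch, for illustration only, of how the three modules in claim 10 could be wired together; the constructor arguments and the run interface are assumptions, not elements of the claim.

class DiabetesHexatypingSystem:
    def __init__(self, dataset_builder, self_supervised_trainer, typing_executor):
        self.dataset_builder = dataset_builder                   # data set construction module
        self.self_supervised_trainer = self_supervised_trainer   # self-supervised training module
        self.typing_executor = typing_executor                   # typing execution module

    def run(self, medical_records, patient_info):
        dataset = self.dataset_builder(medical_records)
        model = self.self_supervised_trainer(dataset)
        return self.typing_executor(model, patient_info)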
CN202410044406.3A 2024-01-12 2024-01-12 Diabetes six-typing method and system based on depth contrast clustering Active CN117854709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410044406.3A CN117854709B (en) 2024-01-12 2024-01-12 Diabetes six-typing method and system based on depth contrast clustering

Publications (2)

Publication Number Publication Date
CN117854709A (en) 2024-04-09
CN117854709B CN117854709B (en) 2024-06-18

Family

ID=90528603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410044406.3A Active CN117854709B (en) 2024-01-12 2024-01-12 Diabetes six-typing method and system based on depth contrast clustering

Country Status (1)

Country Link
CN (1) CN117854709B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784130A (en) * 2021-01-27 2021-05-11 杭州网易云音乐科技有限公司 Twin network model training and measuring method, device, medium and equipment
CN114492768A (en) * 2022-04-06 2022-05-13 南京众智维信息科技有限公司 Twin capsule network intrusion detection method based on small sample learning
CN115982597A (en) * 2023-02-15 2023-04-18 阿维塔科技(重庆)有限公司 Semantic similarity model training method and device and semantic matching method and device
CN116502165A (en) * 2023-04-27 2023-07-28 青岛明思为科技有限公司 Finite sample mechanical fault diagnosis method for self-supervision learning
CN116738297A (en) * 2023-08-15 2023-09-12 北京快舒尔医疗技术有限公司 Diabetes typing method and system based on depth self-coding

Also Published As

Publication number Publication date
CN117854709B (en) 2024-06-18

Similar Documents

Publication Publication Date Title
US20200211706A1 (en) Intelligent traditional chinese medicine diagnosis method, system and traditional chinese medicine system
CN113486578B (en) Method for predicting residual life of equipment in industrial process
Alkım et al. A fast and adaptive automated disease diagnosis method with an innovative neural network model
CN111105860A (en) Intelligent prediction, analysis and optimization system for accurate motion big data for chronic disease rehabilitation
CN113723007B (en) Equipment residual life prediction method based on DRSN and sparrow search optimization
US20060129034A1 (en) Medical decision support systems utilizing gene expression and clinical information and method for use
CN115644823B (en) Dynamic prediction and individualized intervention system for rehabilitation effect
CN110890146A (en) Bedside intelligent interaction system for intelligent ward
CN116933046B (en) Deep learning-based multi-mode health management scheme generation method and system
Barhate et al. Analysis of classifiers for prediction of type ii diabetes mellitus
CN115482932A (en) Multivariate blood glucose prediction algorithm based on transfer learning and glycosylated hemoglobin
CN116821809A (en) Vital sign data acquisition system based on artificial intelligence
CN114512239A (en) Cerebral apoplexy risk prediction method and system based on transfer learning
WO2022237162A1 (en) Blood glucose prediction method and application thereof, and blood glucose prediction system
CN115512422A (en) Convolutional neural network facial emotion recognition method and system based on attention mechanism
CN112450944B (en) Label correlation guide feature fusion electrocardiogram multi-classification prediction system and method
US20240013920A1 (en) Medical event prediction using a personalized dual-channel combiner network
CN117854709B (en) Diabetes six-typing method and system based on depth contrast clustering
CN113035348A (en) Diabetes diagnosis method based on GRU feature fusion
CN116313080A (en) Glucose concentration prediction method and device based on transfer learning
Settouti et al. Interpretable classifier of diabetes disease
CN115171896A (en) System and method for predicting long-term death risk of critically ill patient
CN114504298A (en) Physiological feature distinguishing method and system based on multi-source health perception data fusion
CN113470808A (en) Method for artificial intelligence to predict delirium
US20240206821A1 (en) Apparatus and a method for predicting a physiological indicator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant