CN117912640B - Domain increment learning-based depressive disorder detection model training method and electronic equipment - Google Patents

Domain increment learning-based depressive disorder detection model training method and electronic equipment

Info

Publication number
CN117912640B
CN117912640B (application CN202410316692.4A)
Authority
CN
China
Prior art keywords
domain
depressive disorder
sample
class
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410316692.4A
Other languages
Chinese (zh)
Other versions
CN117912640A (en)
Inventor
郭艳蓉
陈涛
郝世杰
洪日昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202410316692.4A priority Critical patent/CN117912640B/en
Publication of CN117912640A publication Critical patent/CN117912640A/en
Application granted granted Critical
Publication of CN117912640B publication Critical patent/CN117912640B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a training method of a depressive disorder detection model based on domain incremental learning, and an electronic device, belonging to the technical field of data processing. The method comprises: obtaining a training data set; inputting the current domain and the previous domain and performing feature extraction and prediction; dynamically learning class-customized thresholds using the JS divergence and the class information in the training data set; performing sample selection for each class in the training data set with the corresponding class-customized threshold; and performing intra-domain alignment on the sample selection results to distinguish sample similarity, effectively reducing the gap between the domains. Such alignment encourages the features extracted from the various domains to become more compact, thereby significantly improving adaptability to new data, alleviating catastrophic forgetting, and improving depressive disorder detection accuracy and efficiency.

Description

Domain increment learning-based depressive disorder detection model training method and electronic equipment
Technical Field
The invention relates to the technical field of data processing, in particular to a training method of a depressive disorder detection model based on domain incremental learning and electronic equipment.
Background
Major Depressive Disorder (MDD) is a common mental health disorder affecting the quality of life of millions of people. Accurate, early diagnosis of depression is critical for timely intervention and effective treatment. In recent years, the application of deep learning technology in the medical field has advanced significantly, providing new possibilities for the diagnosis and monitoring of depression. While current deep neural networks exhibit promising performance in MDD detection tasks, these models require access to all available data simultaneously. In the real world, different institutions often collect clinical data at different times, so that access to all data during the initial training phase becomes unavailable. Thus, current work turns to Domain Incremental Learning (DIL) to process continuous data streams, which aims to adapt the model to new data without affecting its performance on historical tasks. While this may seem trivial to humans, well-trained models often suffer from "catastrophic forgetting": they forget previously learned knowledge when they acquire new information. Existing mainstream DIL methods assume that old data can be easily accessed during model training, either preserving a small portion of the historical data or generating replay data from its statistical attributes. In practice, however, the accessibility of old data may be limited for reasons including privacy issues (e.g. biometric data sets) or unexpected data loss (e.g. data corruption), so that typically only well-trained models are accessible. This makes it impractical to use the raw training data for reference or fine-tuning.
Domain Incremental Learning (DIL) is a machine learning method for handling tasks that are learned continuously over an ever-changing data domain. In depressive disorder detection, domain incremental learning becomes particularly important because depressive disorder data typically arrives as continuous data streams, which may come from different points in time, sources, or environments; a method is therefore needed to accommodate such constantly changing data. When a model learns a new data domain, it often forgets previously learned information, because conventional machine learning models fully adapt themselves to new data upon receiving it, at the cost of knowledge of the old data. To alleviate catastrophic forgetting, the method first learns a threshold for each class based on the difference in probability distributions, and based on this selection determines which samples are similar to the previous domain and which are dissimilar. The similar samples are used to preserve historical knowledge, and the dissimilar samples are then pulled closer to them, thereby mitigating the gap between the domains. To address these challenges, domain incremental learning methods have become critical in the detection of depressive disorders. The present invention allows the model to retain previously learned information while learning new data, thereby alleviating the catastrophic forgetting problem. Furthermore, innovative techniques such as dynamically adjusted thresholds and intra-domain alignment help better handle class imbalance and protect privacy.
To bridge the gap between the actual demand and the domain incremental learning framework, data-free domain incremental learning (DF-DIL) may be utilized. Mainstream DF-DIL methods typically use generative models to synthesize samples in order to accommodate new tasks while maintaining performance on previous tasks. However, in practical settings typically only a well-trained previous model is available, so no historical information can be exploited in practical applications; adaptation and incremental learning without relying on historical information therefore become critical. Another concern is class imbalance, which often occurs in MDD data. In fact, synthetic data inherits the unbalanced nature of the historical data: it may have a class distribution similar to the previous data, with some classes having a limited number of samples and others a rich number. This cyclical effect further exacerbates the class imbalance problem, making these methods less effective in MDD detection tasks.
Therefore, how to provide a training method for a depressive disorder detection model based on domain incremental learning and an electronic device are the problems to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the present invention provides a training method of a depressive disorder detection model based on domain incremental learning and an electronic device for solving the above-mentioned problems in the prior art.
In order to achieve the above object, the present invention provides the following technical solutions:
In one aspect, the invention provides a training method of a depressive disorder detection model based on domain incremental learning, comprising the following steps:
acquiring a training data set;
training the acquired training data set to obtain a feature extraction network model;
obtaining label prediction results on different classes of samples in the training data set through the feature extraction network model;
comparing the difference between the prediction results by using the JS divergence;
dynamically learning class-customized thresholds by utilizing the JS divergence and the class information in the training data set;
performing sample selection for each class in the training data set using the corresponding class-customized threshold;
carrying out intra-domain alignment on the sample selection results to distinguish sample similarity; and
obtaining a domain alignment loss function according to the sample similarity, and performing domain incremental learning through the domain alignment loss function, thereby obtaining a trained depressive disorder detection model.
Optionally, the training data set includes depressive user information and non-depressive user information collected from a social platform.
Optionally, training the acquired training data set to obtain the feature extraction network model is:

f_n, g_n = argmin CE(g_n(f_n(X_n)), Y_n)

where f_n and g_n are the feature extraction network and the classification model trained through the n-th domain D_n; Z_n = f_n(X_n) represents the extracted features; Ŷ_n = g_n(Z_n) represents the label predictions; and CE(·) represents the standard cross-entropy loss.
Optionally, obtaining label prediction results on samples of different classes in the training data set through the feature extraction network model includes:

decomposing the real labels Y_n into the major depressive disorder class and the healthy class, expressed as:

Y_n = {Y_n^m, Y_n^h}

where Y_n^m denotes the major depressive disorder class and Y_n^h denotes the healthy class; and

extracting features from the training data set X_n using the previously trained feature extraction network, expressed as:

Z_n^(n-1) = f_(n-1)(X_n), Ŷ_n^(n-1) = g_(n-1)(Z_n^(n-1))

where Z_n^(n-1) represents the embedding obtained by applying the feature extractor f_(n-1) to D_n, and Ŷ_n^(n-1) is the label prediction of the classifier g_(n-1) on the different classes of samples.
Optionally, comparing the difference between the prediction results using the JS divergence includes:

d = JS(Ŷ_n ∥ Ŷ_n^(n-1)) = ½ D_KL(Ŷ_n ∥ M) + ½ D_KL(Ŷ_n^(n-1) ∥ M), with M = ½ (Ŷ_n + Ŷ_n^(n-1))

where D_KL(·) represents the KL divergence, and d_m and d_h represent the prediction differences on the major depressive disorder samples and the healthy samples, respectively.
Optionally, dynamically learning the class-customized thresholds using the JS divergence and the class information in the training data set includes:

τ_m = μ(d_m) + α_m σ(d_m), τ_h = μ(d_h) + α_h σ(d_h)

where μ(·) and σ(·) represent the mean and standard deviation, and α_m and α_h represent the hyper-parameters that adjust the class-customized thresholds.
Optionally, performing sample selection for each class in the training data set using the corresponding class-customized threshold includes:

S_m = {x_i^m ∈ X_n^m : d(x_i^m) ≤ τ_m}, U_m = {x_i^m ∈ X_n^m : d(x_i^m) > τ_m},
S_h = {x_i^h ∈ X_n^h : d(x_i^h) ≤ τ_h}, U_h = {x_i^h ∈ X_n^h : d(x_i^h) > τ_h}

where S_m represents the domain-similar major depressive disorder sample set, U_m represents the domain-dissimilar major depressive disorder sample set, S_h represents the domain-similar healthy sample set, U_h represents the domain-dissimilar healthy sample set, X_n^m represents the samples predicted as depressive disorder in the n-th domain, X_n^h represents the samples predicted as healthy subjects in the n-th domain, x_i^m represents the i-th depressive disorder sample therein, and x_i^h represents the i-th healthy subject sample.
Optionally, performing intra-domain alignment on the sample selection results to distinguish sample similarity includes:

X_sim = S_m ∪ S_h, X_dis = U_m ∪ U_h

where X_sim denotes the samples similar to the previous domain, and X_dis denotes the samples dissimilar to the previous domain.
Optionally, obtaining the domain alignment loss function according to the sample similarity is:

L_align = MMD(X_sim, X_dis)

where MMD(·) represents the Maximum Mean Discrepancy loss, and L_align represents the distance between the similar samples and the dissimilar samples.
In another aspect, the present invention provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of a method for training a depressive disorder detection model based on domain delta learning when executing the computer program.
According to the above technical scheme, compared with the prior art, the invention discloses a training method of a depressive disorder detection model based on domain incremental learning, and an electronic device. The class-customized thresholds are adaptively learned by computing the prediction difference between the major depressive disorder class samples and the healthy class samples, so as to distinguish domain-similar samples from domain-dissimilar samples. In addition, the determined domain-similar samples are used to simulate the data of the previous domain to preserve prior knowledge, effectively reducing the gap between the domains. The alignment process makes the features extracted from the various domains more compact, thereby significantly enhancing adaptability to new data, alleviating the catastrophic forgetting problem, and improving depressive disorder detection accuracy and efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the overall framework of the training process of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide a training method of a depressive disorder detection model based on domain incremental learning, and an electronic device, comprising: acquiring a training data set; inputting the current domain and the previous domain and performing feature extraction and prediction; dynamically learning class-customized thresholds by utilizing the JS divergence and the class information in the training data set; performing sample selection for each class in the training data set with the corresponding class-customized threshold; and carrying out intra-domain alignment on the sample selection results to distinguish sample similarity, effectively reducing the gap between the domains. Such alignment encourages the features extracted from the various domains to become more compact, thereby significantly improving adaptability to new data, alleviating catastrophic forgetting, and improving depressive disorder detection accuracy and efficiency.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Referring to fig. 1, the embodiment of the invention discloses a training method of a depressive disorder detection model based on domain incremental learning and electronic equipment, which comprises the following steps:
acquiring a training data set;
training the acquired training data set to obtain a feature extraction network model;
obtaining label prediction results on different classes of samples in the training data set through the feature extraction network model;
comparing the difference between the prediction results by using the JS divergence;
dynamically learning class-customized thresholds by utilizing the JS divergence and the class information in the training data set;
performing sample selection for each class in the training data set using the corresponding class-customized threshold;
distinguishing sample similarity based on the sample selection results; and
obtaining a domain alignment loss function according to the sample similarity, and performing domain incremental learning through the domain alignment loss function, thereby obtaining a trained depressive disorder detection model.
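The steps above can be sketched end to end. The following toy example is purely illustrative — the weights, shapes, helper names, and the label coding (1 = MDD, 0 = healthy) are stand-ins, not the patent's actual networks — and shows how the per-sample JS divergence between the current and previous models drives the class-customized sample split:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def js(p, q, eps=1e-12):
    # Jensen-Shannon divergence per sample (rows are probability distributions)
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b), axis=1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy stand-ins for the previous model (f_{n-1}, g_{n-1}) and current model (f_n, g_n)
W_prev = rng.normal(size=(4, 2))
W_curr = rng.normal(size=(4, 2))
X = rng.normal(size=(20, 4))           # new-domain features X_n
y = rng.integers(0, 2, size=20)        # 1 = MDD, 0 = healthy (assumed coding)

d = js(softmax(X @ W_curr), softmax(X @ W_prev))   # per-sample prediction difference

sim, dis = [], []
for c, alpha in ((1, 0.5), (0, 0.5)):  # per-class threshold tau = mu + alpha * sigma
    dc, idx = d[y == c], np.flatnonzero(y == c)
    tau = dc.mean() + alpha * dc.std()
    sim += list(idx[dc <= tau])        # domain-similar samples of class c
    dis += list(idx[dc > tau])         # domain-dissimilar samples of class c

# X[sim] approximates the (inaccessible) historical distribution; an MMD loss
# between X[sim] and X[dis] would then pull the dissimilar samples closer.
```

Note that each class gets its own threshold, so a minority class is never filtered out wholesale by a majority-dominated global threshold.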
In a particular embodiment, the training data set includes depressive user information and non-depressive user information collected from the social platform.
In particular, many deep learning tasks use thresholds to distinguish whether a sample meets certain criteria. Conventional methods typically design a fixed, manually set threshold, which has proven effective in certain scenarios. Such a fixed-threshold learning paradigm may be established based on prior knowledge or domain expertise. In scenarios where the depression data distribution and characteristics are relatively stable, these fixed thresholds may serve as reliable indicators in the classification or decision process. However, in a dynamic and constantly changing data environment, the underlying data distribution may change over time, and a fixed threshold may not be sufficient to accommodate the dynamic data patterns. An adaptive threshold learning paradigm is therefore used to reduce human interference; it is adaptively optimized according to the loss function during training, and can adapt to changing data dynamics and changing task requirements. Furthermore, in the specific depressive disorder detection task, the class imbalance problem may cause minority-class samples to fail the threshold criterion and subsequently be ignored. This dilemma may lead to the exclusion of minority-class samples in subsequent steps, so that the retained data consists mainly of majority-class samples, which further exacerbates the class imbalance in the data set. To this end, the present invention provides an adaptive class-customized threshold learning strategy that individually customizes thresholds based on the unique attributes inherent to each class. This approach creates a finer, more efficient sample selection mechanism that meets the specific requirements of each class in the task, and thus mitigates the inherent bias caused by class imbalance. In this way, an effective sample identification strategy is successfully established.
Not only does such a strategy help to address the challenges presented by class imbalance, but it also ensures that the subsequent modeling process remains robust across all classes without biasing towards any particular class.
Specifically, feature extraction is first performed. To complete adaptive threshold learning, the input data X_n in the n-th domain D_n is first used for training to obtain the feature extraction network:

f_n, g_n = argmin CE(g_n(f_n(X_n)), Y_n)     (1)

where f_n and g_n are the feature extraction network and the classification model trained through the n-th domain D_n. f_n(X_n) represents the extracted features, denoted Z_n for simplicity of discussion. g_n(Z_n) represents its label prediction, abbreviated Ŷ_n below. CE(·) represents the standard cross-entropy loss. To facilitate class-customized threshold learning, Y_n is decomposed according to the real labels into the major depressive disorder class Y_n^m and the healthy class Y_n^h. As for historical information, the previous data X_(n-1) is entirely unavailable, and only the previously trained models f_(n-1) and g_(n-1) are accessible. Subsequently, the feature extraction network f_(n-1) is used to extract features from the new training data X_n, and the classifier g_(n-1) further generates its label probability distribution, as follows:
Z_n^(n-1) = f_(n-1)(X_n), Ŷ_n^(n-1) = g_(n-1)(Z_n^(n-1))     (2)

where Z_n^(n-1) represents the embedding obtained by applying the feature extractor f_(n-1) to D_n, and Ŷ_n^(n-1) is the label prediction of the classifier g_(n-1) across the different classes of samples. Next, the JS divergence is used to compare the difference between the two predictions:
d = JS(Ŷ_n ∥ Ŷ_n^(n-1)) = ½ D_KL(Ŷ_n ∥ M) + ½ D_KL(Ŷ_n^(n-1) ∥ M), with M = ½ (Ŷ_n + Ŷ_n^(n-1))     (3)

where D_KL(·) represents the KL divergence, and d_m and d_h represent the prediction differences on the major depressive disorder samples and the healthy samples, respectively.
Specifically, HC is a healthy control sample and MDD is a major depressive disorder sample.
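As a concrete illustration of the JS divergence in equation (3) — the function names and example probabilities here are illustrative, not taken from the patent — a single sample's divergence between the current and previous model outputs can be computed as:

```python
import numpy as np

def kl_div(p, q):
    # KL(p || q) for two discrete distributions with positive entries
    return float(np.sum(p * np.log(p / q)))

def js_div(p, q):
    # JS(p || q) = 0.5*KL(p || m) + 0.5*KL(q || m), with m the 50/50 mixture
    m = 0.5 * (p + q)
    return 0.5 * kl_div(p, m) + 0.5 * kl_div(q, m)

# current-model vs. previous-model prediction for one sample, as (MDD, HC)
p_curr = np.array([0.9, 0.1])
p_prev = np.array([0.6, 0.4])
d = js_div(p_curr, p_prev)   # ≈ 0.063; a small d marks the sample as domain-similar
```

The JS divergence is symmetric and bounded by ln 2 for two classes, which makes the per-class mean and standard deviation in equation (4) well behaved.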
Next, the class-customized thresholds are dynamically learned using the JS divergence and the class information; these thresholds are used to determine whether a sample is similar to the previous domain, and are defined as follows:
τ_m = μ(d_m) + α_m σ(d_m), τ_h = μ(d_h) + α_h σ(d_h)     (4)

where μ(·) and σ(·) represent the mean and standard deviation, and α_m and α_h represent the hyper-parameters that adjust the class-customized thresholds.
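A minimal sketch of the mean-plus-scaled-deviation threshold in equation (4) — the helper name and the example values are hypothetical:

```python
import numpy as np

def class_threshold(divergences, alpha):
    # tau_c = mu(d_c) + alpha_c * sigma(d_c): per-class mean plus scaled std
    d = np.asarray(divergences, dtype=float)
    return float(d.mean() + alpha * d.std())

# per-sample JS divergences observed for one class, e.g. the MDD class
d_mdd = [0.10, 0.20, 0.30]
tau_m = class_threshold(d_mdd, alpha=1.0)   # ≈ 0.2816 with these toy values
```

Because τ is recomputed from each class's own divergence statistics, the cut-off tracks the class's distribution instead of a hand-tuned global constant.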
In one particular embodiment, intra-domain alignment includes the following:
Continuous data streams are a common phenomenon in real scenarios. When the model is exposed to incoming training data, it adjusts its parameters to accommodate the new information, and tends to forget previously acquired knowledge when trained on new tasks. This phenomenon is known as catastrophic forgetting: when the model focuses on learning the current task, its ability to retain prior domain information is severely impeded. This challenge becomes particularly acute when dealing with domains featuring constantly changing data distributions and dynamically changing tasks. Solving this problem is therefore critical to developing models that can continuously learn and adapt over time while maintaining stable performance on historical tasks. Here, the model is limited to accessing only one trained model, with no other information available, which presents a significant challenge for solving the forgetting problem. To address it, an intra-domain alignment module has been developed that uses the trained model to identify similar and dissimilar sample sets in the new training data. The information in the similar samples is then exploited to construct an approximation of the feature distribution of the historical data. This approach attempts to reduce the domain difference between the available data and the inaccessible historical information, thereby mitigating the deleterious effects of forgetting. The details of the intra-domain alignment are as follows.
Sample identification selects data points by predefined criteria or attributes, revealing relevant patterns, relationships, or features according to a particular learning objective. Conventional selection methods typically employ a uniform threshold across all classes. The present invention instead learns a class-customized threshold for the task, as shown in equation (4). Thus, at this stage of the method, the model uses the corresponding customized threshold for sample selection in each class, specifically defined as follows:
S_m = {x_i^m ∈ X_n^m : d(x_i^m) ≤ τ_m}, U_m = {x_i^m ∈ X_n^m : d(x_i^m) > τ_m},
S_h = {x_i^h ∈ X_n^h : d(x_i^h) ≤ τ_h}, U_h = {x_i^h ∈ X_n^h : d(x_i^h) > τ_h}     (5)

In this way, the domain-similar MDD sample set S_m, the domain-dissimilar MDD sample set U_m, the domain-similar HC sample set S_h, and the domain-dissimilar HC sample set U_h are effectively distinguished, where X_n^m represents the samples predicted as depressive disorder in the n-th domain, X_n^h represents the samples predicted as healthy subjects in the n-th domain, x_i^m represents the i-th depressive disorder sample therein, and x_i^h represents the i-th healthy subject sample.
X_sim = S_m ∪ S_h, X_dis = U_m ∪ U_h     (6)

The symbol ∪ is the union operation; X_sim denotes the samples similar to the previous domain, and X_dis denotes the samples dissimilar to the previous domain.
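Equations (5) and (6) amount to a per-class partition followed by a union. A sketch (the function and variable names, thresholds, and label coding are assumed for illustration):

```python
def select_samples(divs, labels, tau_m, tau_h):
    """Partition sample indices into domain-similar / domain-dissimilar sets.

    labels: 1 = MDD, 0 = healthy control (assumed coding);
    divs:   per-sample JS divergences against the previous model.
    """
    S_m = [i for i, (d, y) in enumerate(zip(divs, labels)) if y == 1 and d <= tau_m]
    U_m = [i for i, (d, y) in enumerate(zip(divs, labels)) if y == 1 and d > tau_m]
    S_h = [i for i, (d, y) in enumerate(zip(divs, labels)) if y == 0 and d <= tau_h]
    U_h = [i for i, (d, y) in enumerate(zip(divs, labels)) if y == 0 and d > tau_h]
    X_sim = S_m + S_h          # union of the domain-similar sets (eq. 6)
    X_dis = U_m + U_h          # union of the domain-dissimilar sets
    return X_sim, X_dis

divs   = [0.05, 0.40, 0.10, 0.60]
labels = [1,    1,    0,    0]
X_sim, X_dis = select_samples(divs, labels, tau_m=0.2, tau_h=0.3)
# X_sim -> [0, 2], X_dis -> [1, 3]
```

Every sample lands in exactly one of the two sets, so the partition can feed the alignment loss directly.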
In the above manner, the present invention effectively distinguishes samples exhibiting similarity to the previous domain from those exhibiting dissimilarity.
Conventional domain incremental learning methods generally follow a uniform technical route: either replaying historical samples or synthesizing new samples based on statistical information. This strategy aims to preserve previously acquired knowledge and alleviate the catastrophic forgetting challenge. However, within the scope of the present task the availability of such information is limited, making it difficult to address the specific challenges at hand. The present invention therefore provides an alternative for situations where access to historical data is restricted. In particular, X_sim is used to simulate the feature distribution of the historical data; more specifically, the existing data is utilized to approximate the feature distribution of the historical samples.
In fact, X_sim is used to retain a priori knowledge, and on this basis an intra-domain alignment module is designed to encourage the X_dis samples to move closer to X_sim in feature space, thereby bridging the gap between the historical data stream and the incoming data stream. The domain alignment loss is defined as follows:
L_align = MMD(X_sim, X_dis)     (7)

where MMD(·) represents the Maximum Mean Discrepancy loss, and L_align represents the distance between the similar samples and the dissimilar samples.
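The MMD term in equation (7) can be sketched with a Gaussian kernel — the kernel choice, bandwidth, and toy data below are assumptions for illustration; the patent does not specify them:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2), computed pairwise between two sets
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

def mmd2(x_sim, x_dis, gamma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy between two sets
    return (rbf_kernel(x_sim, x_sim, gamma).mean()
            + rbf_kernel(x_dis, x_dis, gamma).mean()
            - 2.0 * rbf_kernel(x_sim, x_dis, gamma).mean())

rng = np.random.default_rng(1)
x_sim = rng.normal(0.0, 1.0, size=(30, 8))   # features of domain-similar samples
x_dis = rng.normal(3.0, 1.0, size=(30, 8))   # features of domain-dissimilar samples
loss = mmd2(x_sim, x_dis)                    # grows as the two sets drift apart
```

Minimizing this quantity over the feature extractor pulls the dissimilar samples toward the similar ones, which is exactly the role equation (7) plays in the training objective.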
Thus, the invention adopts a unique alignment mechanism that enables the model to adapt to scenarios where the historical data is inaccessible, and achieves excellent results.
In another aspect, the embodiment of the invention discloses an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the steps of a training method of a depressive disorder detection model based on domain incremental learning when executing the computer program.
For the apparatus disclosed in the examples, since it corresponds to the method disclosed in the examples, the description is relatively simple, and the relevant points are referred to in the description of the method section.
Through the above technical scheme, referring to fig. 2, the present invention first inputs the current domain and the previous domain and performs feature extraction and prediction, using the feature extraction network f_n trained on the newly arrived data and the previous model f_(n-1) to derive their respective representations; the two classifiers learned from the n-th and (n-1)-th domains then generate the corresponding predictions. Next, class-customized thresholds are adaptively learned by computing the difference between the two predictions, in order to preserve previous information and facilitate class balance. Thus, by the threshold learned within each class, samples exhibiting similarity to the (n-1)-th domain are distinguished from samples exhibiting dissimilarity. Furthermore, these determined domain-similar samples are used to simulate the feature distribution of the (n-1)-th domain data to preserve prior knowledge, effectively reducing the gap between the domains. Such alignment encourages the features extracted from the various domains to become more compact, thereby significantly improving adaptability to new data, alleviating catastrophic forgetting, and improving depressive disorder detection accuracy and efficiency. The invention also has an accessibility state; in particular, a lock symbol may be used to represent the accessibility state of data or a model. When the lock is closed, the corresponding data or model is not currently available; when the lock is open, the relevant data or model is available and can be accessed or obtained by the user.
In the present specification, each embodiment is described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and identical and similar parts between the embodiments may be referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. The training method of the depressive disorder detection model based on domain incremental learning is characterized by comprising the following steps of:
obtaining a training data set comprising depressive user information and non-depressive user information collected from a social platform;
training the acquired training data set to obtain a feature extraction network model;
obtaining label prediction results on different classes of samples in the training data set through the feature extraction network model;
comparing the difference between the prediction results by using the JS divergence;
dynamically learning class-customized thresholds by utilizing the JS divergence and the class information in the training data set;
performing sample selection for each class in the training data set using the corresponding class-customized threshold;
carrying out intra-domain alignment on the sample selection results to distinguish sample similarity; and
obtaining a domain alignment loss function according to the sample similarity, and performing domain incremental learning through the domain alignment loss function, thereby obtaining a trained depressive disorder detection model.
2. The training method of the depressive disorder detection model based on domain incremental learning according to claim 1, wherein the training of the acquired training data set to obtain the feature extraction network model is as follows:

f_n, g_n = argmin CE(g_n(f_n(X_n)), Y_n)

wherein f_n and g_n are the feature extraction network and the classification model trained through the n-th domain D_n; Z_n = f_n(X_n) represents the extracted features; Ŷ_n = g_n(Z_n) represents the label predictions; and CE(·) represents the standard cross-entropy loss.
3. The training method of the depressive disorder detection model based on domain incremental learning according to claim 1, wherein obtaining label prediction results on different classes of samples in the training data set through the feature extraction network model comprises:

decomposing the real labels Y_n into the major depressive disorder class and the healthy class, expressed as:

Y_n = {Y_n^m, Y_n^h}

wherein Y_n^m denotes the major depressive disorder class and Y_n^h denotes the healthy class; and

extracting features from the training data set X_n using the previously trained feature extraction network, expressed as:

Z_n^(n-1) = f_(n-1)(X_n), Ŷ_n^(n-1) = g_(n-1)(Z_n^(n-1))

wherein Z_n^(n-1) represents the embedding obtained by applying the feature extractor f_(n-1) to D_n, and Ŷ_n^(n-1) is the label prediction of the classifier g_(n-1) on the different classes of samples.
4. The training method of the depressive disorder detection model based on domain incremental learning according to claim 1, wherein comparing the difference between the prediction results using the JS divergence comprises:

d = JS(Ŷ_n ∥ Ŷ_n^(n-1)) = ½ D_KL(Ŷ_n ∥ M) + ½ D_KL(Ŷ_n^(n-1) ∥ M), with M = ½ (Ŷ_n + Ŷ_n^(n-1))

wherein D_KL(·) represents the KL divergence, and d_m and d_h respectively represent the prediction differences on the major depressive disorder samples and the healthy samples.
5. The training method of the depressive disorder detection model based on domain incremental learning according to claim 1, wherein the dynamically learning of the class-customized thresholds using the JS divergence and the class information in the training dataset comprises:

τ^m = μ(d^m) + α_m · σ(d^m), τ^h = μ(d^h) + α_h · σ(d^h)

wherein μ(·) and σ(·) denote the mean and the standard deviation, and α_m and α_h denote the hyperparameters that adjust the class-customized thresholds.
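The class-customized threshold described here follows the mean-plus-scaled-standard-deviation pattern τ = μ(d) + α·σ(d). A sketch, assuming d is the vector of per-sample JS divergences of one class and α the class hyperparameter:

```python
import numpy as np

def class_threshold(divergences, alpha):
    """Dynamic class-customized threshold: tau = mu(d) + alpha * sigma(d)."""
    d = np.asarray(divergences, dtype=float)
    return float(d.mean() + alpha * d.std())

# A larger alpha raises the threshold, admitting more samples as domain-similar.
print(class_threshold([0.1, 0.2, 0.3, 0.4], alpha=0.5))
```

Computing the threshold separately per class (α_m for the depressive disorder class, α_h for the health class) lets each class tolerate a different spread of prediction differences.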
6. The training method of the depressive disorder detection model based on domain incremental learning according to claim 1, wherein the sample selection using the corresponding class-customized threshold for each class in the training dataset comprises:

X^m_s = { x^m_i ∈ X^m_n : d^m_i ≤ τ^m },  X^m_d = { x^m_i ∈ X^m_n : d^m_i > τ^m }
X^h_s = { x^h_i ∈ X^h_n : d^h_i ≤ τ^h },  X^h_d = { x^h_i ∈ X^h_n : d^h_i > τ^h }

wherein X^m_s denotes the domain-similar major depressive disorder sample set, X^m_d denotes the domain-dissimilar major depressive disorder sample set, X^h_s denotes the domain-similar health-class sample set, X^h_d denotes the domain-dissimilar health-class sample set; X^m_n denotes the samples predicted as depressive disorder in the n-th domain, X^h_n denotes the samples predicted as healthy subjects in the n-th domain; x^m_i denotes the i-th depressive disorder sample therein, and x^h_i denotes the i-th healthy subject sample.
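The per-class sample selection can be sketched as a split against the class threshold: divergences at or below τ mark domain-similar samples, the rest domain-dissimilar (function and variable names are illustrative):

```python
def select_samples(samples, divergences, tau):
    """Split one class into domain-similar and domain-dissimilar sets by threshold tau."""
    similar = [x for x, d in zip(samples, divergences) if d <= tau]
    dissimilar = [x for x, d in zip(samples, divergences) if d > tau]
    return similar, dissimilar

# Samples "a" and "b" fall within the threshold; "c" does not.
sim, dis = select_samples(["a", "b", "c"], [0.1, 0.5, 0.9], tau=0.5)
print(sim, dis)  # ['a', 'b'] ['c']
```

Running this once with (X^m_n, d^m, τ^m) and once with (X^h_n, d^h, τ^h) yields the four sets named in the claim.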
7. The training method of the depressive disorder detection model based on domain incremental learning according to claim 1, wherein the performing of intra-domain alignment on the sample selection results to distinguish sample similarity comprises:

X_s = X^m_s ∪ X^h_s,  X_d = X^m_d ∪ X^h_d

wherein X_s denotes the samples similar to the previous domain, and X_d denotes the samples dissimilar to the previous domain.
8. The training method of the depressive disorder detection model based on domain incremental learning according to claim 1, wherein the domain alignment loss function obtained according to the sample similarity is:

L_align = MMD(X_s, X_d)

wherein MMD(·) denotes the maximum mean discrepancy (MMD) loss, and L_align denotes the distance between the similar samples and the dissimilar samples.
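The MMD(·) term above is the standard maximum mean discrepancy between two feature sets. A minimal RBF-kernel sketch (the kernel choice and gamma value are illustrative assumptions, not the patented configuration):

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy with RBF kernel k(a,b) = exp(-gamma*||a-b||^2)."""
    def kernel(A, B):
        # Pairwise squared Euclidean distances, then the RBF kernel matrix.
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq_dists)
    X = np.atleast_2d(np.asarray(X, dtype=float))
    Y = np.atleast_2d(np.asarray(Y, dtype=float))
    return float(kernel(X, X).mean() + kernel(Y, Y).mean() - 2.0 * kernel(X, Y).mean())

# Identical sample sets give zero discrepancy.
print(mmd_rbf([[0.0], [1.0]], [[0.0], [1.0]]))  # 0.0
```

Minimizing such a discrepancy between the similar and dissimilar feature sets pulls the extracted features of a domain closer together, which is the compactness effect described in the abstract.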
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the domain incremental learning based depressive disorder detection model training method according to any one of claims 1 to 8 when the computer program is executed.
CN202410316692.4A 2024-03-20 2024-03-20 Domain increment learning-based depressive disorder detection model training method and electronic equipment Active CN117912640B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410316692.4A CN117912640B (en) 2024-03-20 2024-03-20 Domain increment learning-based depressive disorder detection model training method and electronic equipment

Publications (2)

Publication Number Publication Date
CN117912640A CN117912640A (en) 2024-04-19
CN117912640B true CN117912640B (en) 2024-06-25

Family

ID=90689405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410316692.4A Active CN117912640B (en) 2024-03-20 2024-03-20 Domain increment learning-based depressive disorder detection model training method and electronic equipment

Country Status (1)

Country Link
CN (1) CN117912640B (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106384005A (en) * 2016-09-28 2017-02-08 湖南老码信息科技有限责任公司 Incremental neural network model-based depression prediction method and prediction system
SG10201910452WA (en) * 2019-11-08 2021-06-29 Singapore Management Univ Detection of stress or depression outcomes
WO2021178731A1 (en) * 2020-03-04 2021-09-10 Karl Denninghoff Neurological movement detection to rapidly draw user attention to search results
CN112070777B (en) * 2020-11-10 2021-10-08 中南大学湘雅医院 Method and device for organ-at-risk segmentation under multiple scenes based on incremental learning
CN115359316A (en) * 2022-08-17 2022-11-18 中国科学院计算技术研究所 Incremental learning-based image classification model training method and classification method
CN116597211A (en) * 2023-05-16 2023-08-15 合肥工业大学 Multi-target domain self-adaptive method based on contrast learning and autocorrelation incremental learning
CN117393163A (en) * 2023-11-01 2024-01-12 杭州师范大学钱江学院 Social network user depression detection method and system based on multi-mode information fusion
CN117540822A (en) * 2023-11-02 2024-02-09 西北工业大学 Federal type incremental learning method, equipment and storage medium across mobile edge network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Using digital phenotyping to capture depression symptom variability: detecting naturalistic variability in depression symptoms across one year using passively collected wearable movement and sleep data; Price, GD; Translational Psychiatry; 2023-12-09; vol. 13; full text *
Study on the association between 24-hour movement behaviors and symptoms of depression, anxiety, and comorbid depression-anxiety in adolescents; Cheng Jinqun; Excellent Master's Theses; 2021-12-25; full text *

Similar Documents

Publication Publication Date Title
Tang et al. Multimodal emotion recognition using deep neural networks
Wang et al. Graph convolutional nets for tool presence detection in surgical videos
Liu et al. A hierarchical visual model for video object summarization
Dara et al. Clustering unlabeled data with SOMs improves classification of labeled real-world data
Guan et al. Identifying mislabeled training data with the aid of unlabeled data
CN113139664B (en) Cross-modal migration learning method
US20210056362A1 (en) Negative sampling algorithm for enhanced image classification
CN111160959B (en) User click conversion prediction method and device
Iqbal et al. Learning feature fusion strategies for various image types to detect salient objects
Nunavath et al. Deep neural networks for prediction of exacerbations of patients with chronic obstructive pulmonary disease
Shehu et al. Lateralized approach for robustness against attacks in emotion categorization from images
Jia et al. Imbalanced disk failure data processing method based on CTGAN
Liang et al. A supervised figure-ground segmentation method using genetic programming
Reyes-Nava et al. Using deep learning to classify class imbalanced gene-expression microarrays datasets
CN117912640B (en) Domain increment learning-based depressive disorder detection model training method and electronic equipment
Annam et al. Emotion-Aware Music Recommendations: A Transfer Learning Approach Using Facial Expressions
Nasfi et al. A novel feature selection method using generalized inverted Dirichlet-based HMMs for image categorization
Parker et al. Nonlinear time series classification using bispectrum‐based deep convolutional neural networks
Zhuang et al. Non-exhaustive learning using gaussian mixture generative adversarial networks
Arumugam et al. Feature selection based on MBFOA for audio signal classification under consideration of Gaussian white noise
CN114595336A (en) Multi-relation semantic solution model based on Gaussian mixture model
Hsiao Signal discrimination using category-preserving bag-of-words model for condition monitoring
Gasmi Improving bert-based model for medical text classification with an optimization algorithm
Salau et al. Advancing Preauthorization Task in Healthcare: An Application of Deep Active Incremental Learning for Medical Text Classification
Dhanusha et al. Chaotic chicken swarm optimization-based deep adaptive clustering for alzheimer disease detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant