CN112826442A - Psychological state identification method and equipment based on fundus images - Google Patents

Psychological state identification method and equipment based on fundus images

Info

Publication number
CN112826442A
Authority
CN
China
Prior art keywords
image
parameters
psychological state
feature information
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011618426.5A
Other languages
Chinese (zh)
Inventor
付萌
郭立平
邱庭轩
熊健皓
赵昕
和超
张大磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eaglevision Medical Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd filed Critical Shanghai Eaglevision Medical Technology Co Ltd
Priority claimed from application CN202011618426.5A
Publication of CN112826442A

Classifications

    • A61B 3/12: Apparatus for testing the eyes; objective instruments for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B 3/14: Arrangements specially adapted for eye photography
    • A61B 5/165: Devices for evaluating the psychological state; evaluating the state of mind, e.g. depression, anxiety
    • A61B 5/7267: Classification of physiological signals or data, e.g. using neural networks, involving training the classification device
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/253: Fusion techniques of extracted features
    • G06T 7/10: Image analysis; segmentation; edge detection
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30041: Eye; retina; ophthalmic
    • G06T 2207/30101: Blood vessel; artery; vein; vascular
    • G06V 2201/03: Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • Psychiatry (AREA)
  • Evolutionary Computation (AREA)
  • Ophthalmology & Optometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Pathology (AREA)
  • Physiology (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention provides a psychological state recognition method and device based on fundus images. The method comprises: obtaining retinal vessel caliber parameters and a fundus image; extracting feature information from the fundus image using a feature extraction network; obtaining fused feature information from the feature information and the parameters; and recognizing the fused feature information using an output network to obtain a recognition result regarding the psychological state.

Description

Psychological state identification method and equipment based on fundus images
Technical Field
The invention relates to the field of image recognition, and in particular to a psychological state recognition method and device based on fundus images.
Background
Psychological health problems are generally attributed to disorders of psychological activity, behavior and nervous system function, arising from the interaction of external factors, such as family and social environment, with internal factors, such as genetic, physiological and neurobiochemical factors. Mental health problems have become an increasingly prominent social issue; common examples are depression and anxiety. People with depression and anxiety also face an increased risk of cardiovascular disease, so timely screening and guiding patients to seek medical advice is a critical step in preventing the condition from progressing.
Existing psychological state assessment schemes fall into two main categories. The first analyzes the psychological state through human physiological signals, such as heart rate or blood oxygen saturation; such signals are easily disturbed by external factors (for example, emotional instability can change them markedly), so the analysis result is inaccurate. The second evaluates the psychological state by means of a psychological assessment scale, which is easily influenced by subjective factors; to improve accuracy, a large number of questions and answer options are usually included, and the subject must spend considerable time answering them, resulting in low assessment efficiency.
Disclosure of Invention
In view of the above, the present invention provides a psychological state recognition method based on fundus images, comprising:
obtaining retinal vessel caliber parameters and a fundus image;
extracting feature information from the fundus image by using a feature extraction network;
obtaining fused feature information according to the feature information and the parameters;
and recognizing the fused feature information by using an output network to obtain a recognition result regarding the psychological state.
The invention also provides a psychological state recognition model training method, comprising:
acquiring training data, wherein the training data comprises retinal vessel caliber parameters, fundus images and psychological state labels;
and training a machine learning model with the training data, wherein the model comprises a feature extraction network, a fusion module and an output network; the feature extraction network extracts feature information from the fundus image, the fusion module obtains fused feature information according to the feature information and the parameters, and the output network recognizes the fused feature information to obtain a recognition result regarding the psychological state; the model optimizes its parameters according to the difference between the recognition result and the psychological state label.
Optionally, obtaining the retinal vessel caliber parameters specifically includes:
segmenting an optic disc image in the fundus image using a segmentation network;
cropping a partial image from the fundus image according to the position of the optic disc image;
segmenting a blood vessel image in the partial image using a segmentation network;
and determining the retinal vessel caliber parameters from the blood vessel image.
Optionally, determining the retinal vessel caliber parameters from the blood vessel image specifically includes:
determining the optic disc center position and the optic disc radius in the optic disc image;
determining an annular region around the periphery of the optic disc image in the partial image based on the disc center position and the disc radius;
and determining the retinal vessel caliber parameters from the blood vessel image within the annular region.
Optionally, obtaining the fused feature information according to the feature information and the parameters specifically includes: expanding the parameters to increase their proportion in the fused feature information; and concatenating the feature information with the expanded parameters to obtain the fused feature information.
Optionally, the output network includes a classifier, configured to classify the fused feature information and output a classification result about a psychological state.
Optionally, the classification result comprises at least one of a classification result of depression, a classification result of anxiety, and a classification result of mental health risk.
Optionally, the retinal vessel caliber parameters include a central retinal artery diameter equivalent and a central retinal vein diameter equivalent.
Accordingly, the present invention provides a psychological state recognition device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the psychological state recognition method described above.
Accordingly, the present invention provides a psychological state recognition model training device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the psychological state recognition model training method described above.
The psychological state recognition method and device provided by the invention use a neural network to extract feature information from the fundus image, fuse this image feature information with the retinal vessel caliber parameters, and then use a neural network to recognize the fused feature information, obtaining the psychological state recognition result.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a fundus image;
FIG. 2 is a flowchart of a mental state recognition method according to an embodiment of the present invention;
FIG. 3 is a process diagram of a mental state recognition method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a blood vessel caliber parameter calculation method in an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The embodiment of the invention provides a psychological state recognition model training method, which can be executed by electronic equipment such as a computer or a server and the like, and comprises the following steps:
S1A, acquiring training data, wherein the training data comprises retinal vessel caliber parameters, fundus images and psychological state labels. The fundus image is a retinal image, as shown in fig. 1, taken by a fundus camera; it contains rich retinal vascular information that can be used to assess the health of the microcirculation. Research indicates that psychological symptoms such as depression and anxiety are associated with retinal vessel caliber in adolescent and early adult populations, and that depression and anxiety symptoms remain associated with widening of the retinal arteriolar caliber after correcting for other cardiovascular risk factors. Given that depression and anxiety symptoms are related to measurable signs in the retinal microvasculature early in life, the present scheme predicts the subject's mental health risk by extracting features of the retinal image.
Considering that judging the psychological state from the fundus image alone is not sufficiently accurate, the scheme also introduces the retinal vessel caliber parameters as more direct physiological parameters. The caliber parameter can be the diameter of the retinal artery or vein, the ratio of the two, and so on. Methods for measuring retinal vessel caliber parameters fall into two categories: direct and indirect measurement. In direct measurement, the vessel boundaries in the fundus image are identified manually and the vessel width is then measured. In indirect measurement, vessels are identified and measured in the fundus image by a computer image recognition algorithm, and the retinal caliber equivalent parameters are then calculated using the Parr-Hubbard formula or the revised Parr-Hubbard-Knudtson formula.
In a particular embodiment, the central retinal artery equivalent (CRAE) and the central retinal vein equivalent (CRVE) are used, representing the diameters of the central retinal artery and vein, respectively. The specific calculation combines a pair of branch diameters as follows:

Wc_artery = 0.88 × √(w1² + w2²)

Wc_vein = 0.95 × √(w1² + w2²)

where Wc_artery is the central retinal artery diameter equivalent, Wc_vein is the central retinal vein diameter equivalent, w1 is the diameter of the thinner arterial or venous branch, and w2 is the diameter of the thicker arterial or venous branch.
The psychological state label represents the psychological state of the photographed person, such as whether they are mentally healthy, have depression, or have anxiety disorder; it can also carry finer-grained grading information, such as the level (degree) of mental health risk, depression or anxiety. The psychological state may be determined by psychological diagnosis, for example via a mental health scale. The labels described herein may be labels for a classification task or labels for regression prediction. Classification labels usually use 0 or 1 to represent the corresponding category, while labels for regression prediction are numerical values, i.e. the corresponding grade or degree is represented by a number.
And S2A, training the machine learning model with the training data. A large amount of training data is used; each piece of training data contains a person's fundus image, retinal vessel caliber parameters and psychological state label. The model is configured to output a recognition result from the fundus image and the caliber parameters, and the parameters of the model are then optimized according to the difference between the recognition result and the psychological state label.
Specifically, the model comprises a feature extraction network (a convolutional neural network) for extracting feature information from the fundus image, a fusion module for obtaining fused feature information from the feature information and the parameters, and an output network for recognizing the fused feature information to obtain a recognition result regarding the psychological state.
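As a rough illustration of this three-part composition, the pipeline can be sketched with stub components. This is a hedged sketch only: the function names, the 1024-dimensional feature size, the repetition count, the example caliber values and the random stub weights are assumptions for illustration, not details taken from the patent; in practice the feature extractor is a trained CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_extraction_network(image):
    # Stub standing in for a trained convolutional feature extractor
    # that maps a fundus image to a 1024-dimensional feature vector.
    return rng.standard_normal(1024)

def fusion_module(features, caliber_params, repeats=3):
    # Concatenate image features with the caliber parameters repeated
    # `repeats` times, as described in the fusion module above.
    return np.concatenate([features, np.tile(caliber_params, repeats)])

def output_network(fused, weights, bias):
    # Stub classifier head: one linear unit with a sigmoid output,
    # standing in for the output network.
    z = float(fused @ weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

image = np.zeros((512, 512, 3))        # toy fundus image
caliber = np.array([134.4, 210.0])     # hypothetical CRAE, CRVE in microns
fused = fusion_module(feature_extraction_network(image), caliber)
weights = rng.standard_normal(fused.shape[0]) * 0.01
risk = output_network(fused, weights, bias=0.0)
print(0.0 < risk < 1.0)  # True
```

During training, the weights of all three components would be optimized against the psychological state labels; here the stubs only show how the pieces connect.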
As for the fusion module, various fusion modes can be adopted; for example, the feature information and the parameters can be directly concatenated. By way of example, the feature information is typically a high-dimensional vector, denoted X = [x1, x2, …, x1024], i.e. a 1024-dimensional feature vector; the vessel caliber parameters are likewise treated as a vector, such as [Wc_artery, Wc_vein], a 2-dimensional feature vector. Directly concatenating these two vectors yields the fused feature information [X, Wc_artery, Wc_vein].
The dimensionality of the feature information extracted from the fundus image is usually much higher than that of the vessel caliber parameters, and direct concatenation makes the proportion of the caliber parameters in the final fused feature information too small, so the model cannot use them effectively. To solve this problem, a preferable fusion mode can be adopted: the parameters are expanded to increase their proportion in the fused feature information, and the feature information is then concatenated with the expanded parameters to obtain the fused feature information. The expansion may simply repeat the parameters several times, with the number of repetitions set according to conditions such as the dimensionality of the feature information. For example, if the parameters are repeated three times, the fused feature information is [X, Wc_artery, Wc_vein, Wc_artery, Wc_vein, Wc_artery, Wc_vein], i.e. the vessel caliber parameters are used three times. The model can thus use the caliber parameters more effectively, improving recognition accuracy.
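The repetition-based fusion can be written out concretely. This is a minimal sketch: the helper name `expand_and_fuse`, the 1024-dimensional toy vector and the example caliber values are assumptions for illustration, not from the patent.

```python
import numpy as np

def expand_and_fuse(features, caliber_params, repeats=3):
    # Repeat the low-dimensional caliber parameters `repeats` times to
    # raise their proportion in the fused vector, then concatenate them
    # after the image feature vector.
    expanded = np.tile(np.asarray(caliber_params, dtype=float), repeats)
    return np.concatenate([np.asarray(features, dtype=float), expanded])

X = np.zeros(1024)                          # toy 1024-d image features
fused = expand_and_fuse(X, [134.4, 210.0])  # hypothetical CRAE, CRVE
print(fused.shape)  # (1030,)
```

The tail of the fused vector is the pair of caliber parameters repeated three times, matching the [X, Wc_artery, Wc_vein, …] layout described above.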
The model can perform a classification task or a regression prediction task, so the final recognition result can be a classification result or a regression prediction result. When the model performs a classification task, its output network includes a classifier that classifies the fused feature information and outputs a classification result regarding the psychological state.
After training, the model can be used to recognize (predict) the psychological state. The present embodiment provides a preferred psychological state recognition method; as shown in fig. 2, the method includes the following steps:
S1B, obtaining the retinal vessel caliber parameters and the fundus image. Similar to the training process, these two input data may be obtained with reference to step S1A. Here, however, the psychological state is unknown; it is what the recognition is to determine.
As a preferred embodiment, the retinal central vessel caliber parameters are used, and these parameters are obtained from the fundus image itself. Referring to fig. 3, the scheme only needs the user's fundus image 31; the optic disc image is segmented from the fundus image 31 using a segmentation network. Specifically, the optic disc portion can be segmented with a classical segmentation network such as U-Net or the DeepLab series, yielding the disc-segmented fundus image 32, in which the optic disc image is marked by the dotted line.
The blood vessel image is then segmented from the fundus image using a segmentation network. In this embodiment, to reduce the amount of computation for vessel segmentation, the fundus image is first cropped. Specifically, a partial image is cut out of the fundus image based on the position of the optic disc image: the optic disc is approximated as a circle, its center position (x, y) and radius r are recognized, and a square image (the partial image) with side length 8r is cropped from the fundus image centered on (x, y). In other embodiments, vessel segmentation may be performed directly on the fundus image without cropping.
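The 8r-square crop around the optic disc can be sketched as follows. This is a hedged sketch: the function name is an assumption, and the border-clipping behavior is a choice of ours, since the patent does not specify how windows falling partly outside the image are handled.

```python
import numpy as np

def crop_around_disc(fundus, cx, cy, r):
    # Crop a square of side 8r centred on the optic-disc centre (cx, cy),
    # clipping the window to the image borders.
    h, w = fundus.shape[:2]
    half = 4 * r                                  # half of the 8r side
    top, bottom = max(0, cy - half), min(h, cy + half)
    left, right = max(0, cx - half), min(w, cx + half)
    return fundus[top:bottom, left:right]

img = np.zeros((1000, 1200), dtype=np.uint8)      # toy fundus image
patch = crop_around_disc(img, cx=600, cy=500, r=60)
print(patch.shape)  # (480, 480)
```

With r = 60 the crop is 480 pixels square; near the image border the returned patch is simply smaller.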
Next, a segmentation network is used to segment the blood vessel image in the cropped partial image. In this embodiment, a second segmentation network segments the vessels in the fundus image, producing a vessel segmentation result image. Since both artery and vein caliber parameters are required, the segmentation network is configured to distinguish segmented arteries from veins.
The segmentation network used in the above process should be trained before use, and the training method for the segmentation network of the optic disc and the blood vessel can adopt the prior art, which is not described herein again.
And finally, the retinal vessel caliber parameters are determined from the vessel image. Specifically, an annular region around the periphery of the optic disc image is determined in the image based on the disc center position and disc radius; see the fundus image 33 in fig. 3, where the annular region is marked. The retinal vessel caliber parameters are then determined from the vessel image within the annular region; see the vessel image 34 cropped from the annular area in fig. 3.
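The annular measurement region between radii 2r and 3r (detailed in the next paragraph) can be expressed as a Boolean mask. This is a sketch under the assumption that the vessel segmentation is a binary array of the same shape; the helper name `annulus_mask` is not from the patent.

```python
import numpy as np

def annulus_mask(shape, cx, cy, r):
    # Boolean mask of the ring between radii 2r and 3r around the
    # optic-disc centre (cx, cy); caliber measurements are restricted
    # to vessel pixels inside this ring.
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2)
    return (dist >= 2 * r) & (dist <= 3 * r)

mask = annulus_mask((400, 400), cx=200, cy=200, r=40)
# A binary vessel map of the same shape could then be restricted to the
# ring with: ring_vessels = vessel_map & mask
```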
A preferred way of measuring the vessel caliber parameters is now introduced with reference to fig. 4. In this embodiment, with the optic disc center (x, y) as the circle center, circles of radius 2r and 3r are drawn (the solid circles in fig. 4), and the ring between them is taken as the caliber measurement region. Within this ring, n circles centered on (x, y) are drawn uniformly (the dotted circles inside the solid circles in fig. 4). Each of these circles intersects the artery and vein vessels passing through the ring, and the vessel diameter at each intersection is taken as one caliber sample of that vessel; after n caliber samples of a vessel are obtained, their average is taken as the caliber value of that vessel. In this way, all required vessel caliber values in the annular region can be obtained. The retinal caliber equivalent parameters are then calculated using the revised Parr-Hubbard-Knudtson formula.
To calculate the diameter equivalents with this formula, the six thickest arterial vessels and the six thickest venous vessels are first selected in the annular region and used to calculate the artery and vein equivalents, respectively. The calculation iteratively pairs the thickest and thinnest vessel diameters and combines each pair with the formula until only one value remains. For example, suppose the six largest artery calibers, ordered from large to small, are 100, 90, 80, 70, 60 and 50 microns. First, 100 and 50 are combined using

Wc_artery = 0.88 × √(w1² + w2²)

with w1 = 100 and w2 = 50, giving 98.4 microns; likewise 90 and 60 give 95.2 microns, and 80 and 70 give 93.5 microns, leaving the three values 98.4, 95.2 and 93.5. In the next iteration, the previous results 98.4 and 93.5 are combined to give 119.4 microns; in the final iteration, 119.4 and 95.2 give 134.4 microns, which is the final central retinal artery equivalent. The calculation of the vein parameters is similar and is not repeated here.
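The iterative pairing just described can be sketched as follows. This is a sketch of the revised Parr-Hubbard-Knudtson combination; the function name, and the handling of an odd middle value (it simply carries over to the next round), are assumptions consistent with the worked example.

```python
import math

def knudtson_equivalent(calibers, k=0.88):
    # Iteratively pair the thickest remaining caliber with the thinnest,
    # combine each pair as k * sqrt(w1^2 + w2^2), and repeat until one
    # value remains. k = 0.88 is used for arteries (CRAE); 0.95 is the
    # usual coefficient for veins (CRVE).
    values = sorted(calibers, reverse=True)
    while len(values) > 1:
        combined = []
        while len(values) > 1:
            w1, w2 = values.pop(0), values.pop()   # thickest, thinnest
            combined.append(k * math.sqrt(w1 ** 2 + w2 ** 2))
        combined.extend(values)        # odd middle value carries over
        values = sorted(combined, reverse=True)
    return values[0]

# The six largest artery calibers from the worked example, in microns:
crae = knudtson_equivalent([100, 90, 80, 70, 60, 50])
print(round(crae, 1))  # 134.4
```

Running this on the example values reproduces the intermediate results 98.4, 95.2 and 93.5 microns and the final equivalent of 134.4 microns.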
In addition, it should be noted that the above method for measuring the vessel caliber parameters is only an optional scheme, not the only feasible one; existing caliber measurement methods may also be used to calculate the retinal central vessel caliber parameters.
And S2B, extracting feature information from the fundus image using a feature extraction network. In this embodiment, a convolutional neural network extracts features from the whole fundus image, yielding a high-dimensional feature vector. In other embodiments, feature extraction may be performed on only part of the fundus image, such as only the cropped partial image rather than the whole image; alternatively, in addition to extracting features from the whole fundus image, a further feature extraction network may extract features from the cropped partial image.
And S3B, obtaining the fusion characteristic information according to the characteristic information and the parameters. Specifically, reference is made to the fusion scheme in the above training method embodiment, which is not described herein again.
And S4B, recognizing the fused feature information with the output network to obtain the recognition result regarding the psychological state. This embodiment employs a classifier; depending on the labels configured during training, the classification result output here may be at least one of a depression classification result, an anxiety classification result, and a mental health risk classification result.
The psychological state identification method provided by the embodiment of the invention utilizes the neural network to extract the characteristic information of the fundus image, fuses the characteristic information of the image with the retinal vessel diameter parameter, and then utilizes the neural network to identify the fused characteristic information to obtain the psychological state identification result.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here, and obvious variations or modifications may be made without departing from the scope of the invention.
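For context on how the caliber parameters are measured: claims 3 and 4 below locate the measurement region as a ring around the optic disc. A minimal sketch of such a ring mask follows; the ring bounds (2 to 3 disc radii from the disc center) are an illustrative assumption, since the claims do not fix them:

```python
import numpy as np

def annular_mask(shape, center, disc_radius, inner_scale=2.0, outer_scale=3.0):
    """Boolean mask of an annular region around the optic disc.
    Vessel caliber would be measured only on vessel pixels inside it."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    dist = np.sqrt((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
    return (dist >= inner_scale * disc_radius) & (dist <= outer_scale * disc_radius)

# 512x512 fundus crop with the disc at the center, radius 40 px (made-up numbers)
mask = annular_mask((512, 512), center=(256, 256), disc_radius=40)
```

Applying this mask to the segmented vessel image leaves only the vessel segments in the ring, from which widths can be measured.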

Claims (10)

1. A psychological state recognition method based on a fundus image, characterized by comprising:
obtaining retinal vessel caliber parameters and fundus images;
extracting feature information from the fundus image by using a feature extraction network;
obtaining fused feature information according to the feature information and the parameters;
and identifying the fused feature information by using an output network to obtain an identification result about the psychological state.
2. A mental state recognition model training method is characterized by comprising the following steps:
acquiring training data, wherein the training data comprises retinal vessel caliber parameters, fundus images and psychological state labels;
and training a machine learning model by using the training data, wherein the model comprises a feature extraction network, a fusion module and an output network; the feature extraction network is used for extracting feature information from the fundus image, the fusion module is used for obtaining fused feature information according to the feature information and the parameters, and the output network is used for identifying the fused feature information to obtain an identification result about a psychological state; the model optimizes its weights according to the difference between the identification result and the psychological state label.
3. The method according to claim 1 or 2, wherein obtaining retinal vessel caliber parameters specifically comprises:
segmenting an optic disc image from the fundus image by using a segmentation network;
cropping a partial image from the fundus image according to the position of the optic disc image;
segmenting a blood vessel image from the partial image by using a segmentation network;
and determining the retinal vessel caliber parameter according to the vessel image.
4. The method according to claim 3, wherein determining the retinal vessel caliber parameter according to the vessel image specifically comprises:
determining an optic disc center position and an optic disc radius in the optic disc image;
determining an annular region around the periphery of the optic disc image in the partial image based on the optic disc center position and the optic disc radius;
and determining the retinal vessel caliber parameter according to the vessel image within the annular region.
5. The method according to claim 1 or 2, wherein obtaining fused feature information according to the feature information and the parameters specifically comprises:
expanding the parameters to increase their proportion in the fused feature information;
and concatenating the feature information with the expanded parameters to obtain the fused feature information.
6. The method of claim 1, wherein the output network comprises a classifier for classifying the fused feature information and outputting a classification result regarding a psychological state.
7. The method of claim 6, wherein the classification result comprises at least one of a classification result of depression, a classification result of anxiety, and a classification result of mental health risk.
8. The method according to any one of claims 1-7, wherein the retinal vessel caliber parameters comprise a central retinal artery diameter equivalent and a central retinal vein diameter equivalent.
9. A psychological state identification device, characterized by comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the psychological state identification method according to any one of claims 1 and 3-8.
10. A mental state recognition model training apparatus, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform a mental state recognition model training method according to any of claims 2-8.
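The central retinal artery and vein diameter equivalents of claim 8 (CRAE and CRVE) are conventionally computed with the revised Knudtson formulas, which repeatedly pair the widest remaining vessel with the narrowest and combine each pair as W = k * sqrt(W1^2 + W2^2). The patent does not specify a formula, so the sketch below is an assumption based on that convention, and the width values are invented:

```python
import math

def diameter_equivalent(widths, k):
    """Iteratively pair the largest remaining width with the smallest,
    combining each pair as k * sqrt(w1^2 + w2^2), until one summary
    value remains (k = 0.88 for arterioles -> CRAE, 0.95 for venules -> CRVE)."""
    w = sorted(widths)
    while len(w) > 1:
        nxt = []
        while len(w) > 1:
            a, b = w.pop(0), w.pop()  # smallest paired with largest
            nxt.append(k * math.sqrt(a * a + b * b))
        nxt.extend(w)  # odd leftover carried to the next round
        w = sorted(nxt)
    return w[0]

# Six hypothetical arteriolar widths (in micrometers), as the convention
# uses the six largest vessels in the measurement ring.
crae = diameter_equivalent([98, 110, 120, 131, 144, 150], k=0.88)
```

With k = 1 and two widths the function reduces to the plain Euclidean combination, e.g. widths 3 and 4 give 5.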
CN202011618426.5A 2020-12-31 2020-12-31 Psychological state identification method and equipment based on fundus images Pending CN112826442A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011618426.5A CN112826442A (en) 2020-12-31 2020-12-31 Psychological state identification method and equipment based on fundus images

Publications (1)

Publication Number Publication Date
CN112826442A (en)

Family

ID=75925613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011618426.5A Pending CN112826442A (en) 2020-12-31 2020-12-31 Psychological state identification method and equipment based on fundus images

Country Status (1)

Country Link
CN (1) CN112826442A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115620384A (en) * 2022-12-19 2023-01-17 北京鹰瞳科技发展股份有限公司 Model training method, fundus image prediction method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109376621A (en) * 2018-09-30 2019-02-22 北京七鑫易维信息技术有限公司 A kind of sample data generation method, device and robot
CN109829942A (en) * 2019-02-21 2019-05-31 韶关学院 A kind of automatic quantization method of eye fundus image retinal blood vessels caliber
WO2020080275A1 (en) * 2018-10-15 2020-04-23 国立大学法人大阪大学 Medicine for improving or preventing symptoms related to retina and/or photoreception and method for screening substance improving or preventing symptoms related to retina and/or photoreception
WO2020118160A1 (en) * 2018-12-07 2020-06-11 University Of Miami Systems and method for detecting cognitive impairment
CN111340789A (en) * 2020-02-29 2020-06-26 平安科技(深圳)有限公司 Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels
CN111402184A (en) * 2018-12-13 2020-07-10 福州依影健康科技有限公司 Method and system for realizing remote fundus screening and health service
CN111507932A (en) * 2019-01-31 2020-08-07 福州依影健康科技有限公司 High-specificity diabetic retinopathy characteristic detection method and storage equipment
CN111655151A (en) * 2018-01-25 2020-09-11 国立大学法人大阪大学 Pressure state detection method and pressure detection device
JP6779446B1 (en) * 2019-08-20 2020-11-04 株式会社アウレオ Eye strain tester

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LING-JUN LI: "Antenatal Mental Health and Retinal Vascular Caliber in Pregnant Women", TVST, vol. 2, no. 2, pages 1 - 10 *

Similar Documents

Publication Publication Date Title
US20200155106A1 (en) Systems and methods for estimating healthy lumen diameter and stenosis quantificaton in coronary arteries
EP3730040A1 (en) Method and apparatus for assisting in diagnosis of cardiovascular disease
Saba et al. Automatic detection of papilledema through fundus retinal images using deep learning
US20190221313A1 (en) Diagnosis assistance system and control method thereof
KR20200005406A (en) Diagnosis assistance system
Leontidis et al. A new unified framework for the early detection of the progression to diabetic retinopathy from fundus images
WO2019180742A1 (en) System and method for retinal fundus image semantic segmentation
KR20200023029A (en) Diagnosis assistance system and control method thereof
Hu et al. Automatic artery/vein classification using a vessel-constraint network for multicenter fundus images
CN110874409A (en) Disease grading prediction system, method, electronic device and readable storage medium
CN115578783B (en) Device and method for identifying eye diseases based on eye images and related products
CN111789572A (en) Determining hypertension levels from retinal vasculature images
EP4045138A1 (en) Systems and methods for monitoring the functionality of a blood vessel
CN113850812A (en) Fundus arteriovenous image segmentation method, device and equipment
CN112826442A (en) Psychological state identification method and equipment based on fundus images
US11670323B2 (en) Systems and methods for detecting impairment of an individual
US20220378300A1 (en) Systems and methods for monitoring the functionality of a blood vessel
JP2021520250A (en) Systems and methods for detecting fluid flow
US20220061920A1 (en) Systems and methods for measuring the apposition and coverage status of coronary stents
CN112259232A (en) VTE risk automatic evaluation system based on deep learning
US20200185110A1 (en) Computer-implemented method and an apparatus for use in detecting malingering by a first subject in one or more physical and/or mental function tests
EP3719806A1 (en) A computer-implemented method, an apparatus and a computer program product for assessing performance of a subject in a cognitive function test
Shyamalee et al. Automated Tool Support for Glaucoma Identification with Explainability Using Fundus Images
Stabingis et al. Automatization of eye fundus vessel width measurements
CN115497621A (en) Old person cognitive status evaluation system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination