CN115393274A - Method and device for detecting functional damage of cranial nerves, electronic equipment and storage medium


Info

Publication number
CN115393274A
Authority
CN
China
Prior art keywords: feature, image, target, features, brain
Prior art date
Legal status
Pending
Application number
CN202210858288.0A
Other languages
Chinese (zh)
Inventor
康雁
曾学强
卢佳锡
缪晓强
阿斯木
杨英健
郭英委
Current Assignee
Shenzhen Technology University
Original Assignee
Shenzhen Technology University
Priority date
Filing date
Publication date
Application filed by Shenzhen Technology University filed Critical Shenzhen Technology University
Priority to CN202210858288.0A
Publication of CN115393274A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B 5/055 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves, involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/40 Detecting, measuring or recording for evaluating the nervous system
    • A61B 5/4058 Detecting, measuring or recording for evaluating the nervous system, for evaluating the central nervous system
    • A61B 5/4064 Evaluating the brain
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/765 Arrangements using classification, using rules for classification or partitioning the feature space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/766 Arrangements using pattern recognition or machine learning, using regression, e.g. by projecting features on hyperplanes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30016 Brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Neurology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Radiology & Medical Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Databases & Information Systems (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Quality & Reliability (AREA)
  • Psychology (AREA)
  • Neurosurgery (AREA)
  • Physiology (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The disclosure relates to a method and device for detecting cranial nerve functional impairment, an electronic device, and a storage medium. The detection method comprises the following steps: acquiring a brain tissue region of a brain perfusion image; extracting dynamic image features of the brain tissue region; obtaining, based on the dynamic image features, a first feature for a first target and a second feature for a second target, wherein the first target is whether a stroke has occurred and the second target is an evaluation index of the degree of brain function impairment; and determining whether cranial nerve function is impaired based on the first and second features. Embodiments of the disclosure enable automatic detection of cranial nerve functional impairment and improve detection accuracy.

Description

Method and device for detecting functional damage of cranial nerves, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of medical image processing technologies, and in particular, to a method and an apparatus for detecting functional impairment of cranial nerves, an electronic device, and a storage medium.
Background
Stroke has become the second leading cause of death worldwide. Most surviving patients cannot live independently, and their risk of other neurological sequelae (such as dementia) increases. The sudden, severe, and unpredictable nature of stroke imposes a heavy physiological and psychological burden on patients and their families. Clinical practice has shown that stroke causes neurological deficits; in severe cases, the resulting space-occupying effect and cerebral edema can lead to brain herniation and death. At present, the National Institutes of Health Stroke Scale (NIHSS) is used to evaluate the degree of cranial nerve function impairment; scores range from 0 to 42, where 0 indicates normal cranial nerve function and higher scores indicate more severe impairment. Clinically, NIHSS assessment is usually questionnaire-based and time-consuming, and related studies have shown that even a score of 0 does not mean the patient is free of stroke, so misjudgment can occur. A scheme that automatically detects cranial nerve function impairment from multiple angles would therefore shorten detection time and improve detection accuracy.
Disclosure of Invention
Embodiments of the disclosure enable automatic detection of the degree of cranial nerve function impairment and improve detection accuracy.
According to an aspect of the present disclosure, there is provided a method for detecting brain nerve function damage, including:
acquiring a brain tissue region of a brain perfusion image;
extracting dynamic image features of the brain tissue region;
obtaining, based on the dynamic image features, a first feature for a first target and a second feature for a second target, wherein the first target is whether a stroke has occurred and the second target is an evaluation index of the degree of brain function impairment;
determining whether cranial nerve function is impaired based on the first feature and the second feature.
In some possible embodiments, the extracting of the dynamic image features of the brain tissue region includes:
performing feature extraction on the brain tissue region of the brain perfusion image at each time point, respectively, to obtain per-time-point image features;
for the brain perfusion image, combining the per-time-point image features of the brain tissue region to obtain the dynamic image features of the brain perfusion image;
and/or, before extracting the dynamic image features of the brain tissue region, the method further comprises: performing moving-average smoothing in the time dimension on each pixel point of the brain perfusion image.
In some possible embodiments, the obtaining, based on the dynamic image features, of a first feature for a first target and a second feature for a second target includes:
selecting a first feature corresponding to a first feature item and a second feature corresponding to a second feature item from the dynamic image features;
and/or
Selecting a first sub-feature corresponding to a first feature item and a second sub-feature corresponding to a second feature item from the dynamic image features;
performing dimension reduction processing on the dynamic image features to obtain dimension reduction features;
and obtaining the first feature based on the first sub-feature and the dimension reduction feature, and obtaining the second feature based on the second sub-feature and the dimension reduction feature.
In some possible embodiments, the determining whether cranial nerve function is impaired based on the first feature and the second feature comprises:
if the first feature and the second feature do not satisfy a first condition, obtaining a third feature from the first feature and the second feature, and determining whether cranial nerve function impairment exists using the third feature.
In some possible embodiments, the obtaining a third feature by using the first feature and the second feature, and determining whether the cranial nerve function damage exists by using the third feature includes:
acquiring a first probability corresponding to the first feature and the first target;
acquiring a second probability corresponding to the second feature and the second target;
obtaining the third feature based on the first probability and the second probability;
determining whether the cranial nerve function impairment is present based on the third feature.
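For illustration, the following is a minimal sketch of this probability fusion, assuming scikit-learn-style classifiers; the names clf_stroke and clf_nihss, the input shapes, and the threshold-based decision rule are hypothetical and not prescribed by the disclosure.

```python
import numpy as np

def detect_impairment(first_feature, second_feature, clf_stroke, clf_nihss,
                      threshold=0.5):
    """Fuse the two target-specific branches into a third feature.

    clf_stroke and clf_nihss are assumed pre-trained binary classifiers
    exposing scikit-learn's predict_proba interface.
    """
    # First probability: stroke vs. no stroke, from the first feature
    p1 = clf_stroke.predict_proba(first_feature.reshape(1, -1))[0, 1]
    # Second probability: impaired (NIHSS > 0) vs. unimpaired, from the second feature
    p2 = clf_nihss.predict_proba(second_feature.reshape(1, -1))[0, 1]
    third_feature = np.array([p1, p2])  # third feature built from both probabilities
    # Illustrative decision rule (an assumption): flag impairment if either branch is confident
    return bool(third_feature.max() >= threshold)
```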
In some possible embodiments, before the first feature for the first target and the second feature for the second target are obtained based on the dynamic image features, the method further includes determining an optimal feature combination strategy for extracting the first feature and the second feature, which includes:
acquiring dynamic image characteristics and target variables of a brain perfusion image in a brain perfusion image set, wherein the target variables comprise a first target and a second target;
performing feature processing on the dynamic image features based on a preset feature selection method and at least two feature dimension reduction methods to obtain first selection features and at least two groups of first dimension reduction features;
performing feature combination processing on the first selected feature and the first dimension reduction feature based on a plurality of combination strategies to obtain a plurality of combination features;
and evaluating the combined features by utilizing a classification model and the target variable, and determining a combined strategy corresponding to the combined feature with the highest classification score as the optimal feature combined strategy.
In some possible embodiments, the method further comprises determining the preset feature selection method, which comprises:
based on at least two feature selection methods, screening out a first image feature which meets the selection condition of the feature selection method from the significant features of the dynamic image features;
and evaluating the first image characteristics obtained by each characteristic selection method by using the classification model, and determining the characteristic selection method corresponding to the first image characteristic with the highest classification score as the preset characteristic selection method.
According to a second aspect of the present disclosure, there is provided a cranial nerve function injury detection device, comprising:
the acquisition module is used for acquiring a brain tissue area of the brain perfusion image;
the extraction module is used for extracting the dynamic image characteristics of the brain tissue area;
the characteristic processing module is used for obtaining a first characteristic aiming at a first target and a second characteristic aiming at a second target on the basis of the dynamic image characteristic, wherein the first target is whether cerebral apoplexy occurs, and the second target is an evaluation index of the degree of cerebral function damage;
a determination module for determining whether the cranial nerve function is impaired based on the first and second features.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of the first aspects.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any one of the first aspects.
In the embodiments of the disclosure, corresponding feature information can be extracted from the dynamic image features of the brain perfusion image according to the different targets, and whether cranial nerve function is impaired can be determined by combining the feature information of the different targets. The dynamic image features of the disclosed embodiments can accurately express blood flow information, and evaluating brain function impairment through multi-angle feature information both realizes automatic detection and improves detection accuracy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow diagram of a method of brain nerve functional impairment detection according to an embodiment of the present disclosure;
fig. 2 shows a flow chart of extracting dynamic image features of the brain tissue region according to an embodiment of the present disclosure;
FIG. 3 illustrates a flow chart of a method of determining a feature item corresponding to a target in accordance with an embodiment of the disclosure;
FIG. 4 illustrates a flow diagram for determining an optimal feature combination strategy according to an embodiment of the present disclosure;
fig. 5 shows a block diagram of a brain nerve functional impairment detection apparatus according to an embodiment of the present disclosure;
FIG. 6 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure;
fig. 7 illustrates a block diagram of another electronic device 1900 in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a variety or any combination of at least two of a variety, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
The execution subject of the method for detecting cranial nerve functional impairment may be an image processing apparatus; for example, the method may be executed by a terminal device, a server, or another processing device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the method for detecting cranial nerve functional impairment may be implemented by a processor calling computer-readable instructions stored in a memory.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the principles and logic; for reasons of space, the details are not repeated in the present disclosure.
Fig. 1 shows a flowchart of a method for detecting functional impairment of cranial nerves according to an embodiment of the present disclosure, which includes, as shown in fig. 1:
S10: acquiring a brain tissue region of a brain perfusion image;
In some possible embodiments, the brain perfusion image may be at least one of magnetic resonance perfusion-weighted imaging (PWI), computed tomography perfusion imaging (CTP), and arterial spin labeling perfusion imaging (ASL-MRI). The brain perfusion image includes a plurality of groups of brain images, where each group can be regarded as a brain image scanned within one time range, and the groups can be brain images scanned at consecutive times (i.e., at a plurality of time points). In addition, the brain perfusion image in the embodiments of the present disclosure may be an image of a patient with a brain disease, such as a perfusion image of a patient with ischemic stroke or an image of a patient with a glioma, which is not specifically limited by the present disclosure.
S20: extracting dynamic image characteristics of the brain tissue area;
In some possible embodiments, the brain perfusion images are acquired at a plurality of time points in a time series, so image features can be extracted separately from the brain perfusion image at each time point and combined, based on the time information, to form the dynamic image features.
S30: obtaining a first feature aiming at a first target and a second feature aiming at a second target based on the dynamic image feature, wherein the first target is whether cerebral apoplexy occurs, and the second target is an evaluation index of the degree of cerebral function damage;
In some possible embodiments, feature processing may be performed separately for the first target and the second target used to detect brain function impairment, yielding the corresponding first and second features. The first and second features can then be used to accurately distinguish, respectively, whether a stroke has occurred and the degree of brain function impairment.
S40: determining whether the cranial nerve function is impaired based on the first and second features.
In some possible embodiments, whether the brain function is damaged or not can be detected from two angles by using the first feature and the second feature respectively, so that the detection accuracy is improved.
Based on the configuration, the embodiment of the present disclosure may determine whether the cranial nerve function is damaged by extracting the dynamic image features of the cerebral perfusion image, extracting corresponding feature information according to different targets, and combining the two types of feature information. The dynamic image characteristics of the embodiment of the disclosure can accurately express blood flow information, and meanwhile, the condition of brain function damage is evaluated through multi-angle characteristic information, so that the detection precision is improved while automatic evaluation is realized.
The embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. First, embodiments of the present disclosure may acquire a cerebral perfusion image, wherein the manner of acquiring the cerebral perfusion image may include at least one of the following manners:
A1) Directly acquiring brain perfusion images with medical image acquisition equipment. In the embodiments of the present disclosure, the medical image acquisition device may be a nuclear magnetic resonance device, which is not a specific limitation of the present disclosure.
A2) Receiving brain perfusion images transmitted by other electronic devices. The brain perfusion image transmitted by another electronic device may be received in a communication manner, which may include wired and/or wireless communication and is not specifically limited by the present disclosure.
A3 Read the brain perfusion images stored in the database; the embodiment of the present disclosure may read the brain perfusion image stored locally or in the server according to the received data reading instruction, so as to obtain the brain perfusion image, which is not specifically limited by the present disclosure.
It should be noted that the brain perfusion images in the embodiments of the present disclosure may be perfusion images acquired by the same equipment or different equipment. Persons skilled in the relevant art can select corresponding equipment according to requirements, and the equipment is not particularly limited herein.
Further, embodiments of the present disclosure may perform a bone removal (skull-stripping) process on the acquired brain perfusion image to determine the brain tissue region in the image. The disclosed embodiments analyze features of the brain tissue region, so the location of the brain tissue region in the brain perfusion image is extracted first; the brain tissue region in the disclosed embodiments may include gray matter, white matter, and cerebrospinal fluid. FSL software may be used to perform the bone removal process and obtain the location information of the brain tissue region. Bone removal prevents skull pixels in the brain perfusion image from entering subsequent processing and improves the accuracy of feature extraction.
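As a concrete illustration of this step, skull stripping is commonly performed with FSL's BET (Brain Extraction Tool); the sketch below calls it through subprocess. The file names are placeholders, and this is only one possible way of invoking FSL, not the disclosure's required implementation.

```python
import subprocess

def strip_skull(perfusion_volume: str = "perfusion_t0.nii.gz",
                out_prefix: str = "brain") -> str:
    # bet <input> <output> -m : extracts the brain and also writes a binary
    # brain mask named <output>_mask.nii.gz (the brain tissue region location)
    subprocess.run(["bet", perfusion_volume, out_prefix, "-m"], check=True)
    return f"{out_prefix}_mask.nii.gz"
```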
In the case of obtaining the brain tissue region, the brain perfusion image may be preprocessed, or the subsequent processing may be directly performed according to the determined brain tissue region in the brain perfusion image.
Wherein the preprocessing of the brain perfusion image comprises at least one of the following modes: performing time-series registration processing on the brain perfusion images; and performing smoothing treatment on the brain perfusion image in a time sequence.
Registering the brain perfusion images in the time series includes: performing rigid registration on the brain images at multiple time points in the brain perfusion image, so as to eliminate motion offsets of the examined subject caused by movement during image acquisition.
The smoothing process on the time sequence of the brain perfusion image includes: obtaining the gray values of the same pixel point in the brain tissue at a plurality of time points to form a time grayscale sequence, and performing smoothing on that sequence. The smoothing may consist of three passes of moving-average smoothing; in the embodiment of the present disclosure a 1×3 moving window may be used to smooth the time grayscale sequence, which is not specifically limited by the present disclosure. Smoothing reduces noise in the brain perfusion image and improves image quality.
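A minimal sketch of this temporal smoothing, assuming the perfusion series is held as a numpy array of shape (t, C, W, H) and interpreting the three moving-average processes as three passes of a 3-point moving average; both assumptions are readings of the text rather than a prescribed implementation.

```python
import numpy as np

def smooth_time_series(volume_4d: np.ndarray, passes: int = 3) -> np.ndarray:
    """Apply a 1x3 moving-average window along the time axis, pixel-wise."""
    smoothed = volume_4d.astype(float)
    for _ in range(passes):
        # replicate the boundary frames so the output keeps all t time points
        padded = np.pad(smoothed, ((1, 1), (0, 0), (0, 0), (0, 0)), mode="edge")
        smoothed = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
    return smoothed
```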
After preprocessing, or directly after obtaining the brain tissue region, the brain tissue region can be further characterized. First, the dynamic image features of the brain perfusion image can be extracted. In the embodiments of the present disclosure, the brain perfusion image may include brain images at multiple time points, and fig. 2 shows a flowchart of extracting the dynamic image features of the brain tissue region according to an embodiment of the present disclosure. As shown in fig. 2, extracting the dynamic image features of the brain tissue region includes:
S21: performing feature extraction on the brain tissue region of the brain perfusion image at each time point, respectively, to obtain the per-time-point image features;
S22: for the brain perfusion image, combining the per-time-point image features of the brain tissue region to obtain the dynamic image features of the brain perfusion image;
In the embodiment of the present disclosure, the brain perfusion image may include t groups of brain images corresponding to t time points; t may be an integer greater than 1 and less than or equal to 50, which is not a specific limitation of the present disclosure. Embodiments of the disclosure can perform feature extraction on the brain tissue region in the brain image at each of the t time points.
In one example, the feature extraction process may include: performing at least one image transformation on a brain tissue region, and obtaining an amplification set of the brain tissue region based on the brain tissue region and an image transformation result thereof; and extracting at least one of first order gradient features, shape features, and texture features of any image in the amplification set. Wherein the image transform comprises at least one of a Fourier transform, a Gabor transform, a Gauss-Laplace transform, a wavelet transform, a square root filter, and an exponential function filter. According to the embodiment of the disclosure, the original brain tissue region and the result after image transformation can be used for forming the amplification set, and feature extraction is performed on each brain tissue region in the amplification set so as to obtain richer image features. The extracted first-order gradient features may include features describing a single pixel or a single voxel, such as a gray mean, a maximum gray value, a minimum gray value, a variance, a percentile (14 and 15), and the like of the brain tissue region, skewness describing the shape of the data intensity distribution, kurtosis features, histogram quotient and energy information, and the like. Wherein the skewness reflects asymmetry of the data distribution curve to the left (negative bias, below the mean) or to the right (positive bias, above the mean); and the kurtosis reflects the tailing of the data distribution relative to the gaussian distribution due to outliers. Shape characteristics may include surface and volume based characteristics such as compactness and sphericity characteristics. The texture features may include an Absolute Gradient (Absolute Gradient), a gray level co-occurrence matrix (GLCM), a gray level run matrix (GLRLM), a gray level size region matrix (GLSZM), and a Gray Level Dependency Matrix (GLDM).
In some possible embodiments, the feature extraction process may be performed in an extracted imagery omics manner, so as to obtain temporal image features corresponding to the brain tissue regions at each time. By combining the time image features at each time, the dynamic image features of the brain tissue region are obtained. The embodiment of the present disclosure can calculate 65800 moving image features (3D brain images × 1316 image features at 50 moments). These dynamic image features are divided into 9 groups: (1) shape feature × 50=700, (2) first-order gradient feature: 18 × 50= 900), (3) the gray level co-occurrence matrix GLCM (24 × 50= 1200), (4) the gray level run length matrix GLRLM (16 × 50= 800), (5) the gray scale size area matrix GLSZM (16 × 50= 800), (6) the adjacent gray tone differing matrix NGTDM (5 × 50= 250), (7) the gray level correlation matrix GLDM (14 × 50= 700), (8) the laplace transform (465 × 50= 23250), (9) the wavelet transform (744 × 50= 37200). In the embodiment of the present disclosure, each time image feature may be defined as a combination of the name of the omics feature itself and the time value of the 3D brain image, where t is the corresponding time value of the 3D image. For example, "Log-sigma-1-0-mm-3d firstorder ski patent _skewness _17" indicates the time-of-day image feature "Log-sigma-1-0-mm-3d firstorder ski patent _17" at the 17 th time in DSC-PWI brain perfusion images.
In addition, when the feature extraction processing of the brain tissue region is executed, the time sequence can be optimized, the time value is reduced, and the operation efficiency is improved. Specifically, the time t of the brain perfusion image may be divided into three groups, such as a first group being a preparation phase, a second group being a reaction phase, and a third group being a recovery phase. The preparation stage is a stage in which the brain image is not affected by the contrast agent in the perfusion imaging process, the reaction stage is a stage in which the contrast agent flows through the blood vessel so that the gray value of the pixel point is changed, and the recovery stage is a process in which the gray value of the contrast agent leaving the pixel point is recovered to the initial state. In the embodiment of the disclosure, t is 50 moments, wherein the first group is 1-10 moments, the second group is 11-30 moments, and the third group is 31-50 moments. The foregoing is merely exemplary of the present disclosure and is not intended to be limiting. Under the condition of obtaining the three groups of moments, mean processing may be performed on the brain images corresponding to the first group of moments, and mean processing may be performed on the brain images corresponding to the third group of moments, and the brain images obtained by the first group of average processing, the brain images corresponding to the second group of moments, and the brain images obtained by the third group of mean processing are used as the new brain perfusion images for the feature extraction processing. Therefore, on the premise of ensuring comprehensive information, the calculation amount is reduced, and the feature extraction efficiency is improved.
The above-described embodiment extracts image features from the brain tissue region of the three-dimensional brain image at each time point, obtaining dynamic image features at different times from the perspective of multi-time 3D images.
In addition, in other embodiments of the present disclosure, the characteristics of each layer of brain image in the brain perfusion image at the time t may be analyzed as a whole, so as to obtain the dynamic image characteristics. Wherein the extracting the moving image feature of the brain tissue region based on the plurality of times may further include: generating first brain images based on the brain images of the same layer at different times, wherein the number of the first brain images is the same as that of the brain images, and the number of the first brain images is the same as that of the time; performing feature extraction processing on the brain tissue area in the first brain image to obtain layer image features; and for the cerebral perfusion image, obtaining dynamic image characteristics of the cerebral perfusion image based on the combination of the layer image characteristics of the cerebral tissue area.
The brain images in the brain perfusion image of the embodiments of the present disclosure are 3D images of the same dimensions, each comprising multiple layers. A 3D image can be viewed along three directions, such as the coronal, sagittal, and transverse planes, and each layer image in any direction can be an object of feature extraction in the embodiments of the present disclosure.
In one example, the dimensions of a brain perfusion image may be represented as t C W H, where t represents the number of moments, C represents the number of layers of the brain image, and W and H represent the width and height of the brain image, respectively. In the embodiment of the disclosure, the brain images at t moments are extracted according to the sequence from the first layer to the C layer, and a first brain image is formed. Each first brain image corresponds to the number of layers of the brain images, the number of the first brain images is the same as the number of layers of the brain images, and the number of the layers of the first brain images is the same as the number of moments. Thus, the first brain image obtained for each brain perfusion image has dimensions t × W × H and a number C.
In the case of obtaining the first brain image, the feature extraction process may be performed on the brain tissue region in the first brain image, obtaining the layer image feature. The feature extraction processing is the same as the configuration of the above embodiment, and includes: performing at least one image transformation on a brain tissue region, and obtaining an amplification set of the brain tissue region based on the brain tissue region and an image transformation result thereof; and extracting at least one of first order gradient features, shape features, and texture features of any image in the amplification set. The first brain image is obtained for each layer, corresponding layer image features can be obtained after feature extraction processing of the brain tissue region is executed, and the layer image features of each layer are combined to obtain dynamic image features. In the case where the number of layers of the brain image is 20, the number of moving image features may be 26320 (1316 × 20), but is not a specific limitation of the present disclosure.
Similarly, in the embodiments of the present disclosure, the number of time points may be optimized and reduced before the layer image features are extracted.
Based on the above configuration, the embodiments of the disclosure can construct three-dimensional images from the time perspective and extract layer image features, further enriching the extracted dynamic image features. In other embodiments, the dynamic image features may be a combination of the two types of dynamic image features described above, which is not a specific limitation of the present disclosure.
With rich dynamic image features obtained, feature processing for the two types of target states can then be performed, for example obtaining the first feature corresponding to the first target and the second feature corresponding to the second target. Embodiments of the disclosure may first determine the first feature item of the first feature and the second feature item of the second feature, and then select the corresponding first and second features from the dynamic image features.
In some embodiments, the first feature item and the second feature item may be selected according to a multi-level feature selection strategy. Fig. 3 shows a flow chart of a method for determining a feature item corresponding to a target according to an embodiment of the present disclosure. As shown in fig. 3, selecting a feature item corresponding to a target according to a multi-level feature selection policy includes:
S100: selecting significant features that satisfy a significance criterion from the dynamic features of the brain perfusion image set;
S200: screening out, based on at least two feature selection methods, first image features that meet the selection condition of each feature selection method from the significant features;
S300: selecting second image features that meet the classification condition from the first image features using at least one classification model, wherein the feature names of the second image features are the corresponding feature items.
In some possible embodiments, the feature item corresponding to the target may be determined by feature analysis of a brain perfusion image comprising a plurality of brain perfusion images. The manner of acquiring the cerebral perfusion images in the cerebral perfusion image set, the manner of determining the cerebral tissue area of the cerebral perfusion images, and the manner of calculating the dynamic characteristics are the same as those described in the above embodiments, and will not be described repeatedly. In the case of obtaining dynamic features of a brain perfusion image, a salient feature may be selected from the dynamic features.
Specifically, normalization may first be performed on the obtained dynamic image features to reduce the influence of each feature's numerical span. In the obtained dynamic image feature matrix, each row represents the feature values of different feature items for the same patient, and each column represents the values of the same feature across different patients. Normalization is performed per feature column; in the embodiment of the disclosure it may be mean-variance normalization, so that each normalized feature has mean 0 and variance 1. In other embodiments, the normalized feature value may be the ratio of each column feature to the maximum value of that column. The normalized dynamic image features can then be used for significant feature extraction.
Embodiments of the disclosure may select significant features by performing a significance analysis on each dynamic image feature across the different states of the first target or the second target: a p-value (hypothesis test value) is calculated between the two groups of state features, and a feature is determined to be significant when its p-value is smaller than a significance threshold. The significance threshold is 0.05, and the p-value calculation methods include the T-test; this is merely illustrative and not a specific limitation of the present disclosure. In addition, embodiments of the disclosure may further calculate correlation coefficients between the dynamic image features and determine a feature to be significant when its correlation coefficient is higher than a coefficient threshold and its p-value is smaller than the significance threshold. The coefficient threshold may be a value greater than 0.6, such as 0.9.
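A minimal sketch of the normalization and significance screen, assuming a (patients × features) numpy matrix and binary state labels; the T-test and the 0.05 threshold follow the text, while the helper name and interface are hypothetical.

```python
import numpy as np
from scipy import stats

def select_significant(features: np.ndarray, labels: np.ndarray,
                       p_threshold: float = 0.05) -> np.ndarray:
    """Return indices of feature columns whose two-state p-value is below threshold."""
    # mean-variance normalization per feature column (mean 0, variance 1)
    z = (features - features.mean(axis=0)) / features.std(axis=0)
    keep = []
    for j in range(z.shape[1]):
        # two-sample T-test between the first-state and second-state groups
        _, p = stats.ttest_ind(z[labels == 1, j], z[labels == 0, j])
        if p < p_threshold:
            keep.append(j)
    return np.array(keep)
```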
In one example, in the case that the first target is whether suffering from stroke or not, the first target may include a first state representing suffering from stroke and a second state not suffering from stroke, the dynamic image features of the brain perfusion images corresponding to the first state and the dynamic image features of the brain perfusion images corresponding to the second state in the brain perfusion image set are grouped, a significance value p (assumed value) of the same feature item between the two groups of features is calculated, and in the case that the p value is smaller than a significance threshold value, the feature is determined to be a significant feature. The embodiment of the disclosure may further calculate a correlation coefficient between the two groups of features, and determine that a feature is a significant feature when the correlation coefficient of the feature is higher than a coefficient threshold and the p value is smaller than a significant threshold. Wherein the coefficient threshold may be a value greater than 0.6.
In another example, in the case that the second target indicates whether the brain nerve function is damaged, the second target may include a first state indicating that the brain nerve function is damaged (NIHSS score is greater than 0) and a second state indicating that the brain nerve function is not damaged (NIHSS score is equal to 0), the dynamic image features of the brain perfusion images corresponding to the first state and the dynamic image features of the brain perfusion images corresponding to the second state in the brain perfusion image set are grouped, a significance value p (hypothesis value) of the same feature item between the two groups of features is calculated, and in the case that the p value is less than a significance threshold value, the feature is determined to be a significant feature. The embodiment of the present disclosure may further calculate a correlation coefficient between the two sets of features, and determine that a feature is a significant feature when the correlation coefficient of the feature is higher than a coefficient threshold and the p value is smaller than a significant threshold. Wherein the coefficient threshold may be a value greater than 0.6.
In the case where a significant feature is obtained, feature selection may be performed using a plurality of feature selection methods, which differ in the selection principle. In one example, the feature selection method may include at least two of an information theory-based method, a similar feature-based method, a statistical feature-based method, and a sparse feature and stream feature-based method. The information theory based methods may include maximum Mutual Information Method (MIM), conditional mutual information maximization method (CMIM), conditional mutual information maximization method (MRMR), best Individual Features (BIF), mutual information selection (MIFS), joint Mutual Information (JMI), etc., and the similar feature based methods may include distance separability measure (Fisher score algorithm), laplacian score (Lap score algorithm), feature weight algorithm (ReliefF), the statistical feature based methods may include Tscore algorithm and fsscore algorithm, the sparse feature and flow feature based methods may include multi-cluster feature selection algorithm (MCFS), minimum absolute shrinkage selection operator (Lasso), alpha algorithm.
Embodiments of the present disclosure may adopt at least two of the above feature selection methods to perform feature selection on the significant features of the first target and/or the second target. For the methods other than the Lasso algorithm, the selection conditions may include: the maximum number of features is below a feature-number threshold and the feature score is above a score threshold, where the feature-number threshold is greater than 10 (set to 20 in this disclosure) and the score threshold may be greater than 0.6 (set to 0.8 in this disclosure). The selection condition of the Lasso algorithm is to select feature items with non-zero feature coefficients. The foregoing is illustrative only and not a specific limitation of the present disclosure.
Under the above configuration, each feature selection method selects one group of first image features from the significant features; n feature selection methods thus generate n groups of first image features.
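As one concrete instance of these selection conditions, the following sketch applies the Lasso criterion (keep features with non-zero coefficients) using scikit-learn; the alpha value is an assumed hyperparameter, not specified by the disclosure.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_select(features: np.ndarray, labels: np.ndarray,
                 alpha: float = 0.01) -> np.ndarray:
    """Return indices of feature columns retained by the Lasso selection condition."""
    model = Lasso(alpha=alpha).fit(features, labels)  # alpha is an assumed value
    return np.flatnonzero(model.coef_)  # non-zero coefficients survive
```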
Under the condition of obtaining the first image features, at least one classification model can be further utilized to select second image features meeting classification conditions from the first image features. The embodiment of the present disclosure may perform the above process in two ways.
In some possible embodiments, the present disclosure may combine the first image features obtained by the feature selection methods to obtain all the first image features, and perform classification of the first state and the second state in the first target and/or classification of the first state and the second state in the second target based on all the first image features by using at least one classification model, and determine the first image features meeting the classification condition under the first target classification and the second target classification as the second image features.
Specifically, embodiments of the present disclosure may obtain the importance of each first image feature using a classification model and rank the first image features by importance. Obtaining the importance of each first image feature using the classification model may include: inputting each first image feature independently into the classification model and performing cross-validation to obtain the model's indexes, which include at least two of AUC (area under the ROC curve), precision, accuracy, recall, and F1; the average of these indexes is taken as the importance of the feature. When multiple classification models are included, the importance values from the individual models may be averaged to obtain the final importance. Once the importance of each first image feature is obtained, the first image features may be ranked from high to low by importance, and a preset number of first image features with the highest importance, or the first image features with importance above an importance threshold, may be taken as the second image features. The preset number may be a value greater than 5, and the importance threshold may be a value greater than 0.6, neither being a specific limitation of the present disclosure.
In other possible implementations, the embodiments of the disclosure may perform feature selection on each set of first image features obtained by each feature selection method, and select a set of first image features that perform the best performance as the second image features. Specifically, classification of the first state and the second state may be performed based on respective sets of third image features using at least one classification model, and one or more sets of third image features satisfying a classification condition may be determined as the second image features. Wherein the score of each group of first image features may be calculated based on the performance of each group of third image features on the classification model; and determining the first image features meeting the classification conditions as second image features based on the scores. The set of first image features can be independently input into the classification model, ten-fold cross validation is performed by using the classification model to obtain indexes of the classification model, the indexes comprise at least two of AUC (area under ROC curve), precision, accuracy, real and F1, and the average value of each index is used as the score of the set of first image features. When a plurality of classification models are included, the scores corresponding to the classification models may be subjected to averaging processing to obtain a final score. When the scores of the groups of first image features are obtained, the groups of first image features may be ranked from high to low according to the scores, wherein the group of first image features with the highest score may be used as the second image features, or the first image features with the score higher than the score threshold may be used as the second image features. The score threshold may be a numerical value greater than 0.6, but is not a specific limitation of the present disclosure.
The classification models of the disclosed embodiments may include machine learning models based on different classification strategies, such as one or more of a support vector machine (SVM) based on nonlinear relationships, a decision tree model, a random forest model, an Adaboost model, a neural network model, a k-nearest neighbor model (KNN), a logistic regression model (LR), a linear discriminant analysis model (LDA), a gradient boosting classification model (GBDT), and a Gaussian naive Bayes model (NB).
In addition, the embodiment of the disclosure may obtain the score of the feature selection method by using the score of the first image feature obtained by each feature selection method on each classification model, and optimize the feature selection method with the highest score. Specifically, the score of each group of first image features may be determined as the score of the feature selection method corresponding to the group of first image features, or may be determined as the score of the feature selection method corresponding to the group of first image features by using the average value of the importance degrees of the first image features in each group.
Based on this configuration, the embodiments of the present disclosure use a multi-level feature selection strategy, merging selection methods with different selection principles, to screen out second image features that sharply distinguish the first and second states of the first target or the second target, thereby improving feature selection accuracy.
The feature names of the second image features are the determined feature items. With the above configuration, the second image features that distinguish the first and second states of the first target (the first feature items) and the second image features that distinguish the first and second states of the second target (the second feature items) can be obtained.
In other embodiments of the present disclosure, feature items may also be determined in combination with a feature selection method and a feature dimension reduction method. The embodiment of the disclosure can determine the optimal feature combination strategy of the feature selection method and the feature dimension reduction method, and obtain the corresponding first feature and second feature based on the optimal feature combination strategy.
Fig. 4 illustrates a flow diagram for determining an optimal feature combination strategy according to an embodiment of the present disclosure. Before obtaining a first feature for a first target and a second feature for a second target based on the dynamic image feature, determining an optimal feature combination strategy for extracting the first feature and the second feature, which includes:
S1000: performing feature processing on the dynamic image features of the brain perfusion image set based on a preset feature selection method and at least two feature dimension reduction methods to obtain first selected features and at least two groups of first dimension-reduction features;
S2000: performing feature combination processing on the first selected features and the first dimension-reduction features based on multiple combination strategies to obtain multiple combined features;
S3000: evaluating the combined features using a classification model and the target variable, and determining the combination strategy corresponding to the combined features with the highest classification score as the optimal feature combination strategy.
In some embodiments, the preset feature selection method may be preset specific information, or may be an optimal feature selection method determined by comparing a plurality of selection methods. Based on at least two feature selection methods, screening out a first image feature which meets the selection condition of the feature selection method from the significant features of the dynamic image features; and evaluating the first image characteristics obtained by each characteristic selection method by using the classification model, and determining the characteristic selection method corresponding to the first image characteristic with the highest classification score as the preset characteristic selection method. The process of determining the preset feature selection method in the embodiments of the present disclosure is as described in the above embodiments, and will not be described repeatedly herein.
In addition, for the feature dimension reduction method, the embodiments of the present disclosure may include various feature dimension reduction methods, including a linear dimension reduction method and a nonlinear dimension reduction method. Wherein the linear dimensionality reduction method comprises a principal component analysis algorithm (PCA), an Independent Component Analysis (ICA); the nonlinear dimensionality reduction method comprises a T-distribution neighborhood embedding algorithm (T-SNE), an equidistant feature mapping (ISOMAP) and a Unified Manifold Approximation and Projection (UMAP). And respectively carrying out dimensionality reduction treatment on the obtained significant features by using the dimensionality reduction mode to obtain corresponding first dimensionality reduction features. In the embodiment of the present disclosure, the feature number after dimension reduction is set to 10, and in other embodiments, the feature number may also be set to other values, which is not specifically limited by the present disclosure.
Given the first selected features chosen by the preset feature selection method and the first dimension-reduction features obtained by each dimension reduction method, multiple feature combination strategies can be used to perform feature combination and obtain corresponding combined features, in at least one of the following ways:
B1) In some possible embodiments, the first selection feature may be combined with each group of first dimension reduction features to obtain a plurality of combined features. For example, if the first selection feature is denoted F-select and the first dimension reduction features are denoted {F1, F2, …, Fk}, where k is the number of feature dimension reduction algorithms, the resulting combined features are concat(F-select, F1), concat(F-select, F2), …, concat(F-select, Fk), where concat denotes the concatenation operation.
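A sketch of strategy B1 under the same assumptions, with F_select the first selection feature matrix and reduced the dictionary returned by reduce_features above; concatenation is along the feature axis.

import numpy as np

def combine_b1(F_select, reduced):
    # concat(F-select, Fi) for each dimension reduction method i.
    return {f"select+{name}": np.concatenate([F_select, F_i], axis=1)
            for name, F_i in reduced.items()}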
B2) Feature selection may be performed on each group of first dimension reduction features using the preset feature selection method to obtain corresponding second dimension reduction features, and the first selection feature may then be combined with each second dimension reduction feature to obtain a plurality of combined features. That is, the embodiment of the present disclosure may further apply feature selection to the first dimension reduction features obtained by each dimension reduction algorithm, yielding second dimension reduction features matched with each target. The set of second dimension reduction features formed by the k dimension reduction methods may be denoted {F'1, F'2, …, F'k}, and the resulting combined features are concat(F-select, F'1), concat(F-select, F'2), …, concat(F-select, F'k), where concat denotes the concatenation operation. In this way, the feature information in the dimension reduction results is effectively extracted while the number of features is reduced, which improves both the accuracy and the speed of the computation.
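A sketch of strategy B2: the preset selector is re-fitted on each group of first dimension reduction features to produce the second dimension reduction features F'1, …, F'k before concatenation; selector and y are assumed to be the selector object and target from the earlier sketches.

import numpy as np

def combine_b2(F_select, reduced, selector, y):
    combos = {}
    for name, F_i in reduced.items():
        F_i_sel = selector.fit_transform(F_i, y)  # second dimension reduction feature
        combos[f"select+{name}_selected"] = np.concatenate(
            [F_select, F_i_sel], axis=1)
    return combos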
B3) The classification model may be used to evaluate the classification score of each group of first dimension reduction features for the target, and a preset number of the highest-scoring groups may be combined with the first selection feature to obtain a plurality of combined features. That is, the embodiment of the present disclosure may evaluate the first dimension reduction features obtained by each dimension reduction algorithm with the classification model. In one example, where the target is the first target of whether a stroke has occurred, the classification model may be used to classify the state of the target based on the first dimension reduction features; cross-validation is performed with the classification model to obtain its indexes, which include at least two of AUC (area under the ROC curve), precision, accuracy, recall, and F1, and the average of these indexes is taken as the classification score of that group of features. When a plurality of classification models are used, the classification scores of the respective models may be averaged to obtain the final classification score. With the classification score of each group of first dimension reduction features obtained, the groups may be ranked from high to low, and a preset number of the highest-scoring groups may be combined with the first selection feature to obtain the combined features. The preset number may be 1 or a value greater than 1, which the present disclosure does not specifically limit. In addition, the target may also be a first state (NIHSS score greater than 0) indicating impaired nerve function and a second state (NIHSS score equal to 0) indicating unimpaired nerve function.
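The classification score described here can be sketched as follows, assuming scikit-learn: ten-fold cross-validation over AUC, precision, accuracy, recall, and F1, the metric means averaged into one score, and the scores of several classifiers averaged again when more than one model is used.

import numpy as np
from sklearn.model_selection import cross_validate

METRICS = ["roc_auc", "precision", "accuracy", "recall", "f1"]

def classification_score(model, X, y, cv=10):
    results = cross_validate(model, X, y, cv=cv, scoring=METRICS)
    return np.mean([results[f"test_{m}"].mean() for m in METRICS])

def ensemble_score(models, X, y, cv=10):
    # Final score when a plurality of classification models are used.
    return np.mean([classification_score(m, X, y, cv) for m in models])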
With the combined features corresponding to the different feature combination strategies obtained, each combined feature can be evaluated using the classification model. The combined features of the embodiment of the present disclosure may be denoted concat1, concat2, …, concatm, where m is the number of combined features. As above, ten-fold cross-validation is performed on each combined feature using the classification model to obtain its indexes, which include at least two of AUC (area under the ROC curve), precision, accuracy, recall, and F1; the average of these indexes is taken as the classification score of that combined feature, and when a plurality of classification models are used, the scores of the respective models are averaged to obtain the final classification score. With the classification score of each combined feature obtained, the combined features may be ranked from high to low: the combined feature with the highest classification score is determined as the optimal combined feature, and the corresponding feature combination strategy is the optimal feature combination strategy. For example, the optimal combination strategy obtained in the embodiment of the present disclosure may be Lasso+PCA_Lasso, meaning that Lasso is the preset feature selection method and PCA is the dimension reduction algorithm: the first selection feature is obtained with Lasso, the first dimension reduction feature is obtained with PCA, the second dimension reduction feature PCA_Lasso is further selected from the first dimension reduction feature using the preset feature selection method, and the first selection feature and the second dimension reduction feature are combined to obtain the optimal combined feature; the corresponding Lasso+PCA_Lasso combination is the optimal combination strategy.
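Continuing the sketch, the combined features concat1, …, concatm can be ranked with ensemble_score from the previous snippet and the highest-scoring strategy, e.g. Lasso+PCA_Lasso, retained; pick_best_strategy is an illustrative name.

def pick_best_strategy(combos, models, y, cv=10):
    # combos maps a strategy name to its combined feature matrix.
    scores = {name: ensemble_score(models, X_c, y, cv)
              for name, X_c in combos.items()}
    best = max(scores, key=scores.get)
    return best, scores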
In some embodiments, the classification model of embodiments of the present disclosure may include machine learning models based on different classification strategies, such as one or more of a support vector machine model (SVM) based on nonlinear relationships, a decision tree model, a random forest model, an Adaboost model, a neural network model, a k-nearest neighbor model (KNN), a logistic regression model (LR), a linear discriminant analysis model (LDA), a gradient boosting classification model (GBDT), and a Gaussian naive Bayes model (NB).
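For reference, the listed classifiers map naturally onto their common scikit-learn counterparts; the dictionary below is a sketch, with MLPClassifier standing in for the neural network model.

from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB

classifiers = {
    "SVM": SVC(kernel="rbf", probability=True),  # nonlinear kernel SVM
    "DecisionTree": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(),
    "Adaboost": AdaBoostClassifier(),
    "NeuralNet": MLPClassifier(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "GBDT": GradientBoostingClassifier(),
    "NB": GaussianNB(),
}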
Based on the above configuration, the embodiment of the present disclosure processes the dynamic features extracted from the cerebral perfusion images with a feature selection method and dimension reduction methods to obtain the first selection feature and the first dimension reduction features, combines them under different combination strategies, evaluates each combined feature with the configured classification model, and selects the combined feature that best matches the state target, thereby determining the optimal feature combination strategy for that target. In this way, the optimal dimension reduction features and feature selection combination strategy can be chosen adaptively for different state targets, improving the accuracy of the analysis of brain function impairment and providing better support for clinical analysis.
Correspondingly, once the optimal feature combination strategy is determined, the dynamic features can be processed according to that strategy. The embodiment of the present disclosure may directly select the corresponding first feature and second feature from the dynamic image features using the feature items determined by the optimal feature selection method; that is, obtaining the first feature for the first target and the second feature for the second target based on the dynamic image features includes selecting, from the dynamic image features, the first feature corresponding to the first feature item and the second feature corresponding to the second feature item. Alternatively, the first feature and the second feature may be determined using the optimal feature combination strategy, in at least one of the following manners (a sketch follows the two manners below):
When the optimal feature combination strategy is the combination of the first selection feature obtained by the preset feature selection method and the first dimension reduction feature obtained by the dimension reduction method, the first sub-feature corresponding to the first feature item and the second sub-feature corresponding to the second feature item may be selected directly from the dynamic image features; dimension reduction processing is performed on the dynamic image features to obtain the dimension reduction feature; the first feature is then obtained by combining the first sub-feature with the dimension reduction feature, and the second feature by combining the second sub-feature with the dimension reduction feature.
When the optimal feature combination strategy further applies the preset feature selection method on top of the dimension reduction features before combining, the first sub-feature corresponding to the first feature item and the second sub-feature corresponding to the second feature item may be selected from the dynamic image features; dimension reduction processing is performed on the dynamic image features to obtain the dimension reduction features; the preset feature selection method is used to select, from the dimension reduction features, a first sub-dimension-reduction feature corresponding to the first target and a second sub-dimension-reduction feature corresponding to the second target; the first feature is then obtained by combining the first sub-feature with the first sub-dimension-reduction feature, and the second feature by combining the second sub-feature with the second sub-dimension-reduction feature.
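The two manners above can be sketched in one helper, assuming a Lasso+PCA_Lasso-style strategy with selector, reducer, and optional sub_selector already fitted during strategy determination; build_target_feature is an illustrative name and is called once per target (once with the objects fitted for the first target, once with those fitted for the second target).

import numpy as np

def build_target_feature(dyn_feats, selector, reducer, sub_selector=None):
    sub = selector.transform(dyn_feats)        # first or second sub-feature
    red = reducer.transform(dyn_feats)         # dimension reduction feature
    if sub_selector is not None:               # second manner (e.g. PCA_Lasso)
        red = sub_selector.transform(red)      # sub-dimension-reduction feature
    return np.concatenate([sub, red], axis=1)  # first or second feature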
With the first feature and the second feature obtained, detection of brain function impairment may be performed based on them. Specifically, determining whether the cranial nerve function is impaired based on the first feature and the second feature includes: if the first feature and the second feature do not satisfy a first condition, obtaining a third feature from the first feature and the second feature, and judging whether cranial nerve function impairment exists using the third feature.
In some possible embodiments, the first condition may be that the first feature indicates the presence of a stroke in the brain perfusion image and the second feature indicates the presence of cranial nerve function impairment. The embodiment of the present disclosure may use the classification model to perform state detection of the first target on the first feature to obtain probability values of its two states, and state detection of the second target on the second feature to obtain probability values of its two states, and determine the state with the higher probability value as the state of the corresponding target. For example, for the first target, the first feature is determined to indicate a stroke if the probability of the first state (stroke present) is higher than that of the second state (no stroke), and no stroke otherwise; likewise, the second feature is determined to indicate nerve function impairment if the probability of the first state (impairment present) is higher than that of the second state (no impairment), and no impairment otherwise. When a plurality of classification models are used, the probability values may be the means of the probability values produced by the models.
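A sketch of this two-state decision, assuming binary classifiers fitted on the corresponding feature with class order [absent, present]; probabilities are averaged over the models when several are used.

import numpy as np

def detect_state(models, feature_row):
    # Mean two-state probabilities over all classification models.
    probs = np.mean([m.predict_proba(feature_row.reshape(1, -1))[0]
                     for m in models], axis=0)
    present = probs[1] > probs[0]  # stroke / impairment present if True
    return present, probs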
When the first feature indicates that the brain perfusion image shows a stroke and the second feature indicates that it shows nerve function impairment, the first condition is satisfied and cranial nerve function impairment is determined to exist. Correspondingly, when the first feature indicates no stroke and the second feature indicates no nerve function impairment, cranial nerve function impairment is determined to be absent.
In addition, when the first feature and the second feature give contrary indications, a third feature may be obtained from the first feature and the second feature, and whether the cranial nerve function is impaired may be determined based on the third feature. That is, when the first feature indicates that the brain perfusion image shows a stroke but the second feature indicates no cranial nerve function impairment, or the first feature indicates no stroke but the second feature indicates cranial nerve function impairment, the third feature is obtained from the first and second features, and the determination is made based on the third feature.
In some possible embodiments, obtaining the third feature from the first feature and the second feature and judging whether cranial nerve function impairment exists using the third feature includes: acquiring a first probability corresponding to the first feature and the first target; acquiring a second probability corresponding to the second feature and the second target; obtaining the third feature based on the first probability and the second probability; and judging whether the cranial nerve function impairment exists based on the third feature.
Specifically, the embodiment of the present disclosure may determine the third feature as the mean of the first probability, with which the first feature indicates that the patient has a stroke, and the second probability, with which the second feature indicates cranial nerve impairment, and determine that cranial nerve function impairment exists when the third feature is greater than a judgment threshold; otherwise, it is determined that no cranial nerve function impairment exists.
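This decision rule reduces to a few lines; the 0.5 default for the judgment threshold is an assumption, since the disclosure does not fix its value.

def third_feature_decision(p_stroke, p_impaired, threshold=0.5):
    # Third feature: mean of the stroke and impairment probabilities.
    third = (p_stroke + p_impaired) / 2.0
    return third > threshold, third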
Based on the above configuration, the embodiment of the present disclosure may extract the feature information corresponding to different targets from the dynamic image features of the brain perfusion image, and determine whether the cranial nerve function is impaired by combining the feature information of the different targets. The dynamic image features of the embodiment of the present disclosure accurately express blood flow information, and evaluating brain function impairment through multi-angle feature information not only enables automatic detection but also improves detection precision.
It will be understood by those skilled in the art that, in the above method of the present disclosure, the order in which the steps are written implies neither a strict order of execution nor any limitation on the implementation; the specific order of execution of the steps should be determined by their function and possible internal logic.
In addition, the present disclosure also provides a device for detecting cranial nerve function impairment, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the methods for detecting cranial nerve function impairment provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method section, which are not repeated here.
Fig. 5 shows a block diagram of a brain nerve function impairment detection apparatus according to an embodiment of the present disclosure. As shown in Fig. 5, the apparatus includes:
an obtaining module 10, configured to obtain a brain tissue region of a brain perfusion image;
an extracting module 20, configured to extract a dynamic image feature of the brain tissue region;
a feature processing module 30, configured to obtain a first feature for a first target and a second feature for a second target based on the dynamic image features, where the first target is whether a stroke has occurred, and the second target is an evaluation index of the degree of brain function impairment;
a determining module 40 for determining whether the cranial nerve function is impaired based on the first and second characteristics.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
Embodiments of the present disclosure also provide a computer-readable storage medium, on which computer program instructions are stored, and when executed by a processor, the computer program instructions implement the above method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 6 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 6, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, swipe, and gesture actions on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with it. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each of the front and rear cameras may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; it may also detect a change in the position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 7 illustrates a block diagram of another electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 7, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may further include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server(TM), Mac OS X(TM), Unix(TM), Linux(TM), FreeBSD(TM), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein are not to be interpreted as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry that can execute the computer-readable program instructions, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), implements aspects of the present disclosure by utilizing the state information of the computer-readable program instructions to personalize the electronic circuitry.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A method for detecting impairment of cranial nerve function, comprising:
acquiring a brain tissue area of a brain perfusion image;
extracting dynamic image characteristics of the brain tissue area;
obtaining a first feature for a first target and a second feature for a second target based on the dynamic image features, wherein the first target is whether a stroke has occurred, and the second target is an evaluation index of the degree of brain function impairment;
determining whether the cranial nerve function is impaired based on the first and second features.
2. The method according to claim 1, wherein the extracting the dynamic image feature of the brain tissue region comprises:
performing feature extraction processing on the brain tissue region of the brain perfusion image at each moment, respectively, to obtain image features at each moment;
for the brain perfusion image, combining the image features of the brain tissue region at the respective moments to obtain the dynamic image features of the brain perfusion image;
and/or, before extracting the dynamic image features of the brain tissue region, the method further comprises: performing sliding smoothing processing on each pixel point of the brain perfusion image in the time dimension.
3. The method according to claim 1 or 2, wherein the obtaining a first feature for a first target and a second feature for a second target based on the dynamic image features comprises:
selecting a first feature corresponding to a first feature item and a second feature corresponding to a second feature item from the dynamic image features;
and/or
Selecting a first sub-feature corresponding to a first feature item and a second sub-feature corresponding to a second feature item from the dynamic image features;
performing dimension reduction processing on the dynamic image features to obtain dimension reduction features;
and obtaining the first feature based on the first sub-feature and the dimension reduction feature, and obtaining the second feature based on the second sub-feature and the dimension reduction feature.
4. The method of any one of claims 1-3, wherein determining whether the cranial nerve function is impaired based on the first and second features comprises:
and if the first characteristic and the second characteristic do not meet the first condition, obtaining a third characteristic by using the first characteristic and the second characteristic, and judging whether the cranial nerve function damage exists or not by using the third characteristic.
5. The method of claim 4, wherein using the first and second features to obtain a third feature and using the third feature to determine whether the impairment of cranial nerve function exists comprises:
acquiring a first probability corresponding to the first feature and the first target;
acquiring a second probability corresponding to the second feature and the second target;
obtaining the third feature based on the first probability and the second probability;
determining whether the cranial nerve function impairment is present based on the third feature.
6. The method according to any one of claims 1-5, wherein, before the first feature for the first target and the second feature for the second target are obtained based on the dynamic image features, the method further comprises determining an optimal feature combination strategy for extracting the first feature and the second feature, which comprises:
acquiring dynamic image characteristics and target variables of a brain perfusion image in a brain perfusion image set, wherein the target variables comprise a first target and a second target;
performing feature processing on the dynamic image features based on a preset feature selection method and at least two feature dimension reduction methods to obtain first selection features and at least two groups of first dimension reduction features;
performing feature combination processing on the first selected feature and the first dimension reduction feature based on a plurality of combination strategies to obtain a plurality of combination features;
and evaluating the combined features by utilizing a classification model and the target variable, and determining a combined strategy corresponding to the combined feature with the highest classification score as the optimal feature combined strategy.
7. The method of claim 6, further comprising determining the preset feature selection method, which comprises:
based on at least two feature selection methods, screening out a first image feature which meets the selection condition of the feature selection method from the significant features of the dynamic image features;
and evaluating the first image characteristics obtained by each characteristic selection method by using the classification model, and determining the characteristic selection method corresponding to the first image characteristic with the highest classification score as the preset characteristic selection method.
8. A cranial nerve function damage detection device, comprising:
the acquisition module is used for acquiring a brain tissue area of the brain perfusion image;
the extraction module is used for extracting the dynamic image characteristics of the brain tissue area;
the feature processing module is used for obtaining a first feature for a first target and a second feature for a second target based on the dynamic image features, wherein the first target is whether a stroke has occurred, and the second target is an evaluation index of the degree of brain function impairment;
a determination module for determining whether the cranial nerve function is impaired based on the first and second features.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1-7.
10. A computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any of claims 1-7.
CN202210858288.0A 2022-07-20 2022-07-20 Method and device for detecting functional damage of cranial nerves, electronic equipment and storage medium Pending CN115393274A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210858288.0A 2022-07-20 2022-07-20 Method and device for detecting functional damage of cranial nerves, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115393274A 2022-11-25

Family ID: 84117167

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination