CN111899253A - Method and device for judging and analyzing abnormality of fetal craniocerebral section images - Google Patents

Method and device for judging and analyzing abnormality of fetal craniocerebral section images

Info

Publication number
CN111899253A
Authority
CN
China
Prior art keywords
craniocerebral
image
section
preprocessed
result
Prior art date
Legal status
Granted
Application number
CN202010788465.3A
Other languages
Chinese (zh)
Other versions
CN111899253B (en)
Inventor
李胜利
李肯立
赵蕾
谭光华
朱宁波
马来发
文华轩
Current Assignee
Shenzhen Lanxiang Zhiying Technology Co ltd
Original Assignee
Changsha Datang Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Changsha Datang Information Technology Co ltd
Priority to CN202010788465.3A
Publication of CN111899253A
Application granted
Publication of CN111899253B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0866 Detecting organic movements or changes, e.g. tumours, cysts, swellings involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30016 Brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Radiology & Medical Imaging (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Image Analysis (AREA)
  • Pathology (AREA)
  • Pregnancy & Childbirth (AREA)
  • Multimedia (AREA)
  • Gynecology & Obstetrics (AREA)
  • Computational Linguistics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Public Health (AREA)
  • Evolutionary Biology (AREA)
  • Animal Behavior & Ethology (AREA)

Abstract

The application relates to a method and a device for judging and analyzing abnormalities in fetal craniocerebral section images, wherein the method comprises the following steps: acquiring a craniocerebral section image dataset; preprocessing the craniocerebral section image dataset to obtain each preprocessed section image; inputting each preprocessed section image into an autoencoder determined through training to obtain the image features of the preprocessed section image; and inputting the image features of the preprocessed section images into a classifier determined through training to obtain a qualitative classification result of the craniocerebral section image dataset. Because the classifier determined through training classifies the craniocerebral section image dataset to produce its qualitative classification result, the method can effectively reduce labor and time costs while achieving unified automatic qualitative analysis, and markedly improves the accuracy and repeatability of the analysis results.

Description

Method and device for judging and analyzing abnormality of fetal craniocerebral section images
Technical Field
The application relates to the technical field of image processing, and in particular to a method, an apparatus, a computer device, and a storage medium for judging and analyzing abnormalities in fetal craniocerebral section images.
Background
Fetal ultrasound is the first-line examination method for prenatal diagnosis and birth defect screening. The existing approach to evaluating brain structural abnormalities in fetal ultrasound section images relies mainly on subjective evaluation and objective measurement by a prenatal sonographer, who performs qualitative analysis by judging whether a brain structural abnormality is present in the fetal ultrasound section image.
However, qualitative analysis of fetal craniocerebral section images by doctors generally suffers from the following problem: without a unified evaluation standard, different doctors judge differently whether, for example, the lateral fissure in a fetal ultrasound section image is abnormal, which leads to inconsistent analysis results.
Disclosure of Invention
Therefore, it is necessary to provide a method and an apparatus for analyzing a fetal craniocerebral section image in order to solve the above technical problems.
A method of craniocerebral section image analysis, the method comprising:
acquiring a craniocerebral section image data set;
preprocessing the craniocerebral section image data set to obtain each preprocessed section image;
inputting each preprocessed section image into an autoencoder determined through training to obtain the image features of each preprocessed section image;
and inputting the image characteristics of each preprocessed section image into a classifier determined through training to obtain a qualitative classification result of the craniocerebral section image data set.
In one embodiment, after the preprocessing the craniocerebral section image data set to obtain each preprocessed section image, the method further includes:
segmenting the preprocessed section images based on a segmentation network determined through training to obtain the segmentation result corresponding to each intracranial structure in the preprocessed section images;
and determining the quantitative analysis result of each craniocerebral structure in the craniocerebral section image data set based on the corresponding segmentation result of each craniocerebral structure.
In one embodiment, determining the quantitative analysis result of each intracranial structure in the craniocerebral section image dataset based on the segmentation result corresponding to each structure includes:
fitting an outer-contour polygon for each intracranial structure according to its corresponding segmentation result to obtain the outer-contour polygon fitting result corresponding to each intracranial structure;
and determining the quantitative analysis result of each intracranial structure based on the fitting result of the outer contour polygon.
In one embodiment, the quantitative analysis results comprise perimeter analysis results and area analysis results;
determining the perimeter analysis result and the area analysis result of each intracranial structure based on the outer-contour polygon fitting result includes:
reading the number of contour points in the outer-contour polygon fitting result corresponding to each intracranial structure to obtain the perimeter analysis result of that structure;
and obtaining the number of pixel points enclosed by the outer contour in the outer-contour polygon fitting result corresponding to each intracranial structure to obtain the area analysis result of that structure.
in one embodiment, the quantitative analysis result comprises an angle analysis result;
determining the angle analysis result of each intracranial structure based on the outer-contour polygon fitting result includes:
obtaining the coordinates of each vertex in the outer-contour polygon fitting result, and calculating the included angle between the vectors formed by adjacent vertices based on those coordinates to obtain the angle analysis result.
in one embodiment, the quantitative analysis result comprises a distance analysis result;
determining the distance analysis result of each intracranial structure based on the outer-contour polygon fitting result includes:
obtaining the coordinates of each vertex in the outer-contour polygon fitting result, and calculating the distance between adjacent vertices based on those coordinates to obtain the distance analysis result.
In one embodiment, the determination of the autoencoder comprises the steps of:
acquiring craniocerebral section sample images, wherein the craniocerebral section sample images comprise negative sample images and positive sample images and carry qualitative classification result labels for the intracranial structures;
and training a preset autoencoder based on the craniocerebral section sample images to obtain the autoencoder.
In one embodiment, the classifier is determined by the method comprising the steps of:
inputting the craniocerebral section sample images into the autoencoder to obtain the sample features of the craniocerebral section sample images;
and training a preset classifier based on the sample features to obtain the classifier.
In one embodiment, the determination of the segmentation network includes:
acquiring craniocerebral section sample images, wherein the craniocerebral section sample images carry quantitative analysis result labels for the intracranial structures;
preprocessing the craniocerebral section sample images to obtain the corresponding preprocessed section sample images;
and training a preset segmentation network based on the preprocessed section sample images to obtain the segmentation network.
A craniocerebral sectional image analysis device, the device comprising:
the acquisition module is used for acquiring a craniocerebral section image dataset;
the preprocessing module is used for preprocessing the craniocerebral section image data set to obtain each preprocessed section image;
the feature extraction module is used for inputting each preprocessed section image into an autoencoder determined through training to obtain the image features of each preprocessed section image;
and the classification module is used for inputting the image characteristics of each preprocessed section image into a classifier determined through training to obtain a qualitative classification result of the craniocerebral section image data set.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the above-described craniocerebral sectional image analysis method when the processor executes the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned craniocerebral section image analysis method.
According to the method and device for analyzing fetal craniocerebral section images, the acquired craniocerebral section image dataset is preprocessed, the preprocessed images are input into an autoencoder determined through training to obtain the image features of each preprocessed section image, and those image features are then input into a classifier determined through training to obtain the classification result of the craniocerebral section image dataset. Because the trained models extract the features of the craniocerebral section image dataset and perform qualitative classification on the basis of those image features to obtain its qualitative classification result, labor and time costs can be effectively reduced while unified, standardized automatic qualitative analysis is achieved, and the accuracy and repeatability of the analysis results are markedly improved.
Drawings
FIG. 1 is a schematic flow chart illustrating a method for analyzing a craniocerebral sectional image according to an embodiment;
FIG. 2 is a schematic flow chart of a method for analyzing a craniocerebral sectional image according to another embodiment;
FIG. 3 is a schematic flow chart illustrating the determination of the quantitative analysis result of each intracranial structure in the craniocerebral section image dataset based on the segmentation result corresponding to each structure in one embodiment;
FIG. 4 is a schematic flow chart of a method for analyzing a craniocerebral section image according to an exemplary embodiment;
FIG. 5(1) is a schematic diagram of a cranial vertex transverse section image in one embodiment;
FIG. 5(2) is a schematic diagram showing the measurement of the perimeter and area of the brain parenchyma (the area surrounded by the curved edge) in the cranial vertex transverse section image in one embodiment;
FIG. 5(3) is a schematic diagram illustrating the measurement of the posterior included angle of the parietal-occipital sulcus in the cranial vertex transverse section image in one embodiment;
FIG. 5(4) is a schematic diagram showing the measurement of the maximum depth of the parietal-occipital sulcus (shown by the white dotted line) in the cranial vertex transverse section image in one embodiment;
FIG. 6(1) is a schematic diagram of a transparent compartment horizontal cross-section image in one embodiment;
FIG. 6(2) is a schematic diagram showing the measurement of the perimeter and area of the brain parenchyma (the area surrounded by the curved edge) in the transparent compartment horizontal cross-section image in one embodiment;
FIG. 6(3) is a schematic diagram illustrating the measurement of the perimeter and area of the transparent compartment (the area surrounded by the curved edge) in the transparent compartment horizontal cross-section image in one embodiment;
FIG. 6(4) is a schematic diagram illustrating the measurement of the width at the midpoint of the transparent compartment (shown by the vertical dashed line) in the transparent compartment horizontal cross-section image in one embodiment;
FIG. 6(5) is a schematic diagram illustrating the measurement of the maximum length of the transparent compartment (shown by the horizontal dashed line) in the transparent compartment horizontal cross-section image in one embodiment;
FIG. 6(6) is a schematic diagram illustrating the measurement of the posterior included angle of the parietal-occipital sulcus in the transparent compartment horizontal cross-section image in one embodiment;
FIG. 6(7) is a schematic diagram of the measurement of the maximum depth of the parietal-occipital sulcus (shown by the dashed line) in the transparent compartment horizontal cross-section image in one embodiment;
FIG. 7(1) is a schematic diagram of a thalamus horizontal cross-section image in one embodiment;
FIG. 7(2) is a schematic diagram illustrating the measurement of the perimeter and area of the brain parenchyma (the area surrounded by the curved edge) in the thalamus horizontal cross-section image in one embodiment;
FIG. 7(3) is a schematic diagram illustrating the measurement of the perimeter and area of the thalamus (the area surrounded by the curved edge) in the thalamus horizontal cross-section image in one embodiment;
FIG. 7(4) is a schematic diagram showing the measurement of the perimeter and area of the transparent compartment (the area surrounded by the curved edge) in the thalamus horizontal cross-section image in one embodiment;
FIG. 7(5) is a schematic diagram of the measurement of the maximum transverse diameter of the transparent compartment (shown by the vertical dashed line) in the thalamus horizontal cross-section image in one embodiment;
FIG. 7(6) is a schematic diagram showing the measurement of the lateral fissure lines (shown in the figure as L1, L2, L3, L4, L5) in the thalamus horizontal cross-section image in one embodiment;
FIG. 7(7) is a schematic diagram illustrating the measurement of the lateral fissure angle in the thalamus horizontal cross-section image in one embodiment;
FIG. 7(8) is a schematic diagram showing the measurement of the perimeter and area of the cerebral island (the area surrounded by the curved edge) in the thalamus horizontal cross-section image in one embodiment;
FIG. 8(1) is a schematic representation of a lateral ventricle horizontal cross-sectional image in one embodiment;
FIG. 8(2) is a schematic diagram illustrating the measurement of the perimeter and area of the brain parenchyma (the area surrounded by the curved edge) in the lateral ventricle horizontal cross-sectional image in one embodiment;
FIG. 8(3) is a schematic diagram showing the measurement of the perimeter and area of the thalamus (the area surrounded by the curved edge) in the lateral ventricle horizontal cross-sectional image in one embodiment;
FIG. 8(4) is a schematic diagram showing the measurement of the perimeter and area of the transparent compartment (the area surrounded by the curved edge) in the lateral ventricle horizontal cross-sectional image in one embodiment;
FIG. 8(5) is a schematic diagram illustrating the measurement of the maximum length of the transparent compartment in the lateral ventricles horizontal cross-sectional image (shown in dashed white lines) in one embodiment;
FIG. 8(6) is a schematic diagram showing the measurement of the circumference and area of the posterior horn of the lateral ventricle (the area surrounded by the curved edge) in the horizontal cross-sectional image of the lateral ventricle in one embodiment;
FIG. 8(7) is a schematic diagram illustrating the measurement of the posterior horn width of the lateral ventricle in the lateral ventricle horizontal cross-sectional image (shown by the white dotted line) in one embodiment;
FIG. 8(8) is a schematic diagram illustrating the measurement of the angle of the posterior horn of the lateral ventricle in a horizontal cross-sectional image of the lateral ventricle in one embodiment;
FIG. 8(9) is a schematic diagram showing the measurement of the lateral fissure line in the lateral ventricle horizontal cross-sectional image in one embodiment (L1, L2, L3, L4, L5);
FIG. 8(10) is a schematic diagram illustrating the measurement of lateral fissure angle in a lateral ventricle horizontal cross-sectional image in one embodiment;
FIG. 8(11) is a schematic diagram showing the measurement of the perimeter and area of the cerebral island (the area enclosed by the dashed box) in the lateral ventricle horizontal cross-sectional image in one embodiment;
FIG. 9(1) is a diagram of a horizontal cross-sectional image of the cerebellum in one embodiment;
FIG. 9(2) is a schematic diagram illustrating the measurement of the perimeter and area of the brain parenchyma (the area surrounded by the curved edge) in the horizontal cross-sectional image of the cerebellum in one embodiment;
FIG. 9(3) is a schematic diagram showing the measurement of the circumference and area of the thalamus (region surrounded by the curved edge) in the horizontal cross-sectional image of the cerebellum in one embodiment;
FIG. 9(4) is a schematic diagram showing the measurement of the perimeter and area of the transparent compartment (the area surrounded by the curved edge) in the horizontal cross-sectional image of the cerebellum in one embodiment;
FIG. 9(5) is a schematic diagram illustrating the measurement of the maximum length of the transparent compartment (shown by the white dashed line) in the horizontal cross-sectional image of the cerebellum in one embodiment;
FIG. 9(6) is a diagram illustrating the measurement of the lateral fissure lines (L1, L2, L3, L4, L5) in the horizontal cross-sectional image of the cerebellum in one embodiment;
FIG. 9(7) is a schematic diagram illustrating the measurement of lateral fissure angle in the horizontal cross-sectional image of cerebellum in one embodiment;
FIG. 9(8) is a schematic diagram showing the measurement of the perimeter and area of the cerebellum (the area surrounded by the curved edge) in the horizontal cross-sectional image of the cerebellum in one embodiment;
FIG. 9(9) is a schematic diagram illustrating the measurement of the maximum transverse diameter of the cerebellum (shown by the white dotted line) in the horizontal cross-sectional image of the cerebellum in one embodiment;
FIG. 9(10) is a schematic diagram illustrating the measurement of the perimeter and area of the cerebellar vermis (the area surrounded by the curved edge) in the horizontal cross-sectional image of the cerebellum in one embodiment;
FIG. 9(11) is a schematic diagram showing the measurement of the perimeter and area of the posterior cranial fossa (the area surrounded by the curved edge) in the horizontal cross-sectional image of the cerebellum in one embodiment;
FIG. 9(12) is a schematic diagram of the measurement of the maximum anteroposterior diameter of the posterior cranial fossa (shown by the dashed white line) in the horizontal cross-sectional image of the cerebellum in one embodiment;
FIG. 9(13) is a schematic diagram showing the measurement of the perimeter and area of the cerebral island (the area surrounded by the curved edge) in the horizontal cross-sectional image of the cerebellum in one embodiment;
FIG. 10 is a block diagram showing the structure of a craniocerebral section image analysis apparatus according to an embodiment;
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In an embodiment, as shown in fig. 1, a method for analyzing a craniocerebral section image is provided. This embodiment is illustrated by applying the method to a terminal; it is to be understood that the method may also be applied to a server, or to a system comprising a terminal and a server and implemented through interaction between the terminal and the server. In this embodiment, the method includes steps S110 to S140.
And step S110, acquiring a craniocerebral section image data set.
The section images are ultrasound section images, and the craniocerebral section image dataset comprises two or more craniocerebral section images. In one embodiment, the craniocerebral section image dataset may be input by a user, acquired directly from a connected ultrasound imaging device, or retrieved from a database storing ultrasound images. In one embodiment, the craniocerebral section image dataset comprises five craniocerebral section images: a cranial vertex transverse section image, a transparent compartment horizontal cross-section image, a thalamus horizontal cross-section image, a lateral ventricle horizontal cross-section image, and a cerebellum horizontal cross-section image.
And step S120, preprocessing the craniocerebral section image data set to obtain each preprocessed section image.
The main purposes of image preprocessing are to eliminate irrelevant information from the image, recover useful real information, enhance the detectability of relevant information, and simplify the data as far as possible, thereby improving the reliability of feature extraction, image segmentation, matching, and recognition. In one embodiment, preprocessing the craniocerebral section image dataset to obtain the preprocessed section images comprises: preprocessing each craniocerebral section image separately to obtain each corresponding preprocessed section image. Further, this comprises: cropping each craniocerebral section image and retaining an effective image region of a preset size to obtain each preprocessed section image.
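For illustration only, the cropping step can be sketched in a few lines of Python with OpenCV; the crop rectangle below is a hypothetical placeholder, since the patent does not specify the preset size:

    import cv2

    def preprocess_section_image(path, crop_box=(100, 60, 740, 540)):
        """Crop an ultrasound section image to its effective region.

        crop_box is a hypothetical (x0, y0, x1, y1) rectangle; in practice it
        would be chosen so that annotation text and device information at the
        image borders are discarded and only the central scan area is kept.
        """
        image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        x0, y0, x1, y1 = crop_box
        return image[y0:y1, x0:x1]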
Step S130, inputting each preprocessed section image into an autoencoder determined through training to obtain the image features of each preprocessed section image.
An autoencoder is a type of neural network generally used for dimensionality reduction or feature learning. In this embodiment, the autoencoder is determined by training in advance and can be used to extract image features from the preprocessed section images; each preprocessed section image is input into the trained autoencoder, and the image features output by the autoencoder are the image features of that preprocessed section image.
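The patent does not specify the autoencoder's architecture. As a minimal sketch, assuming a small convolutional autoencoder in PyTorch (all layer sizes here are illustrative assumptions), feature extraction could look like:

    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        # Minimal sketch; the actual architecture used in the patent is unspecified.
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
            )

        def forward(self, x):
            z = self.encoder(x)          # latent feature map
            return self.decoder(z), z

    # Feature extraction: the flattened latent code serves as the image feature.
    model = ConvAutoencoder().eval()
    image = torch.randn(1, 1, 256, 256)  # one preprocessed section image
    with torch.no_grad():
        _, features = model(image)
    features = features.flatten(1)       # shape (1, 64 * 32 * 32)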
Step S140, inputting the image features of each preprocessed section image into a classifier determined through training to obtain a qualitative classification result of the craniocerebral section image dataset.
Classification means learning a classification function or constructing a classification model, i.e. a classifier, on the basis of existing data; the function or model maps data records to one of a given set of classes and can therefore be applied to data prediction. In this embodiment, the classifier is determined by training in advance and can perform qualitative analysis of the intracranial structures based on the image features of the preprocessed section images; the image features extracted by the autoencoder are input into the trained classifier to obtain the qualitative classification result of the craniocerebral section image dataset.
Further, in one embodiment, the classifier is trained to perform qualitative analysis of a specific intracranial structure, so that the qualitative classification result of the craniocerebral section image dataset includes the qualitative classification result of that specific intracranial structure. The specific intracranial structure may be one or more of the intracranial structures contained in the craniocerebral section image dataset and may be specified according to the actual situation. In a specific embodiment, the specific intracranial structures are the lateral fissure and the cerebral island, and the training samples used in training the classifier carry qualitative analysis result labels for these structures; in other embodiments, the specific intracranial structure may be another intracranial structure.
In a specific embodiment, the classifier used to qualitatively classify the image features of the preprocessed section images is a support vector machine (SVM) classifier; the basic model of a binary SVM classifier is a maximum-margin linear classifier in feature space, and its learning strategy of margin maximization can ultimately be cast as solving a convex quadratic programming problem.
In one embodiment, qualitatively classifying the image features of each preprocessed section image with the trained classifier includes: determining, through the classifier and based on the image features, whether an abnormality exists in the specific intracranial structure in the craniocerebral section image dataset. Qualitative analysis of the specific intracranial structure is in effect a judgment of whether that structure is abnormal: if the structure has developed normally, the classifier outputs "normal development" as the qualitative classification result; if it has developed abnormally, the classifier outputs "abnormal development".
In this embodiment, after the image features of each preprocessed section image are extracted, they are input into the classifier, which qualitatively analyzes them to produce the qualitative analysis result of the specific intracranial structure.
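As an illustrative sketch only, assuming a scikit-learn binary SVM fitted on autoencoder features as described later in this text, the qualitative classification step could be:

    import numpy as np
    from sklearn.svm import SVC

    def classify_sections(clf: SVC, features: np.ndarray) -> list[str]:
        """Map autoencoder feature vectors to qualitative labels.

        clf      : a binary SVM fitted on labelled sample features (see the
                   training sketch later in this description).
        features : array of shape (n_images, n_features).
        """
        preds = clf.predict(features)  # 1 = positive (abnormal), 0 = negative
        return ["abnormal development" if p == 1 else "normal development"
                for p in preds]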
According to this method for analyzing craniocerebral section images, the acquired craniocerebral section image dataset is preprocessed, the preprocessed images are input into an autoencoder determined through training to obtain the image features of each preprocessed section image, and those image features are input into a classifier determined through training to obtain the classification result of the craniocerebral section image dataset. Because the trained classifier classifies the craniocerebral section image dataset to produce its qualitative classification result, labor and time costs can be effectively reduced while unified automatic qualitative analysis is achieved, and the accuracy and repeatability of the analysis results are markedly improved.
In one embodiment, as shown in fig. 2, after the craniocerebral section image dataset is preprocessed to obtain each preprocessed section image, the method further includes step S210 and step S220.
Step S210, segmenting the preprocessed section images based on the segmentation network determined through training to obtain the segmentation result corresponding to each intracranial structure in the preprocessed section images.
Image segmentation is a technique and process that divides an image into several specific regions with unique properties and extracts objects of interest. In this embodiment, the segmentation network is trained in advance and can segment each intracranial structure contained in a preprocessed section image; each preprocessed section image is input into the trained segmentation network to obtain the segmentation result corresponding to each intracranial structure in that image. In one embodiment, the segmentation result corresponding to each intracranial structure comprises the contour point coordinates of the structure's position. In one specific embodiment, the intracranial structures comprise: brain parenchyma, parietal-occipital sulcus, transparent compartment, thalamus, lateral fissure, cerebral island, and lateral ventricles.
In one embodiment, a deep learning segmentation network model is used to segment the intracranial structures in each preprocessed section image. In one embodiment, the segmentation network adopts a U-Net model whose network structure comprises, in order: an input layer, a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a third convolution layer, a fourth convolution layer, a third pooling layer, a fifth convolution layer, a sixth convolution layer, a seventh convolution layer, an eighth convolution layer, and a transposed convolution layer, as follows:
the first convolution layer takes a 512 × 512 × 3 pixel matrix as input; after convolution with 3 × 3 kernels, 32 kernels in number, with stride 2 and SAME padding, the output matrix size is 512 × 512 × 32;
the first pooling layer takes the 512 × 512 × 32 matrix as input; through a pooling window of size 3 × 3 with stride 2 in both height and width, its output matrix is 256 × 256 × 32;
the second convolution layer takes a 256 × 256 × 32 matrix as input; through 3 × 3 kernels, 64 in number, with stride 1, it outputs a 256 × 256 × 64 matrix;
the second pooling layer takes the 256 × 256 × 64 matrix as input; through a 3 × 3 pooling window with stride 2 in both height and width, its output is 128 × 128 × 64;
the third convolution layer takes a 128 × 128 × 64 matrix as input; through 3 × 3 kernels, 128 in number, with stride 1 and SAME padding, it outputs a 128 × 128 × 128 matrix;
the fourth convolution layer takes the 128 × 128 × 128 matrix as input; through 3 × 3 kernels, 128 in number, with stride 1 and SAME padding, it outputs a 128 × 128 × 128 matrix;
the third pooling layer takes the 128 × 128 × 128 matrix as input; through a 3 × 3 pooling window with stride 2 in both height and width, it outputs a 64 × 64 × 128 matrix;
the fifth convolution layer takes the 64 × 64 × 128 matrix as input; through 3 × 3 kernels, 256 in number, with stride 2, it outputs a 64 × 64 × 256 matrix;
the sixth convolution layer takes the 64 × 64 × 256 matrix as input; through 3 × 3 kernels, 256 in number, with stride 2, it outputs a 64 × 64 × 256 matrix;
the seventh convolution layer takes the 64 × 64 × 256 matrix as input; a batch normalization layer and a ReLU activation layer follow the convolution, and the output matrix size is 64 × 64 × 512;
the eighth convolution layer takes the 64 × 64 × 512 matrix as input; a batch normalization (BN) layer and a ReLU activation layer follow the convolution, and the output matrix size is 64 × 64 × 512;
finally, the transposed convolution layer takes the 64 × 64 × 512 matrix as input and outputs a 128 × 128 × 256 matrix.
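For orientation only, a compact PyTorch sketch of an encoder-decoder segmentation network of this general U-Net shape follows; it is not a faithful reproduction of the layer list above, whose strides and sizes the patent leaves partly ambiguous, and all sizes are assumptions:

    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        # Two 3x3 convolutions with BatchNorm and ReLU, as in typical U-Net stages.
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
        )

    class MiniUNet(nn.Module):
        """Sketch of a U-Net-style segmentation network (sizes are assumptions)."""
        def __init__(self, n_classes):
            super().__init__()
            self.enc1 = conv_block(3, 32)
            self.enc2 = conv_block(32, 64)
            self.enc3 = conv_block(64, 128)
            self.pool = nn.MaxPool2d(2)
            self.bottleneck = conv_block(128, 256)
            self.up3 = nn.ConvTranspose2d(256, 128, 2, stride=2)
            self.dec3 = conv_block(256, 128)
            self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
            self.dec2 = conv_block(128, 64)
            self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec1 = conv_block(64, 32)
            self.head = nn.Conv2d(32, n_classes, 1)  # one channel per structure

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            e3 = self.enc3(self.pool(e2))
            b = self.bottleneck(self.pool(e3))
            d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
            d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
            return self.head(d1)  # per-pixel logits for each intracranial structure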
In other embodiments, other segmentation networks may be used to segment the intracranial structures in the preprocessed section images.
Step S220, determining the quantitative analysis result of each craniocerebral structure in the craniocerebral section image data set based on the segmentation result corresponding to each craniocerebral structure.
In one embodiment, the quantitative analysis results of each intracranial structure include: at least one of an area analysis result, a perimeter analysis result, a width analysis result, and an angle analysis result of each intracranial structure.
In another embodiment, the quantitative analysis results to be determined for the cranial vertex transverse section image in the craniocerebral section image dataset comprise: brain parenchyma measurement: the area and perimeter of the brain parenchyma; parietal-occipital sulcus measurement: the posterior included angle of the parietal-occipital sulcus, and the maximum depth from the vertex of the parietal-occipital sulcus perpendicular to the brain midline.
In another embodiment, the quantitative analysis results to be determined for the transparent compartment horizontal cross-section image in the craniocerebral section image dataset comprise: brain parenchyma measurement: the area and perimeter of the brain parenchyma; transparent compartment measurement: the area and perimeter of the transparent compartment, the width at its midpoint, and its maximum length; parietal-occipital sulcus measurement: the maximum depth from the vertex of the parietal-occipital sulcus perpendicular to the brain midline, and the posterior included angle of the parietal-occipital sulcus.
In another embodiment, the quantitative analysis results to be determined for the thalamus horizontal cross-section image in the craniocerebral section image dataset comprise: brain parenchyma measurement: the perimeter and area of the brain parenchyma; thalamus measurement: the area and perimeter of the thalamus; transparent compartment measurement: the area and perimeter of the transparent compartment, and its maximum transverse diameter; lateral fissure measurement: the area and perimeter of the lateral fissure; the distance between the two upper vertices of the pi-shaped lateral fissure; the distances from those two vertices along perpendiculars drawn to the brain midline; the distances from those two points along perpendiculars extended downward to the inner edge of the skull; and the angle formed where the brain midline and the extended line through the two upper vertices of the pi-shaped lateral fissure meet; cerebral island measurement: the area and perimeter of the cerebral island.
In another embodiment, the quantitative analysis results to be determined for the lateral ventricle horizontal cross-section image in the craniocerebral section image dataset comprise: brain parenchyma measurement: the area and perimeter of the brain parenchyma; thalamus measurement: the area and perimeter of the thalamus; transparent compartment measurement: the area and perimeter of the transparent compartment, and its maximum transverse diameter; posterior horn of the lateral ventricle measurement: the perimeter and area of the posterior horn of the lateral ventricle, the width of the posterior horn, and the angle between the extension of the tangent to the inner edge of the choroid plexus and the extension of the brain midline; lateral fissure measurement: the same items as for the thalamus horizontal cross-section image (area, perimeter, vertex distances, perpendicular distances, and angle); cerebral island measurement: the area and perimeter of the cerebral island.
In another embodiment, the quantitative analysis results to be determined for the cerebellum horizontal cross-section image in the craniocerebral section image dataset comprise: brain parenchyma measurement: the area and perimeter of the brain parenchyma; thalamus measurement: the area and perimeter of the thalamus; transparent compartment measurement: the area and perimeter of the transparent compartment, and its maximum transverse diameter; lateral fissure measurement: the same items as for the thalamus horizontal cross-section image; cerebral island measurement: the area and perimeter of the cerebral island. It is understood that in other embodiments the quantitative analysis results of each intracranial structure may include other results, and the quantitative analysis results to be output may be selected as required.
Further, in an embodiment, as shown in fig. 3, determining the quantitative analysis result of each intracranial structure in the craniocerebral section image dataset based on the segmentation result corresponding to each structure includes steps S310 and S320:
Step S310, fitting an outer-contour polygon for each intracranial structure according to its corresponding segmentation result to obtain the outer-contour polygon fitting result corresponding to each intracranial structure.
The outer-contour polygon fitting based on the segmentation result of each intracranial structure can be implemented in any suitable way. In a specific embodiment, it is implemented with OpenCV.
In one embodiment, before the outer-contour polygon fitting is performed for each intracranial structure, the method further includes: thresholding each preprocessed section image to obtain the binary image corresponding to each preprocessed section image.
Step S320, determining the quantitative analysis result of each intracranial structure based on the outer-contour polygon fitting result.
Further, in one embodiment, the quantitative analysis results include a perimeter analysis result and an area analysis result; in this embodiment, determining them based on the outer-contour polygon fitting result includes: reading the number of contour points in the outer-contour polygon fitting result corresponding to each intracranial structure to obtain that structure's perimeter analysis result; and obtaining the number of pixel points enclosed by the outer contour in the fitting result to obtain that structure's area analysis result.
In this embodiment, the corresponding outer contour is fitted according to the segmentation result of each intracranial structure, and OpenCV is used to calculate the perimeter of the fitted contour curve and the area of the region it encloses: the perimeter analysis result is the number of points in the contour point set, and the area analysis result is the number of pixel points in the image region enclosed by the outer contour; the final results are in units of pixels.
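A minimal OpenCV sketch of this step follows; the function names are the standard OpenCV API, while the mask input and the polygon-approximation tolerance are assumptions:

    import cv2
    import numpy as np

    def perimeter_and_area(mask: np.ndarray):
        """Compute pixel-unit perimeter and area from a binary segmentation mask.

        mask: uint8 array where the pixels of one intracranial structure are 255.
        """
        # Threshold (a no-op for an already-binary mask) and extract contours.
        _, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        contour = max(contours, key=cv2.contourArea)  # outer contour of structure
        # Polygon fitting of the outer contour (2.0 is an assumed tolerance).
        polygon = cv2.approxPolyDP(contour, 2.0, True)
        # The text counts contour points for perimeter and enclosed pixels for
        # area; cv2.arcLength / cv2.contourArea are the usual OpenCV equivalents
        # (len(contour) and counting filled pixels would match the counting
        # formulation exactly).
        perimeter = cv2.arcLength(contour, True)
        area = cv2.contourArea(contour)
        return polygon, perimeter, area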
In another embodiment, the quantitative analysis results comprise an angle analysis result; in this embodiment, determining the angle analysis result of each intracranial structure based on the outer-contour polygon fitting result includes: obtaining the coordinates of each vertex in the outer-contour polygon fitting result, and calculating the included angle between the vectors formed by adjacent vertices based on those coordinates to obtain the angle analysis result.
In this embodiment, fitting an outer-contour polygon to the segmentation result of each intracranial structure yields the coordinates of each vertex of the corresponding polygon; from the coordinates of two adjacent vertices, the vector they form can be obtained, and the included angle between such vectors can then be calculated, which is the angle analysis result in this embodiment.
In another embodiment, the quantitative analysis results comprise a distance analysis result; in this embodiment, determining the distance analysis result of each intracranial structure based on the outer-contour polygon fitting result includes: obtaining the coordinates of each vertex in the outer-contour polygon fitting result, and calculating the distance between adjacent vertices based on those coordinates to obtain the distance analysis result.
In this embodiment, fitting an outer-contour polygon to the segmentation result of each intracranial structure yields the coordinates of each vertex of the corresponding polygon; the distance between two adjacent vertex coordinates is then the distance analysis result.
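A short sketch of both calculations from the fitted polygon's vertex coordinates, using a hypothetical NumPy helper:

    import numpy as np

    def vertex_angles_and_distances(polygon: np.ndarray):
        """Angles and edge lengths from an outer-contour polygon.

        polygon: array of vertex coordinates with shape (n, 2), e.g. the output
        of cv2.approxPolyDP after reshaping from (n, 1, 2).
        """
        n = len(polygon)
        angles, distances = [], []
        for i in range(n):
            prev_pt = polygon[(i - 1) % n]
            pt = polygon[i]
            next_pt = polygon[(i + 1) % n]
            v1, v2 = prev_pt - pt, next_pt - pt  # vectors to the two neighbours
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            angles.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
            distances.append(float(np.linalg.norm(next_pt - pt)))  # adjacent-vertex distance
        return angles, distances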
In the above embodiments, an outer-contour polygon is fitted to the segmentation result corresponding to each intracranial structure to obtain the outer-contour polygon fitting result of that structure, and the perimeter, area, angle, and distance analysis results are then determined from the fitting result; for the angle and distance analysis results, the specific angles and the distances between two positions (such as a width or a maximum transverse diameter) to be output can be selected according to the actual situation.
In one embodiment, the autoencoder is determined as follows: acquiring craniocerebral section sample images, the craniocerebral section sample images comprising negative sample images and positive sample images and carrying qualitative classification result labels for the intracranial structures; and training a preset autoencoder based on the craniocerebral section sample images to obtain the autoencoder.
A negative sample image is one whose intracranial structures have developed normally; that is, it carries a "normal development" qualitative classification label. A positive sample image is one whose intracranial structures have developed abnormally; that is, it carries an "abnormal development" qualitative classification label. In one embodiment, a negative sample image is one in which the lateral fissure and the cerebral island have developed normally, and a positive sample image is one in which they have developed abnormally. In one embodiment, the craniocerebral section sample images comprise a plurality of sample images, including both negative and positive sample images; in one embodiment, they comprise cranial vertex transverse section sample images, transparent compartment horizontal cross-section sample images, thalamus horizontal cross-section sample images, lateral ventricle horizontal cross-section sample images, and cerebellum horizontal cross-section sample images; furthermore, they include craniocerebral section sample images corresponding to different stages of pregnancy.
Training the preset autoencoder based on the craniocerebral section sample images to obtain the autoencoder can be implemented in any suitable way.
Further, in one embodiment, the classifier is determined as follows: inputting the craniocerebral section sample images into the autoencoder to obtain the sample features of the craniocerebral section sample images; and training a preset classifier based on the sample features to obtain the classifier.
The craniocerebral section sample images are input into the autoencoder determined by training in the above embodiment, and the sample features of the craniocerebral section sample images are thereby extracted. Training the preset classifier based on the sample features can be implemented in any suitable way. In one embodiment, the preset classifier is an untrained binary SVM classifier.
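Putting the two training steps together, a hedged sketch combining the autoencoder sketched earlier with a scikit-learn SVM (array shapes and labels are placeholders):

    import numpy as np
    import torch
    from sklearn.svm import SVC

    # `autoencoder` is assumed to be the trained ConvAutoencoder sketched above;
    # `sample_images` is a (n, 1, H, W) tensor of craniocerebral section samples
    # and `labels` their 0/1 (normal/abnormal) annotations.
    def train_classifier(autoencoder, sample_images: torch.Tensor,
                         labels: np.ndarray) -> SVC:
        autoencoder.eval()
        with torch.no_grad():
            _, z = autoencoder(sample_images)  # latent sample features
        features = z.flatten(1).numpy()
        clf = SVC(kernel="linear")             # binary maximum-margin SVM
        clf.fit(features, labels)
        return clf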
In the above embodiment, a preset autoencoder is trained on the acquired craniocerebral section sample images; the trained autoencoder is used to extract sample features from each craniocerebral section sample image, and the extracted sample features are used to train a preset classifier to obtain the classifier. The autoencoder can then extract features from each preprocessed section image obtained by preprocessing the acquired craniocerebral section image dataset, and the classifier can analyze the image features extracted from each preprocessed section image to obtain the qualitative classification result of the craniocerebral section image dataset. Using trained neural networks to analyze the craniocerebral section image dataset improves analysis efficiency while preserving the accuracy of the analysis results.
In one embodiment, the segmentation network in the above embodiment is determined as follows: acquiring craniocerebral section sample images, the craniocerebral section sample images carrying quantitative analysis result labels for the intracranial structures; preprocessing the craniocerebral section sample images to obtain the corresponding preprocessed section sample images; and training a preset segmentation network based on the preprocessed section sample images to obtain the segmentation network.
In this embodiment, the craniocerebral section sample images carry quantitative analysis result labels for each intracranial structure they contain; in one embodiment, the quantitative analysis result labels include at least one of a perimeter analysis result label, an area analysis result label, an angle analysis result label, and a distance analysis result label. In one embodiment, the craniocerebral section sample images comprise a plurality of sample images; in one embodiment, they comprise cranial vertex transverse section sample images, transparent compartment horizontal cross-section sample images, thalamus horizontal cross-section sample images, lateral ventricle horizontal cross-section sample images, and cerebellum horizontal cross-section sample images; furthermore, they include craniocerebral section sample images corresponding to different stages of pregnancy.
In one embodiment, the craniocerebral section sample images are preprocessed to obtain the corresponding preprocessed section sample images. Further, in one embodiment, preprocessing the craniocerebral section sample images comprises: cropping the craniocerebral section sample images to obtain cropped section sample images; normalizing the cropped section sample images to obtain normalized section sample images; and applying random enhancement to the normalized section sample images to obtain the preprocessed section sample images.
In one embodiment, cropping the craniocerebral section images comprises: cropping each craniocerebral section image to obtain an effective section image region of a preset size. The original ultrasound section image contains interfering information, such as text (e.g. time and device information), that is useless for training; the acquired original craniocerebral section image is therefore cropped to keep only the effective image region, i.e. the cropping operation cuts the original ultrasound section image and retains only the central effective area.
Image normalization is the process of applying a series of standard transformations to convert an image into a fixed standard form, called the normalized image. In another embodiment, normalizing the cropped section images includes normalizing them with a linear function. In a specific embodiment, the normalization processing of the cropped section images comprises operations such as rotation, scaling, translation, cropping, and mirroring.
Data enhancement refers to changing the characteristics of a sample according to prior knowledge under the condition of keeping the label of the sample unchanged, so that a newly generated sample also conforms to or approximately conforms to the real distribution of data. In this embodiment, the normalized section image is subjected to random enhancement processing, so that the number of training samples can be increased, and performance improvement to a certain extent can be obtained.
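A brief sketch of such preprocessing, i.e. linear-function normalization plus label-preserving random enhancement, follows; the parameter ranges are assumptions, not values from the patent:

    import cv2
    import numpy as np

    rng = np.random.default_rng()

    def normalize(image: np.ndarray) -> np.ndarray:
        # Linear-function normalization to the fixed range [0, 1].
        lo, hi = image.min(), image.max()
        return (image.astype(np.float32) - lo) / max(hi - lo, 1e-6)

    def random_augment(image: np.ndarray) -> np.ndarray:
        """Label-preserving random enhancement: small rotation, scale and shift."""
        h, w = image.shape[:2]
        angle = rng.uniform(-10, 10)           # degrees; assumed range
        scale = rng.uniform(0.9, 1.1)
        tx, ty = rng.uniform(-0.05, 0.05, 2) * (w, h)
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
        m[:, 2] += (tx, ty)
        out = cv2.warpAffine(image, m, (w, h), flags=cv2.INTER_LINEAR)
        if rng.random() < 0.5:                 # random horizontal mirror
            out = out[:, ::-1]
        return out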
In a specific embodiment, the segmentation network adopts a U-Net, and the deep learning training of the U-Net segmentation model comprises the following steps:
1. Acquire ultrasound section images of the set fetal region, together with the anatomical structures and position parameters annotated by a physician. The key structures include the skull halo, falx cerebri, brain parenchyma, thalamus, third ventricle, posterior horn of the lateral ventricle, anterior horn of the lateral ventricle, choroid plexus, lateral fissure, cavum septi pellucidi, superior temporal sulcus, inferior temporal sulcus, fornix column, occipital sulcus, corpus callosum, insula (cerebral island), inferior frontal sulcus, cerebellar hemisphere, cerebellar vermis, midbrain, cerebellum, cerebral aqueduct, posterior fossa cistern, and medulla oblongata. Preprocess the acquired fetal ultrasound section data set to obtain a preprocessed fetal ultrasound section data set;
2. Divide the preprocessed fetal ultrasound section data set into a training set, a validation set, and a test set;
3. Perform deep learning training with the preprocessed training, validation, and test sets so that the segmentation network U-Net can accurately segment each structure in a fetal ultrasound section image (a minimal training sketch follows this list);
4. The deep learning segmentation network U-Net outputs the trained segmentation model.
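For concreteness, steps 2-4 could be sketched as follows with PyTorch. The 8:1:1 split, the Adam optimizer, the cross-entropy loss, and the names UNet, NUM_STRUCTURES, EPOCHS, and dataset are all assumptions made for illustration; the application only requires splitting the preprocessed data set and training a U-Net until each structure is segmented accurately.

    import torch
    from torch.utils.data import DataLoader, random_split

    # `dataset` is assumed to yield (preprocessed image, per-pixel structure mask)
    # pairs; `UNet` is any standard U-Net with one output channel per structure.
    n_total = len(dataset)
    n_train, n_val = int(0.8 * n_total), int(0.1 * n_total)
    train_set, val_set, test_set = random_split(
        dataset, [n_train, n_val, n_total - n_train - n_val])

    model = UNet(in_channels=1, num_classes=NUM_STRUCTURES)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = torch.nn.CrossEntropyLoss()             # per-pixel structure classification

    for epoch in range(EPOCHS):
        model.train()
        for images, masks in DataLoader(train_set, batch_size=8, shuffle=True):
            optimizer.zero_grad()
            loss = criterion(model(images), masks)      # masks: (N, H, W) structure indices
            loss.backward()
            optimizer.step()
        # A validation pass over val_set would select which checkpoint to keep;
        # test_set measures the final segmentation accuracy.

    torch.save(model.state_dict(), "unet_segmentation.pt")  # the output segmentation model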
In the above embodiment, after the craniocerebral section sample images are acquired, they are preprocessed before being used to train the neural network. Preprocessing isolates the effective image area of each sample, and through normalization and random image enhancement the number of training samples on a limited data set can be increased by data enhancement techniques, yielding a measurable performance improvement.
In a specific embodiment, the craniocerebral section image analysis method can qualitatively analyze whether the lateral fissure and the insula (cerebral island) have developed normally and quantitatively analyze each intracranial structure to obtain perimeter, area, angle, and distance results. Fig. 4 is a schematic flow chart of the craniocerebral section image analysis method in this embodiment, which comprises the following steps:
(1) Acquire a fetal craniocerebral section image;
(2) Preprocess the fetal craniocerebral section image acquired in step (1) to obtain a preprocessed fetal craniocerebral section image;
(3) Qualitatively and quantitatively analyze the preprocessed fetal craniocerebral section image obtained in step (2) by feeding it, respectively, into the trained classifier and the trained segmentation neural network (U-Net). This yields the category to which the image belongs (lateral fissure and insula normal / lateral fissure and insula abnormal), the segmentation results of the lateral fissure and insula structures it contains, and, from those segmentation results, measurements of parameters such as perimeter, area, and angle. The classifier is trained on craniocerebral section sample images carrying normal/abnormal labels for the lateral fissure and insula; the segmentation network is trained on craniocerebral section sample images labeled with quantitative analysis results for each intracranial structure.
(4) For a fetal craniocerebral section image classified in step (3) as lateral fissure and insula normal, directly output the normal result and measure the perimeter and area of the lateral fissure and insula structures according to the segmentation network's output; for an image classified as lateral fissure and insula abnormal, directly output the abnormal result and obtain measurements of relevant parameters such as area, perimeter, and angle from the segmentation results of the lateral fissure, insula, and other intracranial structures contained in the image (a condensed sketch of this flow follows these steps).
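The condensed sketch of steps (1)-(4) below reuses the preprocess helper sketched earlier; autoencoder, classifier, and unet stand for the trained models and are assumed, for illustration, to be wrappers that accept and return NumPy arrays, while measure_structure and STRUCTURE_NAMES are hypothetical helpers (a measurement sketch appears later in the apparatus description).

    import numpy as np

    STRUCTURE_NAMES = {1: "lateral fissure", 2: "insula"}   # assumed index-to-name map

    def analyze_section(image, autoencoder, classifier, unet):
        """Qualitative plus quantitative analysis of one fetal craniocerebral section."""
        x = preprocess(image, augment=False)                # steps (1)-(2)

        # Step (3a), qualitative branch: autoencoder features -> class label.
        features = autoencoder.encode(x[None, None])        # assumed encode() method
        label = classifier.predict(features)[0]             # 0: normal, 1: abnormal

        # Step (3b), quantitative branch: per-structure segmentation.
        logits = unet(x[None, None])                        # (1, C, H, W) class scores
        seg = np.argmax(logits, axis=1)[0]                  # (H, W) structure indices

        # Step (4): measurements derived from each structure's segmentation result.
        measurements = {name: measure_structure(seg == idx)
                        for idx, name in STRUCTURE_NAMES.items()}
        verdict = ("lateral fissure and insula normal" if label == 0
                   else "lateral fissure and insula abnormal")
        return verdict, measurements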
The images shown in Figs. 5-9 are schematic representations of images in a craniocerebral section image data set in some embodiments.
In the craniocerebral section image analysis method provided by this embodiment, the qualitative analysis of lateral fissure and insula abnormality in fetal ultrasound section images is completed by the classifier, which outputs the analysis result, while the deep learning segmentation network (U-Net) completes the quantitative analysis and outputs accurate measurements of the lateral fissure and insula structures. Once training is finished, no manual operation is needed, and automatic qualitative analysis under a unified standard is achieved. This effectively addresses the problem that existing manual abnormality judgment methods consume so much time and economic cost that automatic abnormality detection is difficult to implement in actual clinical practice. By detecting abnormalities in fetal ultrasound section image data, the present application saves medical image analysis time and cost and improves diagnostic accuracy.
Finally, the fetal craniocerebral section image analysis method provided by this application can determine, from fetal ultrasound section images at different gestational ages, whether the fetal lateral fissure and insula are abnormal (the qualitative analysis result) and complete the quantitative measurement of each intracranial structure. Furthermore, all samples used in the deep learning stage are selected and accurately labeled by sonographers according to clinical experience, so the trained neural network model learns from the most experienced sonographers, which helps guarantee the objectivity and persuasiveness of both the qualitative and the quantitative analysis results.
It should be understood that although the steps in the flow charts of Figs. 1-4 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in Figs. 1-4 may comprise multiple sub-steps or stages, which need not be completed at the same time and may be performed at different times; nor must they be executed sequentially, as they may be performed in turn or alternately with other steps or with sub-steps of other steps.
In one embodiment, as shown in fig. 10, there is provided a craniocerebral section image analysis apparatus, comprising: an acquisition module 510, a preprocessing module 520, a feature extraction module 530, and a classification module 540, wherein:
an obtaining module 510, configured to obtain a craniocerebral section image dataset;
the preprocessing module 520 is used for preprocessing the craniocerebral section image data set to obtain each preprocessed section image;
a feature extraction module 530, configured to input each preprocessed section image into an autoencoder determined through training, to obtain the image features of each preprocessed section image;
the classification module 540 is configured to input the image features of the preprocessed tangent plane images into a classifier determined through training, so as to obtain a classification result of each preprocessed tangent plane image.
According to the above craniocerebral section image analysis apparatus, the acquired craniocerebral section image data set is preprocessed, the preprocessed images are fed into an autoencoder determined through training to obtain the image features of each preprocessed section, and those features are fed into a classifier determined through training to obtain the classification result of the craniocerebral section image data set. By classifying the data set with a trained classifier, the apparatus obtains a qualitative classification result, realizes unified automatic qualitative analysis, effectively reduces labor and time costs, and markedly improves the accuracy and repeatability of the analysis results.
In one embodiment, the above apparatus further comprises: a segmentation module, configured to segment each preprocessed section image with the segmentation network determined through training, to obtain segmentation results corresponding to the intracranial structures in each preprocessed section image; and a quantitative analysis module, configured to determine the quantitative analysis result of each intracranial structure in the craniocerebral section image data set based on the segmentation result corresponding to that structure.
In one embodiment, the quantitative analysis module comprises a polygon fitting unit, configured to fit an outer contour polygon to each intracranial structure according to its segmentation result, obtaining an outer contour polygon fitting result corresponding to each intracranial structure; in this embodiment, the quantitative analysis module is specifically configured to determine the quantitative analysis result of each intracranial structure based on the outer contour polygon fitting results.
In one embodiment, the quantitative analysis results include perimeter analysis results and area analysis results, and the quantitative analysis module comprises: a perimeter analysis unit, configured to read the number of contour points in the outer contour polygon fitting result of each intracranial structure, to obtain the corresponding perimeter analysis result; and an area analysis unit, configured to count the pixel points enclosed by the outer contour in the outer contour polygon fitting result of each intracranial structure, to obtain the corresponding area analysis result.
In one embodiment, the quantitative analysis results comprise angle analysis results, and the quantitative analysis module comprises an angle analysis unit, configured to acquire the coordinates of each vertex in the outer contour polygon fitting result and calculate the included angle of the vectors formed by adjacent vertices from those coordinates, to obtain the angle analysis result.
In one embodiment, the quantitative analysis results comprise distance analysis results, and the quantitative analysis module comprises a distance analysis unit, configured to acquire the coordinates of each vertex in the outer contour polygon fitting result and calculate the distance between adjacent vertices from those coordinates, to obtain the distance analysis result.
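As an illustration of the polygon fitting unit and the four analysis units above, the following sketch uses OpenCV contour tools; the approximation tolerance eps_ratio is an assumed parameter, and the function is a sketch under those assumptions rather than the apparatus's fixed implementation.

    import cv2
    import numpy as np

    def measure_structure(binary_mask, eps_ratio=0.01):
        """Derive perimeter, area, angle, and distance analysis results from one
        structure's segmentation mask via outer contour polygon fitting."""
        mask = (binary_mask > 0).astype(np.uint8)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if not contours:
            return None
        contour = max(contours, key=cv2.contourArea)    # largest outer contour

        # Perimeter analysis result: number of contour points (one per boundary pixel).
        perimeter = len(contour)
        # Area analysis result: number of pixel points enclosed by the outer contour.
        filled = np.zeros_like(mask)
        cv2.drawContours(filled, [contour], -1, 1, -1)  # fill the contour interior
        area = int(np.count_nonzero(filled))

        # Outer contour polygon fitting.
        eps = eps_ratio * cv2.arcLength(contour, True)
        poly = cv2.approxPolyDP(contour, eps, True).reshape(-1, 2).astype(float)

        # Angle of the vectors formed by adjacent vertices, and adjacent-vertex distances.
        angles, distances = [], []
        n = len(poly)
        for i in range(n):
            v1 = poly[i - 1] - poly[i]                  # vector to previous vertex
            v2 = poly[(i + 1) % n] - poly[i]            # vector to next vertex
            cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
            angles.append(float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))))
            distances.append(float(np.linalg.norm(v2)))
        return {"perimeter_px": perimeter, "area_px": area,
                "angles_deg": angles, "distances_px": distances}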
In one embodiment, the above apparatus further comprises an autoencoder training module, configured to acquire craniocerebral section sample images, which include negative sample images and positive sample images and carry qualitative classification result labels for the intracranial structures, and to train a preset autoencoder based on the craniocerebral section sample images to obtain the autoencoder.
Further, in one embodiment, the apparatus further comprises a classifier training module, configured to input the craniocerebral section sample images into the autoencoder to obtain the sample features of the craniocerebral section sample images, and to train a preset classifier based on those sample features to obtain the classifier.
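A compact sketch of this two-stage training follows. The convolutional architecture, learning rate, and the support-vector classifier are illustrative assumptions (the application specifies only "a preset autoencoder" and "a preset classifier"); train_loader is assumed to yield batches of (image, normal/abnormal label) tensors.

    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        """Small convolutional autoencoder; encode() yields the image features."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())

        def encode(self, x):
            return self.encoder(x).flatten(1)           # flattened feature vector

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Stage 1: reconstruction training on positive and negative section samples
    # (one epoch shown; labels are unused at this stage).
    ae, mse = AutoEncoder(), nn.MSELoss()
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for images, _ in train_loader:
        opt.zero_grad()
        loss = mse(ae(images), images)
        loss.backward()
        opt.step()

    # Stage 2: features from the trained encoder train the qualitative classifier.
    from sklearn.svm import SVC
    with torch.no_grad():
        feats = torch.cat([ae.encode(x) for x, _ in train_loader]).numpy()
    labels = torch.cat([y for _, y in train_loader]).numpy()
    clf = SVC().fit(feats, labels)                      # normal/abnormal classifier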
In one embodiment, the above apparatus further comprises a segmentation network training module, configured to acquire craniocerebral section sample images carrying quantitative analysis result labels for the intracranial structures, preprocess them to obtain corresponding preprocessed section sample images, and train a preset segmentation network based on the preprocessed section sample images to obtain the segmentation network.
For the specific definition of the craniocerebral section image analysis apparatus, reference may be made to the above definition of the craniocerebral section image analysis method, which is not repeated here. All or part of the modules in the apparatus may be implemented by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure may be as shown in Fig. 11. The computer device comprises a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor provides computing and control capabilities. The memory comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The communication interface is used for wired or wireless communication with an external terminal, the wireless communication being realized through Wi-Fi, an operator network, NFC (near-field communication), or other technologies. The computer program, when executed by the processor, implements a craniocerebral section image analysis method. The display screen may be a liquid crystal or electronic ink display, and the input device may be a touch layer covering the display screen, a key, trackball, or touchpad on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the architecture shown in Fig. 11 is merely a block diagram of part of the structure related to the present solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange components differently.
In one embodiment, there is provided a computer device comprising a memory and a processor, the memory having stored therein a computer program; the processor, when executing the computer program, implements the steps of the craniocerebral section image analysis method in any of the above embodiments.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the craniocerebral section image analysis method in any of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the method embodiments above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, and the like. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) and dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but any combination that contains no contradiction should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application; their description is relatively specific and detailed, but it should not be construed as limiting the scope of the invention. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of craniocerebral section image analysis, the method comprising:
acquiring a craniocerebral section image data set;
preprocessing the craniocerebral section image data set to obtain each preprocessed section image;
inputting each preprocessed section image into an autoencoder determined through training to obtain the image features of each preprocessed section image;
and inputting the image features of each preprocessed section image into a classifier determined through training to obtain a qualitative classification result of the craniocerebral section image data set.
2. The method of claim 1, further comprising, after preprocessing the craniocerebral slice image dataset to obtain each preprocessed slice image:
segmenting each preprocessed section image based on a segmentation network determined through training to obtain a segmentation result corresponding to each intracranial structure in each preprocessed section image;
and determining the quantitative analysis result of each intracranial structure in the craniocerebral section image data set based on the segmentation result corresponding to each intracranial structure.
3. The method of claim 2, wherein determining the quantitative analysis result of each intracranial structure in the craniocerebral section image data set based on the segmentation result corresponding to each intracranial structure comprises:
fitting an outer contour polygon to each intracranial structure according to its corresponding segmentation result, to obtain an outer contour polygon fitting result corresponding to each intracranial structure;
and determining a quantitative analysis result of each intracranial structure based on the outer contour polygon fitting result.
4. The method of claim 3, comprising at least one of the following items:
in a first item,
the quantitative analysis result comprises a perimeter analysis result and an area analysis result;
determining the perimeter analysis result and the area analysis result of each intracranial structure based on the outer contour polygon fitting result comprises:
reading the number of contour points in the outer contour polygon fitting result corresponding to each intracranial structure, to obtain the perimeter analysis result of the corresponding intracranial structure; and
counting the pixel points enclosed by the outer contour in the outer contour polygon fitting result corresponding to each intracranial structure, to obtain the area analysis result of the corresponding intracranial structure;
in a second item,
the quantitative analysis result comprises an angle analysis result;
determining the angle analysis result of each intracranial structure based on the outer contour polygon fitting result comprises:
acquiring the coordinates of each vertex in the outer contour polygon fitting result, and calculating the included angle of the vectors formed by adjacent vertices based on the vertex coordinates, to obtain the angle analysis result;
in a third item,
the quantitative analysis result comprises a distance analysis result;
determining the distance analysis result of each intracranial structure based on the outer contour polygon fitting result comprises:
acquiring the coordinates of each vertex in the outer contour polygon fitting result, and calculating the distance between adjacent vertices based on the vertex coordinates, to obtain the distance analysis result.
5. The method of claim 1, wherein the autoencoder is determined by a method comprising the steps of:
acquiring a craniocerebral section sample image, wherein the craniocerebral section sample image comprises a negative sample image and a positive sample image and carries qualitative classification result labels for the intracranial structures;
and training a preset autoencoder based on the craniocerebral section sample image to obtain the autoencoder.
6. The method of claim 5, wherein the classifier is determined by a method comprising the steps of:
inputting the craniocerebral section sample image into the autoencoder to obtain the sample features of the craniocerebral section sample image;
and training a preset classifier based on the sample features to obtain the classifier.
7. The method of claim 2, wherein determining the segmentation network comprises:
acquiring a craniocerebral section sample image, wherein the craniocerebral section sample image carries quantitative analysis result labels for the intracranial structures;
preprocessing the craniocerebral section sample image to obtain a corresponding preprocessed section sample image;
and training a preset segmentation network based on the preprocessed section sample image to obtain the segmentation network.
8. A craniocerebral section image analysis apparatus, the apparatus comprising:
the acquisition module is used for acquiring a craniocerebral section image dataset;
the preprocessing module is used for preprocessing the craniocerebral section image data set to obtain each preprocessed section image;
the feature extraction module is used for inputting each preprocessed section image into an autoencoder determined through training to obtain the image features of each preprocessed section image;
and the classification module is used for inputting the image features of each preprocessed section image into a classifier determined through training to obtain a qualitative classification result of the craniocerebral section image data set.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202010788465.3A 2020-08-07 2020-08-07 Abnormal judgment analysis method and device for fetal craniocerebral section image Active CN111899253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010788465.3A CN111899253B (en) 2020-08-07 2020-08-07 Abnormal judgment analysis method and device for fetal craniocerebral section image

Publications (2)

Publication Number Publication Date
CN111899253A true CN111899253A (en) 2020-11-06
CN111899253B CN111899253B (en) 2024-06-07

Family

ID=73247273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010788465.3A Active CN111899253B (en) 2020-08-07 2020-08-07 Abnormal judgment analysis method and device for fetal craniocerebral section image

Country Status (1)

Country Link
CN (1) CN111899253B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780448A (en) * 2016-12-05 2017-05-31 清华大学 A kind of pernicious sorting technique of ultrasonic Benign Thyroid Nodules based on transfer learning Yu Fusion Features
CN110448335A (en) * 2019-07-11 2019-11-15 暨南大学 A kind of fetus head circumference full-automatic measuring method and device based on ultrasound image
CN110570434A (en) * 2018-06-06 2019-12-13 杭州海康威视数字技术股份有限公司 image segmentation and annotation method and device
CN110613483A (en) * 2019-09-09 2019-12-27 李胜利 Method and system for detecting fetal craniocerebral abnormality based on machine learning
CN110634125A (en) * 2019-01-14 2019-12-31 广州爱孕记信息科技有限公司 Deep learning-based fetal ultrasound image identification method and system
CN111242929A (en) * 2020-01-13 2020-06-05 中国科学技术大学 Fetal skull shape parameter measuring method, system, equipment and medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633378A (en) * 2020-12-24 2021-04-09 电子科技大学 Intelligent detection method and system for multimodal image fetus corpus callosum
CN112633378B (en) * 2020-12-24 2022-06-28 电子科技大学 Intelligent detection method and system for multi-modal image fetal corpus callosum
CN114463288A (en) * 2022-01-18 2022-05-10 深圳市铱硙医疗科技有限公司 Brain medical image scoring method, device, computer equipment and storage medium
CN114463288B (en) * 2022-01-18 2023-01-10 深圳市铱硙医疗科技有限公司 Brain medical image scoring method and device, computer equipment and storage medium
CN114419031A (en) * 2022-03-14 2022-04-29 深圳科亚医疗科技有限公司 Automatic positioning method and device for midline of brain
CN114419031B (en) * 2022-03-14 2022-06-14 深圳科亚医疗科技有限公司 Automatic positioning method and device for midline of brain

Also Published As

Publication number Publication date
CN111899253B (en) 2024-06-07

Legal Events

PB01    Publication
SE01    Entry into force of request for substantive examination
TA01    Transfer of patent application right
        Effective date of registration: 20211111
        Address after: 510515 No. 1023-1063, shatai South Road, Guangzhou, Guangdong
        Applicant after: SOUTHERN MEDICAL University; HUNAN University
        Address before: 410000 room 515a266, block BCD, Lugu business center, No. 199 Lulong Road, Changsha high tech Development Zone, Changsha, Hunan (cluster registration)
        Applicant before: Changsha Datang Information Technology Co.,Ltd.
TA01    Transfer of patent application right
        Effective date of registration: 20230510
        Address after: 518000, 6th Floor, Building A3, Nanshan Zhiyuan, No. 1001 Xueyuan Avenue, Changyuan Community, Taoyuan Street, Nanshan District, Shenzhen, Guangdong Province
        Applicant after: Shenzhen Lanxiang Zhiying Technology Co.,Ltd.
        Address before: No.1023-1063, shatai South Road, Guangzhou, Guangdong 510515
        Applicant before: SOUTHERN MEDICAL University; HUNAN University
GR01    Patent grant