CN109509177B - Method and device for recognizing brain image - Google Patents

Method and device for recognizing brain image

Info

Publication number
CN109509177B
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811227495.6A
Other languages
Chinese (zh)
Other versions
CN109509177A (en
Inventor
鲁文斌
魏子昆
华铱炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yitu Medical Technology Co ltd
Original Assignee
Hangzhou Yitu Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Yitu Medical Technology Co ltd filed Critical Hangzhou Yitu Medical Technology Co ltd
Priority to CN201811227495.6A
Publication of CN109509177A
Application granted
Publication of CN109509177B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      • G06T 7/00: Image analysis
        • G06T 7/0002: Inspection of images, e.g. flaw detection
          • G06T 7/0012: Biomedical image inspection
        • G06T 7/10: Segmentation; Edge detection
          • G06T 7/11: Region-based segmentation
        • G06T 7/70: Determining position or orientation of objects or cameras
          • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
      • G06T 2207/00: Indexing scheme for image analysis or image enhancement
        • G06T 2207/10: Image acquisition modality
          • G06T 2207/10072: Tomographic images
            • G06T 2207/10081: Computed x-ray tomography [CT]
        • G06T 2207/20: Special algorithmic details
          • G06T 2207/20081: Training; Learning
          • G06T 2207/20084: Artificial neural networks [ANN]
        • G06T 2207/30: Subject of image; Context of image processing
          • G06T 2207/30004: Biomedical image processing
            • G06T 2207/30016: Brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Embodiments of the application provide a method and an apparatus for brain image recognition, the method including: acquiring a brain image of a user to be detected; determining a key frame of the brain image from the brain image; determining the position of each key positioning point in the key frame using a positioning model; and determining the partitions of the key frame according to the positions of the key positioning points.

Description

Method and device for recognizing brain image
Technical Field
The embodiment of the invention relates to the technical field of machine learning, in particular to a method and a device for recognizing brain images.
Background
"cerebral apoplexy" (or "stroke") is an acute cerebrovascular disease, which is a group of diseases caused by brain tissue damage due to sudden rupture of cerebral vessels or blood flow failure to the brain due to vessel occlusion, including ischemic and hemorrhagic stroke. Statistical data show that the incidence rate of stroke in China is rising at a rate of 8.7% per year, and cerebrovascular diseases become the first cause of death of Chinese residents.
For stroke patients, the "time window" from onset to treatment is critical to reducing mortality and disability. This window is very short: treatment usually must begin within 3 to 4.5 hours of onset, so hospitals need to shorten the intermediate steps as much as possible to save time for the patient, and CT imaging has become the most efficient and economical examination. However, diagnosing stroke by manually reading and interpreting images takes too long and can delay treatment, and because diagnosis requires experienced physicians, inter-reader differences easily lead to inaccurate judgments.
In the prior art, a patient's condition is judged mainly by manually reviewing brain images; the accuracy is affected by subjective human factors, and the efficiency is low.
Disclosure of Invention
Embodiments of the invention provide a method and an apparatus for brain image recognition, to solve the prior-art problems that judging a condition by manually reviewing brain images has accuracy affected by subjective human factors and low efficiency.
In a first aspect, an embodiment of the present invention provides a method for brain image identification, including:
acquiring a brain image of a user to be detected;
determining a key frame of the brain image according to the brain image;
determining the position of each key positioning point in the key frame of the brain image by adopting a positioning model;
and determining the partition of the key frame of the brain image according to the position of the key positioning point.
In one possible implementation, the brain imagery includes N frames of brain images; n is a positive integer; the determining a key frame of the brain image according to the brain image includes:
inputting each frame of brain image, together with the brain images adjacent to it in position, into a preset classifier, and determining the confidence that the frame is the key frame of the brain image;
and determining the key frame of the brain image according to the confidence of each frame of brain image.
In one possible implementation, the positioning model includes a coarse positioning module and a fine positioning module; the determining the position of the key positioning point in the key frame of the brain image by using the positioning model includes:
inputting the key frame of the brain image into the coarse positioning module, and determining the coordinate of a first key point in the key frame of the brain image;
determining one or more coarse segmentation regions in a key frame of the brain image according to the coordinates of the first key point;
for each coarse segmentation region, inputting the coarse segmentation region into a corresponding fine positioning module, and determining the coordinate of a second key point in the coarse segmentation region;
and taking the coordinates of the second key point as the position of a key positioning point in the key frame of the brain image.
In one possible implementation, acquiring the brain image of the user to be detected includes:
acquiring an original brain image file of the user to be detected;
and outputting each frame of brain image in the original brain image file into a picture format with a preset size and a set window width and window level.
In a second aspect, an embodiment of the present invention provides an apparatus for brain image recognition, including:
the acquisition module is used for acquiring a brain image of a user to be detected;
the processing module is used for determining a key frame of the brain image according to the brain image; determining the position of each key positioning point in the key frame of the brain image by adopting a positioning model; and determining the partition of the key frame of the brain image according to the position of the key positioning point.
In one possible implementation, the brain imagery includes N frames of brain images; n is a positive integer; the processing module is specifically configured to:
inputting each frame of brain image, together with the brain images adjacent to it in position, into a preset classifier, and determining the confidence that the frame is the key frame of the brain image; and determining the key frame of the brain image according to the confidence of each frame of brain image.
In one possible implementation, the positioning model includes a coarse positioning module and a fine positioning module; the processing module is specifically configured to:
inputting the key frame of the brain image into the coarse positioning module, and determining the coordinate of a first key point in the key frame of the brain image; determining one or more coarse segmentation regions in a key frame of the brain image according to the coordinates of the first key point; for each coarse segmentation area, inputting the coarse segmentation area into a corresponding fine positioning module, and determining the coordinate of a second key point in the coarse segmentation area; and taking the coordinates of the second key point as the position of a key positioning point in the key frame of the brain image.
In one possible implementation manner, the obtaining module is configured to:
and acquiring an original brain image file of the user to be detected, and outputting each frame of brain image in the original brain image file into a picture format with a preset size and a set window width and window level.
In a third aspect, an embodiment of the present application provides an apparatus for brain image recognition, including at least one processor and at least one memory, where the memory stores a computer program that, when executed by the processor, causes the processor to perform the steps of the method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium storing a computer program executable by a brain image recognition apparatus; when the program runs on the apparatus, it causes the apparatus to perform the steps of the method of the first aspect.
In the embodiments of the invention, the condition of the user to be detected is determined from the partitions of the brain image, without subjectively judging the condition from the brain image; this improves both the accuracy and the efficiency of brain image recognition.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a method for brain image recognition according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a brain image according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a method for brain image recognition according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an apparatus for brain image recognition according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an apparatus for brain image recognition according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
For convenience of understanding, terms referred to in the embodiments of the present invention are explained below.
Ultra-early thrombolysis is recognized as the most effective treatment for cerebral thrombosis, but there is currently no accepted standard for which patients are most suitable for thrombolytic therapy. The Alberta Stroke Program proposed a CT-based semi-quantitative score for early ischemic lesions, the ASPECTS (Alberta Stroke Program Early CT Score), to select patients for appropriate thrombolytic therapy. ASPECTS is a scale for evaluating ischemic lesions in the middle cerebral artery (MCA) supply area. It covers 10 regions: four parts at the basal ganglia level, namely the lentiform nucleus, caudate nucleus, internal capsule and insula, and six cortical areas, M1-M3 and M4-M6. The full score is 10 points, and one point is deducted for each region found to have a lesion. An ASPECTS of 10 points suggests no early ischemic signs in the MCA supply area; patients scoring 7-9 points are very likely to have vascular occlusion but have a low risk of hemorrhage after thrombolysis, and are therefore the best candidates for thrombolytic therapy; when the ASPECTS is below 7 points, the ischemic territory is large and thrombolysis carries a high risk of intracranial hemorrhage, so thrombolysis is not suitable.
In summary, accurately identifying the ASPECTS partitions of the brain can effectively reduce the time needed to diagnose whether a stroke patient is suitable for thrombolytic therapy, improving the patient's outcome.
Based on the above problem, fig. 1 schematically illustrates a flow chart of a method for brain image identification according to an embodiment of the present application, where the flow chart may be executed by an apparatus for brain image identification, and specifically includes the following steps:
step S101, a brain image of a user to be detected is obtained.
The brain image refers to a specific image taken by X-ray, for example, a CT image. The image may be a three-dimensional image or a two-dimensional image. Illustratively, the brain image may be as shown in fig. 2.
After obtaining the brain image of the user to be detected, preprocessing the brain image, wherein the preprocessing process mainly comprises image normalization.
The image normalization includes the following. The brain image may be an original brain image file in DICOM format; each frame of brain image in the DICOM file is output as a picture of a preset size with a set window width and window level.
Specifically, a set window width and level may be chosen, for example window width W = 80 and window level L = 40, and each frame is converted to PNG format with this window. Each frame is then interpolated and scaled to a set size, for example 512 x 512 pixels. In one possible implementation, the aspect ratio of the brain image may also be adjusted to 1:1 by adding black borders on the top or on both sides of the image.
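As a concrete illustration of the windowing step above, the sketch below maps CT values to an 8-bit picture using the window width W = 80 and level L = 40 mentioned in the text, and pads the frame with black borders to a 1:1 aspect ratio. The NumPy-based implementation and the function names are illustrative, not from the patent.

```python
import numpy as np

def apply_window(hu, width=80.0, level=40.0):
    """Clip CT values to the window [level - width/2, level + width/2]
    and rescale linearly to the 8-bit range 0-255."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(np.asarray(hu, dtype=np.float64), lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

def pad_to_square(img):
    """Pad with black borders so the aspect ratio becomes 1:1."""
    h, w = img.shape
    size = max(h, w)
    out = np.zeros((size, size), dtype=img.dtype)
    top, left = (size - h) // 2, (size - w) // 2
    out[top:top + h, left:left + w] = img
    return out
```

Encoding the padded array as an actual PNG file, and reading the raw frames from DICOM, would additionally need an image/DICOM library.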
Step S102, determining a key frame of the brain image according to the brain image.
To reduce the computation of the model, embodiments of the invention may use a 2-dimensional convolutional neural network. In one possible implementation, as shown in fig. 3, the brain image includes N frames of brain images, N being a positive integer, and the method may include the following steps:
step S301, inputting the brain images adjacent to the brain image position into a preset classifier simultaneously, and determining the brain image as the confidence of the key frame of the brain image;
for example, from frame 3 to the third last frame, each frame is spliced with the two frames to form a brain image including 5 brain images. Of course, the frames in the brain image may be selected according to the need, and are not limited herein.
Step S302, determining the key frame of the brain image according to each frame's confidence of being the key frame.
Specifically, each frame, together with its adjacent frames, may be input into the classifier for classification prediction, where each class indicates whether the frame is a corresponding predefined ASPECTS key frame.
The frame with the highest confidence is then taken as the ASPECTS key frame.
By selecting the key frame and dividing the ASPECTS partitions only in that frame, partitioning efficiency is improved. In addition, because the input includes the adjacent frames, positional information in the brain image is used effectively, improving the accuracy of key-frame judgment while avoiding the computation a 3-dimensional convolutional neural network would need on the 3-dimensional image, which improves recognition efficiency.
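The frame stacking and key-frame selection described above can be sketched as follows. The classifier itself is omitted and replaced by precomputed per-frame confidences; all names are hypothetical.

```python
import numpy as np

def make_stacks(volume):
    """For each frame from the 3rd to the 3rd-from-last, stack it with its
    two preceding and two following frames as a 5-channel classifier input."""
    n = volume.shape[0]
    return [(i, volume[i - 2:i + 3]) for i in range(2, n - 2)]

def pick_key_frame(confidences):
    """Choose the frame index whose key-frame confidence is highest."""
    return max(confidences, key=confidences.get)
```

Each `(index, stack)` pair would be fed to the preset classifier to obtain that frame's confidence, and `pick_key_frame` then selects the ASPECTS key frame.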
Illustratively, the brain image shown in fig. 2 may include the four parts at the basal ganglia level, namely the lentiform nucleus, caudate nucleus, internal capsule and insula, and the six cortical areas M1-M3 and M4-M6.
It should be noted that the regions marked in the brain image shown in fig. 2 are only examples; the brain partitions are not limited to these.
And step S103, determining the position of each key positioning point in the key frame of the brain image by adopting a positioning model.
The positioning model may be a trained convolutional neural network CNN.
And step S104, determining the partition of the key frame of the brain image according to the position of the key positioning point.
Because lesions at different positions in the brain have specific clinical significance, the embodiments of the application use the positioning model to determine the partition positions automatically, combining the features of each partition in the key frame with the positions of the key positioning points, instead of having a person subjectively integrate all the brain images to judge the partitions. This improves both the accuracy and the efficiency of brain image recognition.
Further, to improve positioning accuracy, in one possible implementation the brain image may be divided into at least one pair of a first region and a second region, where the first region and the second region are symmetric; for example, the left brain of the brain image is taken as the first region and the right brain as the second region. Step S103 may then include:
rotating a first image of a second region of the brain image to a position overlapping a second image of a first region of the brain image as a reference image of the second image of the first region;
inputting the second image and the reference image of the second image into the positioning model, and determining the position of each key positioning point in the second image;
rotating a second image of a first region of the brain image to a position overlapping the first image of the second region of the brain image as a reference image of the first image of the second region;
and inputting the first image and the reference image of the first image into the positioning model, and determining the position of each key positioning point in the first image.
Through the input of the reference image, the symmetrical information in the brain image is utilized, the influence of interference factors such as focus, noise point and the like on the brain region positioning is eliminated, and the positioning accuracy is improved.
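A minimal sketch of building the two-channel input with a mirrored reference image follows, assuming the two regions are the left/right halves of an axial slice so that a horizontal flip brings one into overlap with the other (the patent speaks of rotating the region into an overlapping position; a flip is used here as a stand-in, and all names are illustrative).

```python
import numpy as np

def build_input_with_reference(region, opposite):
    """Stack a region image with the mirrored opposite-side region as a
    2-channel input, so the positioning model can exploit the brain's
    left-right symmetry."""
    reference = np.fliplr(opposite)   # mirror so it overlaps the region
    return np.stack([region, reference], axis=0)
```

Calling it once per side yields the two inputs described in the four steps above, with the roles of region and reference swapped.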
Optionally, in step S101, because the brain in the acquired image is sometimes not at the reference position, which would affect the subsequent localization of key positioning points, the method adjusts the brain in the image to the reference position after the brain image of the user to be detected is acquired. This specifically includes the following steps:
step one, obtaining the coordinates of a preset reference point.
The coordinates of the preset reference points may be coordinates of a preset partial skull to represent a reference position of the partial skull.
And step two, determining the coordinates of the key points in the brain images by adopting an adjusting model, wherein the adjusting model is determined after training the convolutional neural network by taking a plurality of brain images marked with the coordinates of the key points in advance as training samples.
The coordinates of the key point and the coordinates of the preset reference point are coordinates in the same coordinate system.
Illustratively, a plurality of brain images are acquired, then key points of a skull in the brain images are marked manually, then the brain images marked with the key points are input into a convolutional neural network for training, and when an objective function of the convolutional neural network meets a preset condition, an adjustment model is determined. When the brain image of the user to be detected is obtained, the brain image is input into the adjustment model, and the key point in the brain image is determined.
And step three, determining the corresponding relation between the current position of the key point of the skull of the brain image and the reference position according to the coordinates of the preset reference point and the coordinates of the key point.
The corresponding relationship comprises a translation relationship and a rotation relationship.
And step four, adjusting the brain in the brain image to the reference position according to the corresponding relation.
The brain in the brain image is then adjusted to the reference position according to the correspondence. Since the positioning model is trained on brain images at the reference position, adjusting the brain of the user to be detected to the reference position before locating the partitions improves the positioning accuracy.
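The translation-and-rotation correspondence between the detected key points and the preset reference points (steps two through four above) can be estimated with a least-squares rigid fit; the Kabsch/SVD algorithm sketched here is a standard technique offered as one plausible realization, not the patent's stated method.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t (least squares, Kabsch/SVD)
    mapping detected key points `src` onto reference points `dst`.
    Points are rows of (N, 2) arrays in the same coordinate system."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    h = (src - sc).T @ (dst - dc)          # cross-covariance of centered points
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:               # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    t = dc - r @ sc
    return r, t
```

Applying `r` and `t` to every pixel coordinate (e.g. via an image-warping routine) then moves the brain to the reference position.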
In the second step, the positioning model comprises a coarse positioning module and a fine positioning module, both of which are convolutional neural networks, wherein the coarse positioning module is determined after training the convolutional neural networks by using a plurality of brain images with key points marked in advance as training samples. The fine positioning module takes brain images of a plurality of coarse segmentation areas with key points marked in advance as training samples, and determines the brain images after training the convolutional neural network, and the number of the fine positioning modules is determined according to the number of the coarse segmentation areas output by the coarse positioning module.
The training process of the coarse positioning module is as follows. Multiple brain images are acquired as training samples. Each brain image is preprocessed, and the brain in the image is adjusted to the reference position; the preprocessing and position adjustment are as described above and are not repeated here. An annotator marks the key points in each brain image; the key points are the key positioning points of the brain, and each corresponds to one sequence number. Data augmentation is then performed on the training samples to enlarge the data volume to 10 times the original, including: 1. random rotation by a certain angle; 2. random translation of 0-30 pixels up, down, left and right; 3. random scaling by 0.85-1.15 times; 4. slight jittering of image contrast and brightness.
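A sketch of sampling one set of the augmentation parameters listed above. The rotation and jitter ranges are assumptions (the patent leaves the coarse-module angle and the jitter magnitude unspecified), and applying the rotation/scaling to pixels would additionally need an image library.

```python
import numpy as np

def sample_augmentation(rng):
    """Draw one set of augmentation parameters in the ranges given above."""
    return {
        "angle_deg": float(rng.uniform(-20.0, 20.0)),  # rotation (range assumed)
        "shift_px": rng.integers(-30, 31, size=2),     # up/down and left/right shift
        "scale": float(rng.uniform(0.85, 1.15)),
        "jitter": float(rng.uniform(-0.05, 0.05)),     # contrast/brightness (assumed)
    }
```

Drawing 10 parameter sets per sample realizes the 10x data enlargement described in the text.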
The training samples are then input into a convolutional neural network for training. During training, a loss function is computed from the coordinates of the labeled key points and the coordinates predicted by the network, and training proceeds by back-propagation; Stochastic Gradient Descent (SGD) with momentum and step decay may be used as the training optimization algorithm.
Optionally, the structure of the convolutional neural network includes M consecutive convolutional feature extraction blocks and a full connection layer. The convolution characteristic extraction block comprises L convolution modules, wherein L is larger than 0, and any convolution module comprises a convolution layer, a BN layer and a ReLU layer. M and L are positive integers, and specific values can be determined according to needs, which are not limited herein.
The training process of the fine positioning module is as follows. Multiple brain images are acquired as training samples; these may be only the key frames or all frames of the brain imagery, without limitation here. For each brain image, the brain is adjusted to the reference position and the image is preprocessed. Several coarse segmentation regions are segmented from the brain image according to a preset rule, for example a left-brain region and a right-brain region. One fine positioning module is trained for each coarse segmentation region. For any fine positioning module, an annotator marks the key points in the brain image of each coarse segmentation region (or the labeled key points in the training samples of the coarse positioning module may be reused); the key points are points within the coarse segmentation region, each corresponding to one sequence number. The brain image of the coarse segmentation region is then adjusted to an upright image. Data augmentation is then performed on the training samples to enlarge the data volume to 10 times the original, including: 1. random translation of 0-20 pixels up, down, left and right; 2. random rotation by -20 to 20 degrees; 3. random scaling by 0.8-1.2 times.
And then inputting the training sample into a convolutional neural network for training. During training, a loss function is calculated according to the coordinates of the labeled key points and the coordinates of the key points predicted by the network, training is performed by a back propagation method, and a training optimization algorithm can use a Stochastic Gradient Descent (SGD) algorithm with momentum and step attenuation. The convolutional neural network comprises M continuous convolutional feature extraction blocks and a full connection layer.
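The optimizer named in both training procedures, SGD with momentum and step decay, can be written out as follows; the hyperparameter values (decay factor 0.1, period 30 epochs, momentum 0.9) are illustrative and not from the patent.

```python
def step_decay_lr(base_lr, epoch, drop=0.1, every=30):
    """Step decay: multiply the learning rate by `drop` every `every` epochs."""
    return base_lr * (drop ** (epoch // every))

def sgd_momentum_step(w, v, grad, lr, momentum=0.9):
    """One SGD-with-momentum update for a single scalar parameter:
    the velocity accumulates past gradients, then moves the weight."""
    v = momentum * v - lr * grad
    return w + v, v
```

In practice a framework optimizer would apply the same update to every network weight, with the schedule supplying `lr` each epoch.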
After the coarse positioning module and the fine positioning module in the positioning model are determined by adopting the method, the positions of all the partitions in the brain image are determined by adopting the positioning model, and the method specifically comprises the following steps:
Step one, inputting the key frame of the brain image into the coarse positioning module, and determining the coordinates of the first key points in the key frame of the brain image.
optionally, the brain image is a brain slice, the first key points may be key points of various partitions in the brain, and each first key point corresponds to one sequence number.
Step two, determining one or more coarse segmentation regions in the key frame of the brain image according to the coordinates of the first key points.
the roughly divided region may include: the four parts of the bean-shaped nucleus, the tail-shaped nucleus, the inner capsule and the island leaf at the basal ganglia level and six cortical areas of M1-M3 and M4-M6, which are 10 areas in total; of course, the specific position of the roughly divided region and the region included in the roughly divided region may be preset, and are not limited herein. The accuracy of the coordinate positioning of the first key point can be further improved by setting the rough segmentation area.
Step three, for each coarse segmentation region, inputting it into the corresponding fine positioning module and determining the coordinates of the second key points in the region.
and step four, taking the coordinates of the second key point as the position of a key positioning point in the key frame of the brain image.
When the positioning model locates the brain regions, the coarse positioning module first segments the coarse segmentation regions from the brain image, narrowing the localization range; the positions of the brain regions are then located within the coarse segmentation regions, which improves positioning precision.
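The coarse-to-fine flow of steps one through four can be sketched as below. The model call signatures and the `crop` helper are assumptions for illustration; the models themselves are the trained networks described in the text.

```python
def locate_key_points(key_frame, coarse_model, fine_models, crop):
    """Coarse-to-fine localization: the coarse model predicts first key
    points, each coarse region is cropped out, its own fine model refines
    the coordinates, and these are mapped back to full-frame coordinates."""
    results = {}
    for name, pts in coarse_model(key_frame).items():
        region, origin = crop(key_frame, pts)   # region image + its top-left corner
        local = fine_models[name](region)       # coordinates local to the region
        results[name] = [(y + origin[0], x + origin[1]) for y, x in local]
    return results
```

Adding the region's origin back to each local coordinate is what turns the fine module's output into the key positioning points of step four.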
Optionally, in the third step, before inputting the roughly segmented region into the corresponding fine positioning module, the brain image of the roughly segmented region is adjusted to meet the requirement of the fine positioning module.
In the fourth step, after the coordinates of the second keypoints corresponding to each partition in the roughly divided area are determined, the second keypoints corresponding to each partition are determined according to the serial numbers of the second keypoints, and then the partition edges are determined according to the coordinates of the second keypoints.
Specifically, the partition edges may be the ASPECTS boundary lines, drawn by connecting the located key positioning points.
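Grouping the second key points by sequence number and connecting them into partition boundaries, as described above, can be sketched as follows; the mapping from partition names to ordered sequence numbers is hypothetical.

```python
def partition_edges(points_by_seq, outlines):
    """Build each partition's boundary as the polyline obtained by
    connecting its key points in sequence-number order.
    `points_by_seq` maps sequence number -> (y, x) coordinate;
    `outlines` maps partition name -> ordered sequence numbers."""
    return {name: [points_by_seq[i] for i in seq]
            for name, seq in outlines.items()}
```

The resulting polylines can then be rasterized onto the key frame to display the ASPECTS partitions.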
Based on the same technical concept, the embodiment of the present application provides an apparatus for brain image recognition, as shown in fig. 4, the apparatus 400 includes:
an obtaining module 401, configured to obtain a brain image of a user to be detected;
a processing module 402, configured to determine a key frame of the brain image according to the brain image; determining the position of each key positioning point in the key frame of the brain image by adopting a positioning model; and determining the partition of the key frame of the brain image according to the position of the key positioning point.
In one possible implementation, the brain imagery includes N frames of brain images; n is a positive integer; the processing module 402 is specifically configured to:
inputting the brain image and the brain images adjacent to it simultaneously into a preset classifier, and determining the confidence that the brain image is the key frame of the brain imagery; and determining the key frame of the brain imagery according to the confidence of each frame of brain image.
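A minimal sketch of this key-frame selection, assuming a hypothetical `classifier` callable that scores a stack of a frame plus its neighbours (the interface of the real preset classifier is not specified in the patent):

```python
import numpy as np

def select_keyframe(frames, classifier, context=1):
    """Score each frame together with `context` neighbours on each side,
    and return the index with the highest key-frame confidence."""
    n = len(frames)
    scores = []
    for i in range(n):
        # Gather the frame plus its neighbours, clamping at the
        # sequence boundaries so every stack has the same depth.
        idx = [min(max(j, 0), n - 1) for j in range(i - context, i + context + 1)]
        stack = np.stack([frames[j] for j in idx])   # (2*context+1, H, W)
        scores.append(classifier(stack))
    return int(np.argmax(scores)), scores
```

Feeding adjacent frames alongside the candidate gives the classifier inter-slice context, which is the point of inputting them simultaneously.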
In one possible implementation, the positioning model includes a coarse positioning module and a fine positioning module; the processing module 402 is specifically configured to:
inputting the key frame of the brain image into the coarse positioning module, and determining the coordinate of a first key point in the key frame of the brain image; determining one or more coarse segmentation regions in a key frame of the brain image according to the coordinates of the first key point; for each coarse segmentation area, inputting the coarse segmentation area into a corresponding fine positioning module, and determining the coordinate of a second key point in the coarse segmentation area; and taking the coordinates of the second key point as the position of a key positioning point in the key frame of the brain image.
In a possible implementation manner, the obtaining module 401 is configured to:
and acquiring an original brain image file of the user to be detected, and outputting each frame of brain image in the original brain image file as a picture of a preset size rendered with a set window width and window level.
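Rendering a raw CT frame with a set window width and window level and a preset output size might look like the following sketch; the width/level values and the nearest-neighbour resize are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def apply_window(hu, width=80, level=40, out_size=(256, 256)):
    """Map raw CT values to an 8-bit picture using a window width/level,
    then resize to a preset size by nearest-neighbour index sampling."""
    lo, hi = level - width / 2, level + width / 2
    img = np.clip(hu, lo, hi)
    img = ((img - lo) / (hi - lo) * 255.0).astype(np.uint8)
    # Nearest-neighbour resize via index sampling (no extra dependencies).
    rows = np.linspace(0, hu.shape[0] - 1, out_size[0]).astype(int)
    cols = np.linspace(0, hu.shape[1] - 1, out_size[1]).astype(int)
    return img[np.ix_(rows, cols)]
```

Values outside the window are clipped, so a narrow brain window stretches the soft-tissue contrast the later modules rely on.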
Based on the same technical concept, the present application provides an apparatus for brain image recognition, as shown in fig. 5, including at least one processor 501 and a memory 502 connected to the at least one processor, where a specific connection medium between the processor 501 and the memory 502 is not limited in this application, and the processor 501 and the memory 502 are connected through a bus in fig. 5 as an example. The bus may be divided into an address bus, a data bus, a control bus, etc.
In the embodiment of the present application, the memory 502 stores instructions executable by the at least one processor 501, and the at least one processor 501 may execute the steps included in the method for brain image recognition described above by executing the instructions stored in the memory 502.
The processor 501 is the control center of the brain image recognition device; it may connect the various parts of the device through various interfaces and lines, and performs brain image recognition by running or executing the instructions stored in the memory 502 and calling the data stored in the memory 502. Optionally, the processor 501 may include one or more processing units, and may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 501. In some embodiments, the processor 501 and the memory 502 may be implemented on the same chip, or in some embodiments they may be implemented separately on their own chips.
The processor 501 may be a general-purpose processor, such as a Central Processing Unit (CPU), a digital signal processor, an Application Specific Integrated Circuit (ASIC), a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present Application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
The memory 502, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 502 may include at least one type of storage medium, for example a flash memory, a hard disk, a multimedia card, a card-type memory, a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The memory 502 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 502 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
Based on the same inventive concept, the present application also provides a computer readable medium storing a computer program executable by a brain image recognition apparatus, which when running on the brain image recognition apparatus, causes the brain image recognition apparatus to perform the steps of the method for brain image recognition.
It should be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for brain image recognition, comprising:
acquiring a brain image of a user to be detected;
determining a key frame of the brain image according to the brain image;
determining the position of each key positioning point in the key frame of the brain image by adopting a positioning model; the method specifically comprises the following steps: the brain image comprises N frames of brain images, N is a positive integer, the brain images comprise at least one pair of a first region and a second region, and the first region and the second region are symmetrical regions; rotating a first image of a second region of the brain image to a position overlapping a second image of a first region of the brain image as a reference image of the second image of the first region; inputting the second image and the reference image of the second image into the positioning model, and determining the position of each key positioning point in the second image;
and determining the partition of the key frame of the brain image according to the position of the key positioning point.
2. The method of claim 1, wherein determining the keyframe of the brain image from the brain image comprises:
inputting the brain image and the brain images adjacent to it simultaneously into a preset classifier, and determining the confidence that the brain image is the key frame of the brain imagery;
and determining the key frame of the brain image according to the confidence of each frame of brain image.
3. The method of claim 1, wherein the positioning model comprises a coarse positioning module and a fine positioning module; the determining the position of the key positioning point in the key frame of the brain image by using the positioning model includes:
inputting the key frame of the brain image into the coarse positioning module, and determining the coordinate of a first key point in the key frame of the brain image;
determining one or more coarse segmentation regions in a key frame of the brain image according to the coordinates of the first key point;
for each coarse segmentation region, inputting the coarse segmentation region into a corresponding fine positioning module, and determining the coordinate of a second key point in the coarse segmentation region;
and taking the coordinates of the second key point as the position of a key positioning point in the key frame of the brain image.
4. The method of claim 1, wherein the acquiring the brain image of the user to be detected comprises:
acquiring an original brain image file of the user to be detected;
and outputting each frame of brain image in the original brain image file into a picture format with a preset size and a set window width and window level.
5. An apparatus for brain image recognition, comprising:
the acquisition module is used for acquiring a brain image of a user to be detected;
the processing module is used for determining a key frame of the brain image according to the brain image; determining the position of each key positioning point in the key frame of the brain image by adopting a positioning model; the method specifically comprises the following steps: the brain image comprises N frames of brain images, N is a positive integer, the brain images comprise at least one pair of a first region and a second region, and the first region and the second region are symmetrical regions; rotating a first image of a second region of the brain image to a position overlapping a second image of a first region of the brain image as a reference image of the second image of the first region; inputting the second image and the reference image of the second image into the positioning model, and determining the position of each key positioning point in the second image; and determining the partition of the key frame of the brain image according to the position of the key positioning point.
6. The apparatus of claim 5, wherein the processing module is specifically configured to:
inputting the brain image and the brain images adjacent to it simultaneously into a preset classifier, and determining the confidence that the brain image is the key frame of the brain imagery; and determining the key frame of the brain imagery according to the confidence of each frame of brain image.
7. The apparatus of claim 5, wherein the positioning model comprises a coarse positioning module and a fine positioning module; the processing module is specifically configured to:
inputting the key frame of the brain image into the coarse positioning module, and determining the coordinate of a first key point in the key frame of the brain image; determining one or more coarse segmentation regions in a key frame of the brain image according to the coordinates of the first key point; for each coarse segmentation area, inputting the coarse segmentation area into a corresponding fine positioning module, and determining the coordinate of a second key point in the coarse segmentation area; and taking the coordinates of the second key point as the position of a key positioning point in the key frame of the brain image.
8. The apparatus of claim 5, wherein the acquisition module is to:
and acquiring an original brain image file of the user to be detected, and outputting each frame of brain image in the original brain image file into a picture format with a preset size and a set window width and window level.
9. An apparatus for brain image recognition, comprising at least one processor and at least one memory, wherein the memory stores a computer program which, when executed by the processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 4.
10. A computer-readable medium storing a computer program executable by a brain image recognition apparatus, the program, when executed on the brain image recognition apparatus, causing the brain image recognition apparatus to perform the steps of any one of the methods of claims 1-4.
CN201811227495.6A 2018-10-22 2018-10-22 Method and device for recognizing brain image Active CN109509177B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811227495.6A CN109509177B (en) 2018-10-22 2018-10-22 Method and device for recognizing brain image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811227495.6A CN109509177B (en) 2018-10-22 2018-10-22 Method and device for recognizing brain image

Publications (2)

Publication Number Publication Date
CN109509177A CN109509177A (en) 2019-03-22
CN109509177B true CN109509177B (en) 2021-02-23

Family

ID=65746778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811227495.6A Active CN109509177B (en) 2018-10-22 2018-10-22 Method and device for recognizing brain image

Country Status (1)

Country Link
CN (1) CN109509177B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533639B (en) * 2019-08-02 2022-04-15 杭州依图医疗技术有限公司 Key point positioning method and device
CN110517766B (en) * 2019-08-09 2020-10-16 上海依智医疗技术有限公司 Method and device for identifying brain atrophy
CN111524109B (en) * 2020-04-16 2023-07-25 推想医疗科技股份有限公司 Scoring method and device for head medical image, electronic equipment and storage medium
CN111951265B (en) * 2020-08-31 2023-12-01 杭州脉流科技有限公司 Brain stem scoring method and device based on brain CT image, computer equipment and storage medium
CN112075927B (en) * 2020-10-15 2024-05-14 首都医科大学附属北京天坛医院 Etiology classification method and device for cerebral apoplexy
CN113269764A (en) * 2021-06-04 2021-08-17 重庆大学 Automatic segmentation method and system for intracranial aneurysm, sample processing method and model training method

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101398894A (en) * 2008-06-17 2009-04-01 浙江师范大学 Automobile license plate automatic recognition method and implementing device thereof
CN104834900A (en) * 2015-04-15 2015-08-12 常州飞寻视讯信息科技有限公司 Method and system for vivo detection in combination with acoustic image signal
CN108573226A (en) * 2018-04-08 2018-09-25 浙江大学 The drosophila larvae body segment key independent positioning method returned based on cascade posture

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN105046227B (en) * 2015-07-24 2018-07-31 上海依图网络科技有限公司 A kind of key frame acquisition methods for portrait video system
CN106599830B (en) * 2016-12-09 2020-03-17 中国科学院自动化研究所 Face key point positioning method and device
CN107909588A (en) * 2017-07-26 2018-04-13 广州慧扬健康科技有限公司 Partition system under MRI cortex based on three-dimensional full convolutional neural networks

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN101398894A (en) * 2008-06-17 2009-04-01 浙江师范大学 Automobile license plate automatic recognition method and implementing device thereof
CN104834900A (en) * 2015-04-15 2015-08-12 常州飞寻视讯信息科技有限公司 Method and system for vivo detection in combination with acoustic image signal
CN108573226A (en) * 2018-04-08 2018-09-25 浙江大学 The drosophila larvae body segment key independent positioning method returned based on cascade posture

Also Published As

Publication number Publication date
CN109509177A (en) 2019-03-22

Similar Documents

Publication Publication Date Title
CN109509177B (en) Method and device for recognizing brain image
US11751832B2 (en) CTA large vessel occlusion model
WO2023221954A1 (en) Pancreatic tumor image segmentation method and system based on reinforcement learning and attention
CN109146879B (en) Method and device for detecting bone age
CN110533676B (en) Tumor image segmentation method and device and terminal equipment
EP3457356B1 (en) Method and apparatus for fully automated segmentation of joint using patient-specific optimal thresholding and watershed algorithm
CN109272002B (en) Bone age tablet classification method and device
CN113436166A (en) Intracranial aneurysm detection method and system based on magnetic resonance angiography data
CN111932533A (en) Method, device, equipment and medium for positioning vertebrae by CT image
CN110400302B (en) Method and device for determining and displaying focus information in breast image
CN109377484A (en) A kind of method and device detecting the stone age
CN112150472A (en) Three-dimensional jaw bone image segmentation method and device based on CBCT (cone beam computed tomography) and terminal equipment
CN109255786B (en) Method and device for detecting bone age
EP3722996A2 (en) Systems and methods for processing 3d anatomical volumes based on localization of 2d slices thereof
CN110751187B (en) Training method of abnormal area image generation network and related product
CN109447963A (en) A kind of method and device of brain phantom identification
CN115546570A (en) Blood vessel image segmentation method and system based on three-dimensional depth network
CN112862835A (en) Coronary vessel segmentation method, device, equipment and computer readable storage medium
CN112150473A (en) Three-dimensional jaw bone image segmentation modeling method and device based on CT and terminal equipment
CN110533120B (en) Image classification method, device, terminal and storage medium for organ nodule
CN115439533A (en) Method, computer device, readable storage medium and program product for obtaining the location of an intracranial aneurysm at a vessel segment
CN110992310A (en) Method and device for determining partition where mediastinal lymph node is located
CN116862930B (en) Cerebral vessel segmentation method, device, equipment and storage medium suitable for multiple modes
Wu et al. Pneumothorax segmentation in routine computed tomography based on deep neural networks
CN113706541B (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant