CN114638800A - Improved Faster-RCNN-based head shadow mark point positioning method

Improved Faster-RCNN-based head shadow mark point positioning method

Info

Publication number
CN114638800A
CN114638800A
Authority
CN
China
Prior art keywords
point
detection
rcnn
points
mark point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210246266.9A
Other languages
Chinese (zh)
Inventor
刘侠
谢林浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology
Priority to CN202210246266.9A
Publication of CN114638800A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

A head shadow (cephalometric) mark point positioning method based on improved Faster-RCNN. Cephalometric measurement is an indispensable analytical means in the orthodontic treatment process, and machine learning has outstanding advantages in medical image analysis: learning-based methods are not influenced by the subjectivity of the observer, so positioning accuracy is greatly improved, although that accuracy depends to a great extent on the quantity and quality of the training data. The invention provides a backbone network that divides the image into many small regions, extracts feature values through convolution kernels, down-samples through pooling layers to compress the image without affecting image quality, and integrates the features through a fully connected layer to obtain the high-level meaning of the image features, improving detection accuracy. Abnormal points are eliminated using the idea of K-Means++, reducing the probability of false detection and improving accuracy while ensuring detection stability. The method can realize automatic detection of marking points on lateral skull X-ray films and can be applied clinically.

Description

Improved Faster-RCNN-based head shadow mark point positioning method
Technical Field
The invention belongs to the medical field, and particularly relates to an improved Faster-RCNN method for high-precision automatic positioning of anatomical marking points on lateral skull X-ray films.
Background
Cephalometric measurement is an indispensable analysis means in orthodontic and orthognathic diagnosis and treatment. With the development of computer-aided technology, automatic positioning has largely been realized for two-dimensional cephalometric measurement, with high accuracy that greatly reduces the operator's burden. Cone Beam Computed Tomography (CBCT) images are free of defects such as magnification distortion and tissue overlap, can accurately position the anatomical landmarks used in cephalometric measurement and analysis, and have natural advantages for diagnosing and analyzing congenital or acquired craniofacial asymmetric deformity; automatic positioning for three-dimensional cephalometric measurement has therefore become an important research direction in the field.
However, existing automatic two-dimensional cephalometric fixed-point methods are not widely deployed: related software such as Dolphin, WinCeph and Uceph is expensive and mostly used in large hospitals, while in most small and medium-sized hospitals the input of marking points still remains at the level of manual positioning, which entails a heavy workload and strong subjective factors, and human errors inevitably occur, directly affecting measurement accuracy and prediction reliability. Moreover, the marking points are small-size targets that lack appearance information distinguishing them from the background; their feature information is easily lost in a deep convolutional neural network, so missed detections and false detections readily occur. Further improving the accuracy and stability of two-dimensional cephalometric measurement therefore remains an important research topic.
Disclosure of Invention
The invention aims to provide a head shadow mark point positioning method based on improved Faster-RCNN, addressing the low accuracy and stability of current automatic two-dimensional cephalometric fixed-point methods and the strong subjective influence of manual positioning of the two-dimensional cephalometric mark points.
In order to achieve the purpose, the invention is realized by the following technical scheme:
a method for positioning a head shadow mark point based on improved Faster-RCNN specifically comprises the following steps:
step 1: enhancing and denoising an original two-dimensional skull side X-ray film, and adding mark point category information into a label file;
step 2: dividing the processed data set into a training set, a verification set and a test set according to a proportion;
and step 3: performing a CNN part, fusing FPN and ResNet, acquiring strong semantic information and position information by the FPN in a bidirectional fusion mode, and performing feature extraction operation on the enhanced image by the ResNet;
and 4, step 4: traversing the whole feature map by the RPN to generate anchors to map back to the original image, outputting accurate ROIs according to NMS, fixing the input dimension of the full connection layer by using ROI posing through the convolutional layer feature map, and mapping the ROIs output by the RPN to the feature map of ROIploling to perform bbox regression and classification;
and 5: adding a mixed attention mechanism into ResNet50FPN and RPN to improve the detection performance of the model;
and 6: and after the detection model obtains an output image, eliminating abnormal value points by using a K-Means + + algorithm, and realizing detection optimization of the mark points.
The invention has the beneficial effects that:
according to the method for positioning the head shadow mark points based on the improved Faster-RCNN, on one hand, the category information of the mark points is added into a label file, so that the learning capability and the generalization capability of a model are improved, and the detection precision of the model is improved; on the other hand, the FPN network and the ResNet network are combined, so that the influence of high-resolution features on detection is reduced when upsampling is carried out, the feature extraction capability of a main network is improved, and a feature diagram with channel features and spatial features is obtained by adding a mixed attention mechanism; and processing the output image based on a K-Means + + algorithm, establishing an optimization link of the model, and detecting and eliminating abnormal value points, thereby reducing the false detection rate of the model, ensuring the stable operation of the model and improving the detection precision of the model.
Drawings
FIG. 1 is a flow chart of a method for locating a head shadow mark point based on the improved Faster-RCNN;
FIG. 2 is a diagram of an improved Faster-RCNN backbone network;
FIG. 3 is a flow chart based on the K-Means + + algorithm.
Detailed Description
The embodiments of the present invention will be described in further detail with reference to the drawings and examples.
One embodiment of the invention: a head shadow mark point positioning method based on improved Faster-RCNN, as shown in FIG. 1, comprising the following steps:
Step 1: preprocess the original input image; the preprocessing comprises data enhancement, Gaussian-filter denoising, and adding the mark point category information to the label files. First, the pictures are batch-cropped to a size of 600 × 800; the images are then randomly rotated, horizontally flipped and vertically flipped to increase the diversity of the pictures, and normalization reduces the interference caused by nonuniform lighting. After data enhancement the data set becomes easier to learn and a higher degree of fitting can be achieved. Next, Gaussian filtering performs a weighted average over the whole image: the value of each pixel is obtained as the weighted average of that pixel and the other pixel values in its neighborhood. The two-dimensional Gaussian function is calculated as:
$$ G(x, y) = \frac{1}{2\pi\sigma^{2}} \, e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}} $$
where σ denotes the standard deviation and x, y denote the coordinate values. According to orthodontic knowledge and the coordinate information, the marking points are divided into 32 categories, and the category information is added to the label files.
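The preprocessing described above can be sketched as follows (a minimal Python example using OpenCV and NumPy; the rotation range and filter kernel size are assumed values, and the resize call stands in for the batch cropping to 600 × 800):

    import cv2
    import numpy as np

    def preprocess(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
        """Crop, randomly augment, normalize and Gaussian-filter one X-ray image."""
        # Bring every picture to 600 x 800 (approximating the batch cropping).
        image = cv2.resize(image, (600, 800))

        # Random horizontal flip, vertical flip and rotation increase diversity.
        if rng.random() < 0.5:
            image = cv2.flip(image, 1)              # horizontal flip
        if rng.random() < 0.5:
            image = cv2.flip(image, 0)              # vertical flip
        angle = rng.uniform(-15.0, 15.0)            # assumed rotation range, degrees
        h, w = image.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
        image = cv2.warpAffine(image, m, (w, h))

        # Normalization reduces interference from nonuniform lighting.
        image = image.astype(np.float32)
        image = (image - image.mean()) / (image.std() + 1e-8)

        # Gaussian filtering: each pixel becomes a weighted average of its
        # neighborhood; sigma = 1 as stated in claim 2, 5 x 5 kernel assumed.
        return cv2.GaussianBlur(image, (5, 5), sigmaX=1.0)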
Step 2: the backbone network of the Faster R-CNN combines ResNet50 with an FPN network to form ResNet50-FPN, as shown in FIG. 2. The "bottom-up" path is the feed-forward calculation of the convolution network, computing feature maps formed by feature mapping at different proportions with a scaling step of 2; conv1, Res2, Res3, Res4 and Res5 are the convolution blocks of ResNet, and the output feature maps are C2, C3, C4 and C5. In the "top-down" path, the result of transforming each feature map by a 1 × 1 convolution is added to the upsampled result of the layer above, and a 3 × 3 convolution is applied to the merged results M2, M3, M4 and M5 to eliminate aliasing effects; upsampling the more abstract, semantically stronger higher-layer feature maps in this way relieves the influence of high-resolution features on detection. The backbone network is responsible for extracting features from the input image to obtain feature maps, to which a channel and spatial attention mechanism is added, while the RPN region network is responsible for coarsely screening candidate regions from the input feature maps. ResNet50 uses a mixed attention module comprising channel-domain attention and spatial-domain attention in the CBAM structure: the feature map F passes through the channel attention module to obtain the feature M_c, which is multiplied element-wise with F to obtain the feature map F'. The channel-domain attention is calculated as:
$$ M_{c}(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big) = \sigma\big(W_{1}(W_{0}(F^{c}_{\mathrm{avg}})) + W_{1}(W_{0}(F^{c}_{\mathrm{max}}))\big) $$
where c denotes the c-th convolution kernel and σ the Sigmoid function; MLP denotes a multi-layer perceptron with shared parameters; AvgPool(·) and MaxPool(·) denote average pooling and maximum pooling; and F^c_avg and F^c_max respectively denote the features output by global average pooling and global maximum pooling. The feature M_s obtained through the spatial attention module is multiplied element-wise with the feature map F' to obtain the feature map F'', which is added to F' to obtain F'''. The spatial-domain attention is calculated as:
$$ M_{s}(F) = \sigma\big(f^{7\times 7}([\mathrm{AvgPool}(F); \mathrm{MaxPool}(F)])\big) = \sigma\big(f^{7\times 7}([F^{s}_{\mathrm{avg}}; F^{s}_{\mathrm{max}}])\big) $$
where S denotes the spatial domain, M_s denotes the generated spatial attention feature, and F^s_avg, F^s_max ∈ R^(1×H×W) are the features pooled along the channel axis; the convolution layer f^(7×7) uses 7 × 7 convolution kernels, and H and W respectively denote the height and width of the feature map. The FPN network adopts a bidirectional fusion structure that can obtain strong semantic information and strong position information at the same time: after ResNet extracts features from the input picture, the final feature map is obtained by upsampling the higher-layer feature maps and adding them to the lower-layer ones, and by downsampling the lower-layer feature maps and adding them to the higher-layer ones; a splicing layer concatenates the feature maps so that the size of the fused feature map remains unchanged. The feature maps, together with the suggestion frames generated by the RPN network, are input to the ROI layer for the pooling operation, so that each ROI yields a feature map of fixed size. Finally, regression parameters obtained through the fully connected layer fine-tune the suggestion frames, which are decoded into prediction frames at their positions in the original image and drawn directly on the original image to obtain the mark point detection frames.
Step 3: since the Faster R-CNN network ultimately draws detection frames on the original image, a doctor cannot intuitively observe a marked point; moreover, the labels of the original data set are sets of point coordinates, which are meaningless to train on directly, since the features of a single pixel are very limited. A script file is therefore used to convert the point coordinates in all the label files into frame coordinates in batches. The IOU is the ratio of the intersection to the union of a prediction frame from step 2 and the original mark frame; with the threshold set to 0.5, a detection frame with IOU greater than 0.5 is taken as a positive sample and judged to be a target to be detected, and a detection frame with IOU less than 0.5 is taken as a negative sample and judged as 0. The center point of each positive-sample detection frame is then taken as the detection point and marked with a white mark point. The output results show that false detection clearly occurs: for example, two points are detected at the front chin point, and an extra point is detected between the upper lip edge point and the lower lip edge point.
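The point-to-frame conversion and the IOU criterion in this step can be sketched as follows (the half-width used to expand a point into a frame is an assumed parameter):

    import numpy as np

    def point_to_box(x: float, y: float, half: float = 20.0) -> np.ndarray:
        """Expand a landmark point into a frame label (x1, y1, x2, y2)."""
        return np.array([x - half, y - half, x + half, y + half])

    def iou(a: np.ndarray, b: np.ndarray) -> float:
        """Ratio of intersection to union of two (x1, y1, x2, y2) frames."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union

    def detection_point(box: np.ndarray) -> tuple:
        """Center of a positive-sample frame (IOU > 0.5) is the detection point."""
        return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)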
Step 4: the K-Means++ algorithm is adopted to divide the detection points into K clusters according to the distances between them, where the value of K refers to the label file: if the label file contains x coordinate points, K is taken as x, so the image has K clusters with initial centroids. The algorithm flow chart is shown in FIG. 3, and a code sketch follows the steps below;
Firstly, a centroid is generated randomly, and the K centroids are generated one by one on the principle that they are as far apart as possible;
Secondly, for all detection points in the image, the distances to the K cluster centers are computed and each point is assigned to the cluster of the nearest center; this is iterated n times;
Thirdly, during each iteration the center point of each cluster is updated, e.g. with the mean;
Fourthly, after the K cluster centers have been iteratively updated by the second and third steps, if the positions change very little (a threshold can be set), the stable state is considered reached and the iteration ends;
Finally, the detection point closest to each centroid is selected as the prediction point and the other points are screened out, improving detection precision.

Claims (7)

1. A method for positioning a head shadow mark point based on improved Faster-RCNN is characterized in that the method for detecting the mark point comprises the following steps:
Step one: the preprocessing part is used for enhancing and denoising the image and classifying the mark points;
Step two: dividing the data set into a training set, a verification set and a test set, wherein the proportion can be set to 7:2:1;
Step three: the model backbone network draws on ResNet and FPN, combined to form ResNet50-FPN, and is responsible for extracting features from the input data and outputting them as a set of feature maps;
Step four: the candidate region network RPN is responsible for extracting candidate regions from the feature maps;
Step five: providing a channel attention and spatial attention mechanism for the backbone network and the candidate region network;
Step six: applying the K-Means++ algorithm to the output image to divide the detection points into K clusters according to the distances between them, where K takes its value from the number of coordinate points in the reference label file.
2. The improved Faster-RCNN-based method for locating the head shadow mark point according to claim 1, wherein in step one the lateral skull X-ray image is enhanced and denoised, the noise being eliminated through Gaussian filtering, with the two-dimensional Gaussian function calculated as:
$$ G(x, y) = \frac{1}{2\pi\sigma^{2}} \, e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}} $$
where σ denotes the standard deviation, taken as 1, which gives an obvious image smoothing effect;
the dento-maxillofacial mark points are divided into four major categories, namely cranial mark points (5), maxillary mark points (7), mandibular mark points (9) and soft-tissue profile mark points (11), 32 mark points in total, and are classified according to their coordinate positions, which improves the learning ability and generalization ability of the model.
3. The improved Faster-RCNN-based method for locating the head shadow mark points according to claim 1, wherein the data set in step two is divided into a training set, a verification set and a test set, wherein the training set is used to fit the model to the data samples; the verification set is a sample set set aside during model training, used for preliminary evaluation while tuning the model's hyper-parameters and capability, and usually used during iterative training to verify the generalization ability of the current model and decide whether to stop training; and the test set is used to evaluate the final generalization ability of the model.
4. The improved Faster-RCNN-based method for locating the position of the head shadow mark point as claimed in claim 1, wherein the FPN network in the third step has two paths:
one path is the feed-forward calculation of the convolution network, computing feature maps formed by feature mapping at different proportions, with a scaling step of 2; the strides of the ResNet convolution blocks are set to 2, 4, 6, 8 and 12 respectively;
the other path reduces the influence of high-resolution features on detection by upsampling the more abstract, semantically stronger high-level feature maps; the result of combining the two paths is passed through a 3 × 3 convolution to reduce the aliasing effect caused by upsampling.
5. The improved Faster-RCNN-based method for locating the head shadow mark points according to claim 1, wherein the RPN in step four first applies a 3 × 3 convolution to better fuse the information around each point; each point has 9 anchors (k = 9), and each anchor is classified into foreground and background, i.e. 2k = 18 dimensions, so the classification branch outputs W × H × 18; the regression branch needs four offsets per frame, i.e. 4k = 36 dimensions, outputting W × H × 36. The anchors are then sorted by their foreground softmax scores to find the optimal ones among about 2000 anchors, which are mapped back to the original image, and the top candidates in the NMS ordering from large to small are extracted as the output proposals.
6. The improved Faster-RCNN-based head shadow mark point positioning method according to claim 1, wherein in step five a mixed attention mechanism is added to the backbone network and the candidate region network, the channel-domain attention idea drawing on the Inception and MobileNet networks; the channel-domain attention is calculated as:
$$ M_{c}(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big) = \sigma\big(W_{1}(W_{0}(F^{c}_{\mathrm{avg}})) + W_{1}(W_{0}(F^{c}_{\mathrm{max}}))\big) $$
where c denotes the c-th convolution kernel and σ the Sigmoid function; MLP denotes a multi-layer perceptron with shared parameters; AvgPool(·) and MaxPool(·) denote average pooling and maximum pooling; and F^c_avg and F^c_max respectively denote the features output by global average pooling and global maximum pooling. The spatial-domain attention is calculated as:
$$ M_{s}(F) = \sigma\big(f^{7\times 7}([\mathrm{AvgPool}(F); \mathrm{MaxPool}(F)])\big) = \sigma\big(f^{7\times 7}([F^{s}_{\mathrm{avg}}; F^{s}_{\mathrm{max}}])\big) $$
where S denotes the spatial domain, M_s denotes the generated spatial attention feature, and F^s_avg, F^s_max ∈ R^(1×H×W); the convolution layer uses 7 × 7 convolution kernels, and H and W respectively denote the height and width of the feature map.
7. The improved Faster-RCNN-based method for locating the head shadow mark point according to claim 1, wherein in step six the K-Means++ algorithm is adopted to reduce the false detection rate of the detected mark points and improve the detection accuracy of the model, the implementation comprising the following steps:
(1) a centroid is generated randomly, and the K centroids are generated one by one on the principle that they are as far apart as possible;
(2) for all detection points in the image, the distances to the K cluster centers are computed and each point is assigned to the cluster of the nearest center; this is iterated n times;
(3) during each iteration, the center point of each cluster is updated, e.g. with the mean;
(4) after the K cluster centers have been iteratively updated by steps (2) and (3), if the positions change very little (a threshold can be set), the stable state is considered reached and the iteration ends;
(5) the detection point closest to each centroid is selected as the prediction point, and the other points are screened out.
CN202210246266.9A 2022-03-14 2022-03-14 Improved Faster-RCNN-based head shadow mark point positioning method Pending CN114638800A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210246266.9A CN114638800A (en) 2022-03-14 2022-03-14 Improved Faster-RCNN-based head shadow mark point positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210246266.9A CN114638800A (en) 2022-03-14 2022-03-14 Improved Faster-RCNN-based head shadow mark point positioning method

Publications (1)

Publication Number Publication Date
CN114638800A 2022-06-17

Family

ID=81948736

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210246266.9A Pending CN114638800A (en) 2022-03-14 2022-03-14 Improved Faster-RCNN-based head shadow mark point positioning method

Country Status (1)

Country Link
CN (1) CN114638800A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115345938A (en) * 2022-10-18 2022-11-15 汉斯夫(杭州)医学科技有限公司 Global-to-local-based head shadow mark point positioning method, equipment and medium
WO2024016575A1 (en) * 2022-07-22 2024-01-25 重庆文理学院 Cbam mechanism-based residual network medical image auxiliary detection method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination