CN113948190A - Method and equipment for automatically identifying X-ray skull positive position film cephalogram measurement mark points - Google Patents

Method and equipment for automatically identifying X-ray skull positive position film cephalogram measurement mark points

Info

Publication number
CN113948190A
Authority
CN
China
Prior art keywords
image
ray
point
model
skull
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111026339.5A
Other languages
Chinese (zh)
Inventor
姚旭峰
赵从义
金宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Medicine and Health Sciences
Original Assignee
Shanghai University of Medicine and Health Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Medicine and Health Sciences filed Critical Shanghai University of Medicine and Health Sciences
Priority to CN202111026339.5A priority Critical patent/CN113948190A/en
Publication of CN113948190A publication Critical patent/CN113948190A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method and equipment for automatically identifying cephalogram measurement mark points on X-ray skull positive position films. The method comprises: acquiring an X-ray skull positive position film image to be identified and preprocessing it; feeding the preprocessed image into a trained recognition model to obtain the automatic recognition result. The recognition model is constructed based on an improved YOLOv3 network, and its training process comprises: acquiring a preprocessed set of X-ray skull positive position film image samples and marking the mark point positions in each image; performing data enhancement to obtain a training data set and a test data set; learning with the improved YOLOv3 network to obtain a converged weight model; and testing the images in the test data set with the weight model, adjusting the parameters of the weight model, and training again to obtain the final recognition model. Compared with the prior art, the method identifies mark point positions accurately and quickly and reduces the workload of routine clinical service.

Description

Method and equipment for automatically identifying X-ray skull positive position film cephalogram measurement mark points
Technical Field
The invention belongs to the field of computer-aided medicine, and particularly relates to an X-ray skull positive position film cephalogram measurement mark point automatic identification method and equipment based on deep learning.
Background
X-ray cephalometry was proposed independently by Broadbent and Hofrath in 1931. A measurement tracing is first drawn on an X-ray cephalometric film and the measurement mark points are determined; the line distances, angles and line-distance ratios drawn from these mark points are then measured and analysed to determine the structure and interrelations of the soft and hard tissues of the cranium, jaw, face and teeth, so as to understand the mechanism of a deformity and help doctors make a correct diagnosis and correction design.
Calibration of the cephalometric mark points must be performed manually by specially trained clinicians, yet manual calibration remains difficult: the manual labelling process introduces measurement errors, and it is time-consuming, each cranial film taking a specialist 15 to 20 minutes on average. Fully automatic identification of the cephalometric mark points would ease the workload of routine clinical service and give the orthodontist more time to develop an optimal treatment plan. At present, few fully automatic methods for identifying X-ray skull positive position film measurement mark points have been proposed, and the proposed methods are not accurate enough, depend heavily on data quality and have low robustness, so they cannot be used in clinical practice. It is therefore necessary to develop an automatic model with fast recognition and high precision.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a method and equipment for automatically identifying X-ray skull positive position film cephalogram measurement mark points.
The purpose of the invention can be realized by the following technical scheme:
an automatic identification method for X-ray skull positive position film head photograph measurement mark points comprises the following steps:
acquiring an X-ray skull positive position film image to be identified and preprocessing the image;
taking the preprocessed image as the input of the trained recognition model to obtain an automatic recognition result;
The recognition model is constructed based on an improved YOLOv3 network, and its training process comprises the following steps:
1) acquiring a preprocessed X-ray skull positive position film image sample set and marking the mark point positions in each image;
2) performing data enhancement on the marked images to obtain a training data set and a test data set;
3) learning the mark point positions in the training data set with the improved YOLOv3 network to obtain a converged weight model;
4) testing the images in the test data set with the weight model, adjusting the parameters of the weight model, and training again to obtain the final recognition model.
Further, the preprocessing includes desensitization processing and image transformation.
Further, marking the mark point positions in each image specifically comprises:
using annotation software for target segmentation to box-select and classify the mark points in the image, obtaining the image with annotation boxes and a corresponding annotation information file.
Further, the categories of the landmark classifications include a right frontal zygomatic point, a left frontal zygomatic point, a right zygomatic arch point, an anterior nasal spine, a posterior nasal spine, an upper middle incisor point, a lower middle incisor point, a right upper jaw first molar point, a left upper jaw first molar point, and a submental point.
Further, the data enhancement specifically comprises:
rotating the positive position film image by 90°, 180° and 270° using python to augment the data while modifying the annotation information file accordingly.
Further, the size of the annotation box is 5 × 5 pixels.
Further, the improved YOLOv3 network comprises a feature extraction part and a prediction part, wherein the prediction part comprises four branches, an SPP module is introduced into a first branch, each branch performs a splicing operation on tensors with different scales to obtain prediction results with different scales, and a final prediction result is obtained based on the prediction results with different scales.
Further, the feature extraction part and the prediction part each include a DBL module, which comprises a convolutional layer, a BN layer and a Leaky ReLU activation layer.
Further, in the testing process of step 4), the detection precision of each mark point is obtained through a Euclidean distance error evaluation index, and the parameters of the weight model are adjusted based on the detection precision.
The present invention also provides an electronic device comprising:
one or more processors; a memory; and one or more programs stored in the memory, the one or more programs including instructions for performing the X-ray skull positive position film cephalogram measurement mark point automatic identification method described above.
Compared with the prior art, the invention has the following beneficial effects:
1. The method for automatically identifying X-ray skull positive position film cephalogram measurement mark points uses a computer vision algorithm. It detects mark points with high speed and precision, can process several images per second, is convenient to operate and easy to implement, and is far superior to manual work.
2. The method achieves high mark point precision: in the experiment the 10 mark points were located to within 1.37-3.4 mm, which fully meets the requirements of dental clinics and helps dental clinicians save time for making an optimal treatment plan.
3. In the structural design of the model's prediction part, detection of the small target objects, i.e. the mark points, is guaranteed by adding one more detection scale and fusing multi-scale features. An SPP module is added in front of the first branch to fuse local and global features, enriching the expressive capability of the final feature map; four predictions of different scales are obtained by splicing tensors of different scales, which markedly improves detection precision.
4. The invention identifies the mark points with an improved YOLOv3 multi-scale feature extraction and detection network; this structure makes the network deeper, extracts deep semantic information and improves detection precision.
5. The basic components of the improved YOLOv3 network include a BN layer, which accelerates convergence: the data of each layer are normalized to zero mean and unit variance so that every layer sees the same data distribution, making the network easy to converge and preventing gradient explosion and gradient vanishing.
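The normalization the BN layer applies can be illustrated with a small numpy sketch (inference-style normalization only; the learned scale and shift parameters of a real BN layer are omitted, and the function name is an assumption):

```python
import numpy as np

def batch_normalize(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize each feature (column) of a batch to zero mean and unit
    variance, as a BN layer does before its learned scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)
```

After this transform every column is centred at zero with (near-)unit variance, which is exactly the property that keeps activations in a stable range and prevents exploding or vanishing gradients.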
Drawings
FIG. 1 is a flow chart of the automatic identification of X-ray skull positive position film cephalogram measurement mark points;
FIG. 2 is a schematic diagram of an X-ray skull positive position film and the mark point positions;
FIG. 3 is a framework diagram of the improved YOLOv3 network structure;
FIG. 4 shows a test image identified by the model after training.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
This embodiment provides a method for automatically identifying X-ray skull positive position film cephalogram measurement mark points. An X-ray skull positive position film image to be identified is first acquired and preprocessed (desensitization, image transformation, etc.), and the preprocessed image is then fed into a trained recognition model to obtain the automatic recognition result; the recognition model is constructed based on an improved YOLOv3 network. The method uses the improved YOLOv3 multi-scale feature extraction and detection network to locate and classify multiple small-scale mark point targets accurately and quickly.
As shown in fig. 1, in this embodiment, the training process of the recognition model includes the following steps:
step S1, acquiring an X-ray skull positive position film image.
Step S2, preprocessing the X-ray skull positive position film image.
The X-ray skull positive position films collected from the hospital generally contain sensitive information such as the patient's name, date of birth and the hospital where the film was taken; in this embodiment, image editing software and python's Pillow (PIL) image processing library are used to erase the sensitive information.
Since the model requires a fixed input picture size, the positive position film must be adjusted to a suitable size. In this embodiment, the Pillow image processing library is used to resize the positive position film to 608 × 608.
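As an illustrative sketch only (the function name and the greyscale conversion are assumptions, not stated in the patent), the resizing step can be written with Pillow as:

```python
from PIL import Image  # Pillow (PIL) image processing library

def resize_to_model_input(img: Image.Image, size: int = 608) -> Image.Image:
    """Resize a positive position film image to the fixed model input size.

    The film is converted to single-channel greyscale first, since an
    X-ray film carries no colour information."""
    return img.convert("L").resize((size, size), Image.BILINEAR)
```

The same call is applied to every film before annotation so that the 5 × 5 annotation boxes are defined in the model's input coordinate system.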
Step S3, marking the mark point positions: annotation software for target segmentation is used to box-select and classify the mark points in the image, obtaining the image with annotation boxes and a corresponding annotation information file.
In this embodiment, a professional dental clinician manually marks the mark points on the converted positive position films with the LabelImg annotation software for target segmentation. According to clinical requirements, the sphenoid saddle point, right frontal zygomatic point, left frontal zygomatic point, right zygomatic point, anterior nasal spine, posterior nasal spine, upper middle incisor point, lower middle incisor point, right upper jaw first molar point, left upper jaw first molar point and submental point are marked, the mark points numbering 10 in total; as shown in fig. 2, the pixel size of each annotation box is fixed at 5 × 5.
Step S4, performing data enhancement on the annotated positive position film images and making the training data set and test data set.
In this embodiment, the 160 positive position film images are rotated by 90°, 180° and 270° with the Pillow image processing library to augment the data, and the annotation information files are processed correspondingly, giving a data set of 640 positive position films: 480 are used as the training set and 160 as the test set.
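A minimal sketch of this augmentation step, assuming square films and axis-aligned annotation boxes stored as (x, y, w, h) top-left pixel tuples (the helper name and the box format are assumptions, not part of the patent):

```python
from PIL import Image

def rotate_with_labels(img: Image.Image, boxes, angle: int):
    """Rotate a square film image by 90/180/270 degrees (counter-clockwise,
    as PIL does) and remap its annotation boxes (x, y, w, h) accordingly."""
    assert angle in (90, 180, 270)
    s = img.size[0]                       # square input assumed
    rotated = img.rotate(angle, expand=True)
    new_boxes = []
    for x, y, w, h in boxes:
        if angle == 90:                   # (x, y) -> (y, s - x - w)
            new_boxes.append((y, s - x - w, h, w))
        elif angle == 180:                # (x, y) -> (s - x - w, s - y - h)
            new_boxes.append((s - x - w, s - y - h, w, h))
        else:                             # 270: (x, y) -> (s - y - h, x)
            new_boxes.append((s - y - h, x, h, w))
    return rotated, new_boxes
```

Applying all three rotations to each of the 160 films yields the 4x amplification (640 films) described above, with every annotation box remapped into the rotated frame.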
Step S5, constructing the recognition model based on the improved YOLOv3 network and training it. Specifically, the improved YOLOv3 network learns the mark point positions in the X-ray skull positive position film image data, fitting the data with deep multilayer convolution to obtain a converged weight model. The learned weight model is then used to test the X-ray skull positive position films of the test data set, the parameters are adjusted, and the model is retrained to obtain the final recognition model.
The improved YOLOv3 is an end-to-end target detection model, as shown in fig. 3. Its basic idea is as follows. First, features are extracted from the input through a feature extraction network to obtain feature map outputs of specific sizes. In this embodiment the feature extraction part, i.e. the improved YOLOv3 backbone, is a Darknet-53 network, which borrows the residual structure of ResNet: it has 53 convolutional layers containing 23 residual blocks. This structure makes the network deeper and extracts deep semantic information, covering more of the underlying features of each mark point and the category corresponding to those features. The 5 resX structures in the network diagram are likewise built from the DBL basic component, which consists of a convolutional layer, a BN layer and a Leaky ReLU activation layer; the 5 downsampling convolutions in Darknet-53 have stride 2, so after 5 reductions the feature map shrinks to 1/32 of the original input size. Second, the prediction part consists of four branches: four predictions of different scales are obtained by splicing (concat) tensors of different scales, an SPP module is added in front of the first branch to obtain a larger receptive field, the four outputs of different scales correspond to four convolutional layers, and one branch is added to obtain more low-level features.
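The SPP idea described above, concatenating the feature map with max-pooled copies at several window sizes to fuse local and global features, can be sketched in plain numpy. The kernel sizes 5, 9 and 13 are the choice commonly used with YOLOv3-SPP and are an assumption here, as the patent does not state them:

```python
import numpy as np

def max_pool_same(x: np.ndarray, k: int) -> np.ndarray:
    """Stride-1 max pooling with 'same' padding on a (C, H, W) feature map."""
    c, h, w = x.shape
    p = k // 2
    padded = np.full((c, h + 2 * p, w + 2 * p), -np.inf)
    padded[:, p:p + h, p:p + w] = x
    out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            out[:, i, j] = padded[:, i:i + k, j:j + k].max(axis=(1, 2))
    return out

def spp(x: np.ndarray, kernels=(5, 9, 13)) -> np.ndarray:
    """SPP module sketch: concatenate the input with max-pooled copies at
    several kernel sizes, so local detail and wide context coexist in the
    output channels."""
    return np.concatenate([x] + [max_pool_same(x, k) for k in kernels], axis=0)
```

With three pooling scales the channel count grows fourfold; in the real network a 1 × 1 convolution after the concat brings it back down.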
During testing, the detection precision of each mark point is obtained through the Euclidean distance error evaluation index, and the parameters of the weight model are adjusted based on this precision. In this embodiment the 160 positive position films in the test set are identified and the mark point precision is evaluated by the Euclidean distance error, with the following formula:
D = (1/N) Σ sqrt((x1 - x0)^2 + (y1 - y0)^2)

where D represents the mean error between the predicted and the actual mark point positions, x1 and y1 the abscissa and ordinate of the predicted mark point, x0 and y0 the abscissa and ordinate of the actual mark point, and N the number of predicted films. The precision of the 10 mark points finally lies within the range 1.37-3.4 mm.
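The evaluation index above can be sketched in plain Python (the function name and the list-of-(x, y)-tuples input format are assumptions):

```python
import math

def mean_landmark_error(predicted, actual):
    """Mean Euclidean distance between predicted and actual mark point
    positions; both arguments are equal-length lists of (x, y) tuples."""
    assert len(predicted) == len(actual) and predicted
    total = sum(math.hypot(x1 - x0, y1 - y0)
                for (x1, y1), (x0, y0) in zip(predicted, actual))
    return total / len(predicted)
```

Converting the pixel error to millimetres (to obtain figures such as the 1.37-3.4 mm range above) additionally requires the film's pixel spacing, which is not given here.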
In a preferred embodiment, after the model test is stable the model is optimized to further improve precision. In this embodiment, by comparing accuracies, the hyper-parameters of the model are finally set as follows: batch size 8, number of iterations 2000, learning rate 0.001.
Step S6, automatically recognizing the mark points with the trained recognition model. The final recognition result of this embodiment is shown in fig. 4.
In summary, the invention constructs an improved YOLOv3 network model as the positive position film cephalogram measurement mark point recognition model, selects suitable image data as the training set and validation set of the initial recognition model, trains and validates the initial model with these sets, and takes the best model as the final positive position film cephalogram measurement mark point recognition model.
The above functions, if implemented in the form of software functional units and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another embodiment, an electronic device is provided that includes one or more processors, memory, and one or more programs stored in the memory, the one or more programs including instructions for performing the X-ray cranial orthopedics head shadow measurement landmark automatic identification method as described above.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. An automatic identification method for X-ray skull positive position film cephalogram measurement mark points, characterized by comprising the following steps:
acquiring an X-ray skull positive position film image to be identified and preprocessing the image;
taking the preprocessed image as the input of the trained recognition model to obtain an automatic recognition result;
the recognition model is constructed based on an improved YOLOv3 network, and its training process comprises the following steps:
1) acquiring a preprocessed X-ray skull positive position film image sample set and marking the mark point positions in each image;
2) performing data enhancement on the marked images to obtain a training data set and a test data set;
3) learning the mark point positions in the training data set with the improved YOLOv3 network to obtain a converged weight model;
4) testing the images in the test data set with the weight model, adjusting the parameters of the weight model, and training again to obtain the final recognition model.
2. The method for automatically identifying X-ray skull positive position film cephalogram measurement mark points according to claim 1, characterized in that the preprocessing comprises desensitization processing and image transformation.
3. The method for automatically identifying X-ray skull positive position film cephalogram measurement mark points according to claim 1, characterized in that marking the mark point positions in each image specifically comprises:
using annotation software for target segmentation to box-select and classify the mark points in the image, obtaining the image with annotation boxes and a corresponding annotation information file.
4. The method according to claim 3, wherein the categories of the marker classification include right frontal zygomatic point, left frontal zygomatic point, right zygomatic arch point, anterior nasal spine, posterior nasal spine, upper middle incisor point, lower middle incisor point, right upper jaw first molar point, left upper jaw first molar point and submental point.
5. The method for automatically identifying X-ray skull positive position film cephalogram measurement mark points according to claim 3, characterized in that the data enhancement specifically comprises:
rotating the positive position film image by 90°, 180° and 270° using python to augment the data while modifying the annotation information file accordingly.
6. The method for automatically identifying X-ray skull positive position film cephalogram measurement mark points according to claim 3, characterized in that the size of the annotation box is 5 × 5.
7. The method for automatically identifying X-ray skull positive position film cephalogram measurement mark points according to claim 1, characterized in that the improved YOLOv3 network comprises a feature extraction part and a prediction part; the prediction part comprises four branches, an SPP module is introduced into the first branch, each branch performs a splicing operation on tensors of different scales to obtain prediction results of different scales, and the final prediction result is obtained from these prediction results of different scales.
8. The method according to claim 7, characterized in that the feature extraction part and the prediction part each include a DBL module comprising a convolutional layer, a BN layer and a Leaky ReLU activation layer.
9. The method for automatically identifying X-ray skull positive position film cephalogram measurement mark points according to claim 1, characterized in that, in the testing process of step 4), the detection precision of each mark point is obtained through a Euclidean distance error evaluation index and the parameters of the weight model are adjusted based on the detection precision.
10. An electronic device, comprising:
one or more processors;
a memory; and
one or more programs stored in the memory, the one or more programs including instructions for performing the method for automatically identifying X-ray skull positive position film cephalogram measurement mark points according to any of claims 1-9.
CN202111026339.5A 2021-09-02 2021-09-02 Method and equipment for automatically identifying X-ray skull positive position film cephalogram measurement mark points Pending CN113948190A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111026339.5A CN113948190A (en) 2021-09-02 2021-09-02 Method and equipment for automatically identifying X-ray skull positive position film cephalogram measurement mark points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111026339.5A CN113948190A (en) 2021-09-02 2021-09-02 Method and equipment for automatically identifying X-ray skull positive position film cephalogram measurement mark points

Publications (1)

Publication Number Publication Date
CN113948190A true CN113948190A (en) 2022-01-18

Family

ID=79327845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111026339.5A Pending CN113948190A (en) 2021-09-02 2021-09-02 Method and equipment for automatically identifying X-ray skull positive position film cephalogram measurement mark points

Country Status (1)

Country Link
CN (1) CN113948190A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797341A (en) * 2023-01-16 2023-03-14 四川大学 Method for automatically and immediately judging natural head position of skull side position X-ray film
CN115797730A (en) * 2023-01-29 2023-03-14 有方(合肥)医疗科技有限公司 Model training method and device, and head shadow measurement key point positioning method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111083922A (en) * 2018-08-21 2020-04-28 数码牙科集线 Dental image analysis method for correction diagnosis and apparatus using the same
WO2020164282A1 (en) * 2019-02-14 2020-08-20 平安科技(深圳)有限公司 Yolo-based image target recognition method and apparatus, electronic device, and storage medium
CN113241155A (en) * 2021-03-31 2021-08-10 正雅齿科科技(上海)有限公司 Method and system for acquiring mark points in lateral skull tablet


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JI-HOON PARK et al.: "Automated identification of cephalometric landmarks: Part 1—Comparisons between the latest deep-learning methods YOLOV3 and SSD", Original Article *
豆世豪: "Improved YOLOv3 object detection method for road scenes", Computer Knowledge and Technology *


Similar Documents

Publication Publication Date Title
Wang et al. Automatic analysis of lateral cephalograms based on multiresolution decision tree regression voting
CN110390674B (en) Image processing method, device, storage medium, equipment and system
CN113948190A (en) Method and equipment for automatically identifying X-ray skull positive position film cephalogram measurement mark points
CN112614133B (en) Three-dimensional pulmonary nodule detection model training method and device without anchor point frame
CN110246580B (en) Cranial image analysis method and system based on neural network and random forest
CN112884060A (en) Image annotation method and device, electronic equipment and storage medium
US11980491B2 (en) Automatic recognition method for measurement point in cephalo image
CN111967539B (en) Recognition method and device for maxillofacial fracture based on CBCT database and terminal equipment
Vera et al. Artificial intelligence techniques for automatic detection of peri-implant marginal bone remodeling in intraoral radiographs
CN113989269B (en) Traditional Chinese medicine tongue image tooth trace automatic detection method based on convolutional neural network multi-scale feature fusion
CN113450306B (en) Method for providing fracture detection tool
CN111967540B (en) Maxillofacial fracture identification method and device based on CT database and terminal equipment
CN115578373A (en) Bone age assessment method, device, equipment and medium based on global and local feature cooperation
CN115100180A (en) Pneumonia feature identification method and device based on neural network model and electronic equipment
CN114359133A (en) Hand bone image analysis method based on feature extraction and related equipment
CN116188879B (en) Image classification and image classification model training method, device, equipment and medium
CN117746167B (en) Training method and classifying method for oral panorama image swing bit error classification model
Madan et al. Unboxing the blackbox-Visualizing the model on hand radiographs in skeletal bone age assessment
Millan-Arias et al. General Cephalometric Landmark Detection for Different Source of X-Ray Images
CN117476219B (en) Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis
CN113781453B (en) Scoliosis advancing and expanding prediction method and device based on X-ray film
CN115797341B (en) Method for automatically and immediately judging natural head position of skull side position X-ray film
WO2021128230A1 (en) Deep learning-based medical image processing method and system, and computer device
Vera González et al. Artificial Intelligence Techniques for Automatic Detection of Peri‑implant Marginal Bone Remodeling in Intraoral Radiographs
Matthieu et al. Image augmentation and automated measurement of endotracheal-tube-to-carina distance on chest radiographs in intensive care unit using a deep learning model with external validation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination