CN111967539B - Recognition method and device for maxillofacial fracture based on CBCT database and terminal equipment - Google Patents

Recognition method and device for maxillofacial fracture based on CBCT database and terminal equipment

Info

Publication number
CN111967539B
CN111967539B (application CN202011046290.5A)
Authority
CN
China
Prior art keywords
fracture
cbct
image block
model
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011046290.5A
Other languages
Chinese (zh)
Other versions
CN111967539A (en)
Inventor
贺洋
揭璧朦
徐子能
张益�
仝雁行
彭歆
丁鹏
白海龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Deepcare Information Technology Co ltd
Peking University School of Stomatology
Original Assignee
Beijing Deepcare Information Technology Co ltd
Peking University School of Stomatology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Deepcare Information Technology Co ltd, Peking University School of Stomatology filed Critical Beijing Deepcare Information Technology Co ltd
Priority to CN202011046290.5A priority Critical patent/CN111967539B/en
Publication of CN111967539A publication Critical patent/CN111967539A/en
Application granted granted Critical
Publication of CN111967539B publication Critical patent/CN111967539B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10081 - Computed x-ray tomography [CT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Public Health (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Evolutionary Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of image processing and provides a method for recognizing maxillofacial fractures based on a CBCT database, which comprises the following steps: decomposing the maxillofacial CBCT data to be identified into image block sequences of different anatomical regions according to the anatomical structure; inputting each image block in the image block sequence into the corresponding trained fracture discrimination model to obtain a fracture judgment result sequence for the image blocks; and determining that the fracture judgment result sequence satisfies the fracture judgment conditions, and outputting the corresponding image blocks and fracture positions. A corresponding CBCT database-based maxillofacial fracture recognition apparatus and a terminal device are also provided. The embodiments of the invention are suitable for identifying fractures in CBCT images among medical images and improve recognition efficiency.

Description

Recognition method and device for maxillofacial fracture based on CBCT database and terminal equipment
Technical Field
The invention relates to the field of image processing, and in particular to a method for recognizing maxillofacial fractures based on a CBCT database, a corresponding recognition apparatus, a terminal device and a corresponding storage medium.
Background
Maxillofacial fracture is a common type of trauma in traffic accidents, accidental injuries, competitive sports and similar situations. The anatomical structures involved in maxillofacial fractures are complex and varied, so diagnosis based on clinical symptoms and signs alone is difficult and imaging assistance is usually required. Three-dimensional cone beam CT (CBCT) imaging shows the position and displacement direction of a fracture more clearly and intuitively, and has advantages that conventional CT lacks, such as low radiation dose, high spatial resolution and convenient application; in recent years it has been widely used in surgery for maxillofacial tumors and plastic surgery, and as an auxiliary examination for joint disease. These advantages also give CBCT good application prospects in on-site trauma care at large-scale events such as the Winter Olympic Games. However, a CBCT image contains a large amount of information, and it is difficult to fully and accurately evaluate details such as bone structures, boundaries and occult fracture lines through visual inspection by a clinician alone. The traditional diagnosis and treatment mode therefore struggles in large-scale events and emergency scenes that demand speed, accuracy and efficiency.
In recent years, deep learning techniques have gradually been applied in the medical field and have achieved good results in detecting diseases such as cancer, cataract, fracture and cerebral hemorrhage. The convolutional neural network (CNN) is the most advanced technology in medical image diagnosis; its high accuracy and stability compensate for the missed diagnoses and misdiagnoses of human visual reading, and its classification accuracy for pulmonary tuberculosis, pulmonary nodule CT images, breast cancer, brain lesions, cataract grading and other conditions has been shown to reach the level of human experts.
In the existing CBCT image-based maxillofacial fracture identification workflow, a professional radiologist makes a manual judgment with the aid of related software (such as Mimics Research 19.0), covering maxillary fractures, zygomatic fractures, mandibular angle and ascending ramus fractures, alveolar fractures, chin fractures, condylar fractures and coronoid process fractures. The fracture types are complex, and manual diagnosis of fractures is inefficient.
Disclosure of Invention
In view of the above, the present invention is directed to a method, an apparatus and a terminal device for identifying maxillofacial fractures based on a CBCT database, so as to at least partially solve the above problems. Based on a convolutional neural network algorithm, the invention performs deep learning training on maxillofacial fracture CBCT data, verifies the model on a test set, and lets artificial intelligence learn from human experience to assist in diagnosing frostbite and maxillofacial trauma, thereby forming an intelligent diagnosis platform and improving the stability and responsiveness of diagnosis and treatment. The invention addresses the efficiency problem of maxillofacial fracture identification and, by automatically detecting fracture positions, assists doctors in improving diagnostic efficiency.
In a first aspect of the present invention, there is provided a method for identifying maxillofacial fractures based on a CBCT database, the method comprising: decomposing the maxillofacial CBCT data to be identified into image block sequences of different anatomical regions according to the anatomical structure; inputting each image block in the image block sequence into the corresponding trained fracture discrimination model to obtain a fracture judgment result sequence for the image blocks; and determining that the fracture judgment result sequence satisfies the fracture judgment conditions, and outputting the corresponding image blocks and fracture positions.
Optionally, decomposing the maxillofacial CBCT data to be identified into image block sequences of different anatomical regions according to the anatomical structure includes: inputting the CBCT image sequence of the CBCT data into a trained bone structure positioning model and generating a sequence of detection rectangular boxes; locating the tomographic slice range of each anatomical region according to the rectangular boxes of the different anatomical regions; and extracting the image block sequence of each anatomical region layer by layer from the CBCT image sequence within the located slice range.
Optionally, the bone structure positioning model is a convolutional neural network model whose input is the CBCT image sequence of the CBCT data and whose output is the rectangular box detection result corresponding to each image in the CBCT image sequence.
Optionally, the trained bone structure positioning model is obtained through the following steps: collecting CBCT data with maxillofacial fracture characteristics and annotating them according to the anatomical structure to serve as a first training sample; dividing the first training sample into a training set and a validation set; presetting the parameters of the bone structure positioning model and then iteratively training it on the training set with a gradient descent algorithm; and determining the optimal parameters of the bone structure positioning model on the validation set.
Optionally, before dividing the training samples into the training set and the validation set, the method further includes: mapping the pixel gray values of the CBCT image sequence to a preset pixel gray value range.
Optionally, the fracture discrimination model is: one of EfficientNet 3, ResNet, DenseNet, and 3D-ResNet.
Optionally, the trained fracture discrimination model is obtained through the following steps: determining the model structure of the fracture discrimination model; decomposing the first training sample into image block sequences of different anatomical regions according to the anatomical structure, where the image block sequence of each anatomical region forms a partition training sample; and training the fracture discrimination model of each anatomical region as follows: dividing the partition training samples corresponding to the anatomical region into a training set and a validation set; presetting the parameters of the fracture discrimination model and then iteratively training it on the training set with a gradient descent algorithm; and determining the optimal parameters of the fracture discrimination model on the validation set.
Optionally, determining that the fracture judgment result sequence satisfies the fracture judgment conditions and outputting the corresponding image blocks and fracture positions includes: acquiring the number and positional relationship of the 'fracture' results in the fracture judgment result sequence; and, upon determining that the number and positional relationship conform to fracture characteristics, outputting the anatomical region whose judgment result is 'fracture' as the fracture position and outputting the corresponding image blocks.
In a second aspect of the present invention, there is also provided a CBCT database-based maxillofacial fracture recognition apparatus, the apparatus comprising: an image input module for acquiring the maxillofacial CBCT data to be identified; a bone structure positioning module for decomposing the maxillofacial CBCT data into image block sequences of different anatomical regions according to the anatomical structure; a fracture discrimination module for inputting each image block in the image block sequence into the corresponding trained fracture discrimination model to obtain a fracture judgment result sequence for the image blocks; and a recognition output module for determining that the fracture judgment result sequence satisfies the fracture judgment conditions and outputting the corresponding image blocks and fracture positions.
In a third aspect of the present invention, there is also provided a terminal device including a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the above recognition method for maxillofacial fractures based on a CBCT database.
In a fourth aspect of the present invention, there is also provided a computer-readable storage medium having stored therein instructions, which when run on a computer, cause the computer to execute the aforementioned recognition method of maxillofacial fractures based on a CBCT database.
The technical solution provided by the invention achieves the following beneficial effects. The CBCT database-based artificial intelligence diagnosis of maxillofacial fractures is built on a convolutional neural network algorithm: deep learning training is performed on maxillofacial fracture CBCT data, the model is verified on a test set, and artificial intelligence learns from human experience to assist in diagnosing frostbite and maxillofacial trauma, forming an intelligent diagnosis platform and improving the stability and responsiveness of diagnosis and treatment. The system overcomes the dependence of traditional diagnosis and treatment on professional doctors and dedicated treatment venues, making immediate diagnosis and treatment possible at accident scenes that require quick response, accurate judgment and timely treatment, such as major traffic accidents and large-scale sports events.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic flow chart of a recognition method for maxillofacial fracture based on CBCT database according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a recognition device for maxillofacial fracture based on CBCT database according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a terminal device provided in an embodiment of the invention;
fig. 4 is a diagram illustrating steps of the recognition method for maxillofacial fracture based on CBCT database according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples are intended to illustrate and explain the invention, not to limit it. In addition, the embodiments of the present invention and the features of the embodiments may be combined with each other provided there is no conflict.
FIG. 1 is a schematic flow chart of the recognition method for maxillofacial fractures based on a CBCT database according to an embodiment of the present invention. As shown in FIG. 1, in a first aspect of the present invention there is provided a method for identifying maxillofacial fractures based on a CBCT database, the method comprising: decomposing the maxillofacial CBCT data to be identified into image block sequences of different anatomical regions according to the anatomical structure; inputting each image block in the image block sequence into the corresponding trained fracture discrimination model to obtain a fracture judgment result sequence for the image blocks; and determining that the fracture judgment result sequence satisfies the fracture judgment conditions, and outputting the corresponding image blocks and fracture positions.
In this way, the maxillofacial CBCT data are decomposed into several structural components along anatomical lines, and the presence of a fracture is identified by judging whether each individual component is fractured, so that the whole maxillofacial CBCT volume can be evaluated and the fracture position identified. This embodiment decomposes the CBCT data by region and judges each region with a trained machine learning model, which gives fast processing and little information loss while avoiding the low accuracy that results from judging the maxillofacial CBCT data as a whole. The algorithm of this embodiment mainly comprises two models: a bone structure positioning model and a fracture discrimination model. The bone structure positioning model is mainly used to find the specific positions and contours of the maxilla and mandible and to extract the different key parts; the extracted parts are then input into the fracture discrimination model to identify the fracture type.
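As a concrete illustration of this two-stage design, the following Python sketch wires the two stages together. It is a minimal sketch under stated assumptions: locate_regions, fracture_models and fracture_condition are hypothetical placeholders standing in for the bone structure positioning model, the per-region discrimination models and the judgment rule described later in this text, not functions defined by the patent.

```python
# Minimal sketch of the two-stage recognition pipeline: stage 1 locates the
# anatomical regions, stage 2 judges each region's image blocks for fracture.
# All names are illustrative placeholders, not the patent's actual API.
from typing import Callable, Dict, List

def recognize_fractures(cbct_volume,
                        locate_regions: Callable,                 # bone structure positioning stage
                        fracture_models: Dict[str, object],       # one discrimination model per region
                        fracture_condition: Callable[[List[int]], bool]) -> Dict[str, List[int]]:
    """Return {region name: indices of positive slices} for every region whose
    per-slice binary results satisfy the fracture judgment condition."""
    results = {}
    region_blocks = locate_regions(cbct_volume)   # {region: [2D image blocks, layer by layer]}
    for region, blocks in region_blocks.items():
        model = fracture_models[region]
        flags = [int(model.predict(block)) for block in blocks]   # 1 = fracture, 0 = no fracture
        if fracture_condition(flags):             # e.g. more than one consecutive positive slice
            results[region] = [i for i, f in enumerate(flags) if f == 1]
    return results
```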
In an embodiment of the present invention, decomposing the maxillofacial CBCT data to be identified into image block sequences of different anatomical regions according to the anatomical structure includes: inputting the CBCT image sequence of the CBCT data into the trained bone structure positioning model and generating a sequence of detection rectangular boxes; locating the tomographic slice range of each anatomical region according to the rectangular boxes of the different anatomical regions; and extracting the image block sequence of each anatomical region layer by layer from the CBCT image sequence within the located slice range. Specifically, the CBCT image sequence is fed slice by slice into the trained detection model, which outputs a sequence of detection rectangular boxes; the slice ranges of 8 anatomical regions are located according to the detected boxes of the different categories, and the 8 anatomical regions are then extracted layer by layer from the CBCT image sequence within the located slice ranges. Because the zygoma, mandibular body, mandibular angle, ascending ramus, condylar process and coronoid process are bilaterally symmetric, 13 image block sequences are finally extracted, as sketched below.
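The following sketch shows one way such a layer-by-layer extraction could be implemented from per-slice detection boxes; the box format (x1, y1, x2, y2) and the dictionary layout are assumptions made for illustration, not the patent's data format.

```python
# Illustrative extraction of per-region image block sequences from per-slice
# detection boxes (assumed per-slice format: {region_name: (x1, y1, x2, y2)}).
from collections import defaultdict
from typing import Dict, List, Tuple

import numpy as np

def extract_region_blocks(cbct_slices: List[np.ndarray],
                          detections: List[Dict[str, Tuple[int, int, int, int]]]
                          ) -> Dict[str, List[np.ndarray]]:
    # 1) locate the tomographic slice range of every anatomical region
    slice_range: Dict[str, List[int]] = {}
    for z, boxes in enumerate(detections):
        for region in boxes:
            lo, hi = slice_range.get(region, (z, z))
            slice_range[region] = [min(lo, z), max(hi, z)]
    # 2) crop each region layer by layer inside its slice range
    blocks: Dict[str, List[np.ndarray]] = defaultdict(list)
    for region, (lo, hi) in slice_range.items():
        for z in range(lo, hi + 1):
            box = detections[z].get(region)
            if box is None:
                continue                          # region not detected on this slice
            x1, y1, x2, y2 = box
            blocks[region].append(cbct_slices[z][y1:y2, x1:x2])
    return dict(blocks)
```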
In an embodiment provided by the present invention, the bone structure positioning model is a convolutional neural network model whose input is the CBCT image sequence of the CBCT data and whose output is the rectangular box detection result corresponding to each image in the CBCT image sequence. Specifically, this embodiment can adopt the RetinaNet convolutional neural network structure: the input of the model is a single image from the CBCT image sequence, and the output is the corresponding rectangular box detection result. The RetinaNet detection model may be replaced by other object detection models such as SSD, YOLO or Faster R-CNN.
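For reference, a detector of this kind can be instantiated with torchvision's RetinaNet implementation, as sketched below. The class count (8 region types plus background), the 512x512 slice size and the replication of the gray-scale slice to 3 channels are assumptions, and the exact constructor arguments vary across torchvision versions.

```python
# Sketch: a RetinaNet detector for bone-structure localization, built on
# torchvision's reference implementation (not the patent's own code).
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

NUM_CLASSES = 9  # assumed: 8 maxillofacial anatomical region types + background

model = retinanet_resnet50_fpn(num_classes=NUM_CLASSES)  # ResNet-50 FPN backbone
model.eval()

# One normalized CBCT slice, replicated to 3 channels for the RGB-style backbone.
cbct_slice = torch.rand(1, 512, 512).repeat(3, 1, 1)
with torch.no_grad():
    prediction = model([cbct_slice])[0]   # dict with 'boxes', 'labels', 'scores'
print(prediction["boxes"].shape, prediction["labels"].shape)
```

Swapping in SSD, YOLO or Faster R-CNN only changes this construction step; the surrounding pipeline stays the same.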
In an embodiment of the present invention, the trained bone structure positioning model is obtained through the following steps: collecting CBCT data with maxillofacial fracture characteristics and annotating them according to the anatomical structure to serve as a first training sample; dividing the first training sample into a training set and a validation set; presetting the parameters of the bone structure positioning model and then iteratively training it on the training set with a gradient descent algorithm; and determining the optimal parameters of the bone structure positioning model on the validation set. The specific steps are as follows:
and establishing a sample database. Various fracture subjects are recruited in the maxillofacial surgery outpatient clinic, project contents are informed, informed consent is given, and the two clinics are switched to shoot a large-visual-field CBCT film. The CBCT data of the patient is labeled. And obtaining the mask mark and the mark of the fracture type layer by layer. Exporting the DICOM files of all labels and the label table. This step can be further divided into the following three substeps:
1-1: the standard was included. Chinese 18-80 years old with a history of jaw face trauma fracture within half a month. The inclusion criteria were: 1) the age of the Han adults is 18-80 years old; 2) the history of jaw face trauma fracture is kept within half a month; 3) the maxillofacial region has no history of serious tumors; 4) no systemic bone metabolism disease; 5) no development deformity of maxillofacial region; 6) there was no history of radiotherapy and chemotherapy.
The fracture types mainly include: left/right coronoid process fracture, left/right ascending ramus fracture, left/right mandibular angle fracture, left/right mandibular body fracture, left/right condylar fracture, chin fracture, alveolar fracture, maxillary fracture and left/right zygomatic fracture. The exclusion criteria are: 1) congenital facial asymmetry such as severe jaw deviation, nasal septum deviation or microtia; 2) a history of surgery on the hard tissues of the maxillofacial region; 3) old fractures or greenstick fractures; 4) poor general condition, unable to tolerate referral or to sit up; 5) women in the first trimester (1-3 months) of pregnancy.
1-2: an annotation is obtained. The content of the annotation mainly comprises mask annotation and fracture type annotation layer by layer. And (3) carrying out layer-by-layer labeling, importing the DICOM format of CBCT data of the subject into Mimics Medical 21.0 software for segmentation, and exporting the segmented zygomatic maxilla and mandible mask in the DICOM format. Observing a CBCT fault containing a fracture line, marking and exporting; the fracture type labeling divides the fracture type into 14 types, corresponding to each case, the fracture type of a positive result is labeled as '1', a negative result is labeled as '0', and all labeled DICOM files and labeling tables are exported.
1-3: and exporting the DICOM format file of the label and the original CBCT. The algorithm mainly comprises two models: the bone structure positioning model is mainly used for finding specific positions and contours of the maxilla and the mandible, extracting different key parts, and then inputting the extracted different parts into the discrimination model to recognize the fracture type.
The training samples are formed through the above steps, and the model is then trained on them: 10% of the training samples (40 samples) are used as the validation set and the rest as the training set; the model parameters are initialized with parameters pre-trained on the large natural image dataset ImageNet; the model is iteratively trained with the SGD gradient descent algorithm; and the optimal parameters of the model are determined according to the mAP value on the validation set.
In one embodiment of the present invention, before dividing the training samples into a training set and a validation set, the method further includes mapping the pixel gray values of the CBCT image sequence to a preset pixel gray value range. In an embodiment provided by the present invention, mapping the pixel gray values of the CBCT image to the preset range includes mapping them linearly to [0, 255] with the formula:
y = (x - x_min) / (x_max - x_min) * 255
where x is the original pixel gray value, y is the mapped value, x_min is the minimum pixel gray value of the CBCT volume and x_max is the maximum pixel gray value of the CBCT volume. In this embodiment the gray values are compressed to 8 bits, so that the training images are processed consistently in gray level.
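A direct implementation of this mapping, applied to a whole CBCT volume, is shown below; the constant-volume guard is an addition for robustness, not part of the formula above.

```python
import numpy as np

def normalize_cbct(volume: np.ndarray) -> np.ndarray:
    """Map CBCT gray values linearly to [0, 255] and compress them to 8 bits."""
    x_min = float(volume.min())
    x_max = float(volume.max())
    if x_max == x_min:                            # degenerate (constant) volume
        return np.zeros_like(volume, dtype=np.uint8)
    y = (volume.astype(np.float32) - x_min) / (x_max - x_min) * 255.0
    return y.astype(np.uint8)
```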
In one embodiment of the present invention, the fracture discrimination model is one of EfficientNet 3, ResNet, DenseNet and 3D-ResNet. These models all have convolutional neural network structures; the input of each model is a single-region image block, and the output is the corresponding binary discrimination result.
The fracture discrimination model in the previous embodiment also needs to be trained. Optionally, the trained fracture discrimination model is obtained through the following steps: determining the model structure of the fracture discrimination model; decomposing the first training sample into image block sequences of different anatomical regions according to the anatomical structure, where the image block sequence of each anatomical region forms a partition training sample; and training the fracture discrimination model of each anatomical region as follows: dividing the partition training samples corresponding to the anatomical region into a training set and a validation set; presetting the parameters of the fracture discrimination model and then iteratively training it on the training set with a gradient descent algorithm; and determining the optimal parameters of the fracture discrimination model on the validation set. Specifically, the fracture detection step comprises training deep learning discrimination models on the extracted region image block sequences and their corresponding labels, and automatically identifying the various fracture types. The model training of the fracture discrimination model comprises: taking 80% of the extracted region image block sequences as the training set and 20% as the validation set; initializing the model parameters with parameters pre-trained on the large natural image dataset ImageNet; iteratively training the model with the Adam gradient descent algorithm; and determining the optimal parameters of the model according to the F1 value on the validation set, as sketched below.
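The sketch below illustrates a training loop of the kind just described: ImageNet-pretrained weights, the Adam optimizer, and selection of the best checkpoint by validation F1. The ResNet-18 backbone, the two-class head and the data loaders (assumed to deliver 3-channel image blocks with binary labels) are illustrative assumptions, not the patent's fixed choices.

```python
# Hedged sketch of discrimination-model training: ImageNet-pretrained weights,
# Adam optimizer, model selection by F1 score on the validation set.
import torch
import torch.nn as nn
from torchvision.models import resnet18
from sklearn.metrics import f1_score

def train_region_classifier(train_loader, val_loader, epochs=30, lr=1e-4, device="cuda"):
    model = resnet18(weights="IMAGENET1K_V1")          # parameters pre-trained on ImageNet
    model.fc = nn.Linear(model.fc.in_features, 2)      # binary head: fracture / no fracture
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    best_f1, best_state = -1.0, None
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

        # choose the best parameters by F1 on the validation set
        model.eval()
        preds, targets = [], []
        with torch.no_grad():
            for images, labels in val_loader:
                logits = model(images.to(device))
                preds.extend(logits.argmax(dim=1).cpu().tolist())
                targets.extend(labels.tolist())
        f1 = f1_score(targets, preds)
        if f1 > best_f1:
            best_f1 = f1
            best_state = {k: v.detach().cpu().clone() for k, v in model.state_dict().items()}

    model.load_state_dict(best_state)
    return model, best_f1
```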
In one embodiment of the present invention, determining that the fracture judgment result sequence satisfies the fracture judgment conditions and outputting the corresponding image blocks and fracture positions includes: acquiring the number and positional relationship of the 'fracture' results in the fracture judgment result sequence; and, upon determining that the number and positional relationship conform to fracture characteristics, outputting the anatomical region whose judgment result is 'fracture' as the fracture position and outputting the corresponding image blocks. The specific fracture detection results are generated as follows. The 13 region image block sequences are sequentially input into the trained discrimination models, which output binary judgment result sequences. For a bilaterally symmetric bone structure, the judgment results form a left and a right binary sequence (0 means no fracture, 1 means fracture); for a bone structure without left-right symmetry, the judgment results form a single binary sequence; 13 binary sequences are thus obtained. For any binary sequence, the following rule is made according to the continuity of fractures and on the premise of ensuring high sensitivity: if the number of consecutive 1 elements in the binary sequence is greater than 1, the recognition result for that fracture type in this sample is positive and the corresponding positive-layer images are output; otherwise the result is negative.
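The continuity rule reduces to a check for a run of consecutive positive slices in the binary sequence; a minimal sketch (compatible with the fracture_condition placeholder used in the pipeline sketch above):

```python
# The continuity rule: a region's binary result sequence is positive only if it
# contains at least two consecutive "fracture" slices, so isolated single-slice
# positives are ignored.
from typing import List

def fracture_condition(flags: List[int], min_run: int = 2) -> bool:
    run = 0
    for f in flags:
        run = run + 1 if f == 1 else 0
        if run >= min_run:
            return True
    return False

print(fracture_condition([0, 0, 1, 1, 0]))   # True  -> report the region as fractured
print(fracture_condition([0, 1, 0, 1, 0]))   # False -> isolated positives are discarded
```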
Further, the test group data were labeled by two resident physicians of maxillofacial surgery to serve as the 'gold standard' for fracture diagnosis. The machine output for the test group data is compared with the gold standard, and the sensitivity and specificity of fracture diagnosis for each site are calculated.
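Sensitivity and specificity follow directly from the case-level confusion counts against the gold standard; a minimal sketch with made-up example labels (not study data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Case-level sensitivity and specificity for one fracture site (1 = fracture)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Illustrative labels only:
print(sensitivity_specificity([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))  # about (0.667, 1.0)
```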
In one embodiment of the invention, a CBCT database-based maxillofacial fracture recognition apparatus is also provided. FIG. 2 is a schematic structural diagram of the recognition apparatus for maxillofacial fractures based on a CBCT database according to an embodiment of the present invention. As shown in FIG. 2, the apparatus comprises: an image input module for acquiring the maxillofacial CBCT data to be identified; a bone structure positioning module for decomposing the maxillofacial CBCT data into image block sequences of different anatomical regions according to the anatomical structure; a fracture discrimination module for inputting each image block in the image block sequence into the corresponding trained fracture discrimination model to obtain a fracture judgment result sequence for the image blocks; and a recognition output module for determining that the fracture judgment result sequence satisfies the fracture judgment conditions and outputting the corresponding image blocks and fracture positions.
For the specific definition of the CBCT database-based maxillofacial fracture recognition apparatus, reference may be made to the above definition of the corresponding recognition method, which is not repeated here. The modules in the above apparatus may be implemented in whole or in part by software, hardware or a combination thereof. The modules may be embedded in hardware in, or independent of, the processor of the computer device, or they may be stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In an embodiment of the present invention, there is also provided a terminal device, including a memory, a processor and a computer program stored in the memory and operable on the processor, wherein the processor executes the computer program to implement the steps of the recognition method for maxillofacial fracture based on CBCT database.
FIG. 3 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in FIG. 3, the terminal device 10 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The terminal device 10 may include, but is not limited to, a processor 100 and a memory 101. Those skilled in the art will appreciate that FIG. 3 is merely an example of the terminal device 10 and does not constitute a limitation of the terminal device 10, which may include more or fewer components than shown, combine some components, or use different components; for example, the terminal device may also include input/output devices, network access devices, buses and the like.
The processor 100 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 101 may be an internal storage unit of the terminal device 10, such as a hard disk or memory of the terminal device 10. The memory 101 may also be an external storage device of the terminal device 10, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card provided on the terminal device 10. Further, the memory 101 may include both an internal storage unit and an external storage device of the terminal device 10. The memory 101 is used to store the computer program 102 and other programs and data required by the terminal device 10, and may also be used to temporarily store data that has been or is to be output.
FIG. 4 is a diagram illustrating the steps of the recognition method for maxillofacial fractures based on a CBCT database according to an embodiment of the present invention. As shown in FIG. 4, the embodiment of the invention provides a CBCT database-based maxillofacial fracture identification method and apparatus to address the complexity and low accuracy of fracture identification in CBCT data. The embodiments provided by the invention are applied in a medical image processing system.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (7)

1. A recognition method of maxillofacial fracture based on CBCT database, characterized in that the method comprises:
decomposing the maxillofacial CBCT data to be identified into image block sequences of different anatomical regions according to an anatomical structure;
inputting each image block in the image block sequence into a trained fracture judgment model of a corresponding anatomical region respectively to obtain a fracture judgment result of the image block;
determining that a sequence formed by the fracture judgment results meets fracture judgment conditions, and outputting corresponding image blocks and fracture positions;
the trained fracture discrimination model is obtained through the following steps:
determining a model structure of the fracture discrimination model;
decomposing the first training sample into image block sequences of different anatomical regions according to the anatomical structure, wherein the image block sequence of each anatomical region forms a partition training sample;
the fracture discrimination model of each different anatomical region is trained by adopting the following steps:
dividing the partition training samples corresponding to the anatomical region into a training set and a verification set;
after the parameters of the fracture discrimination model are preset, iterative training is carried out on the fracture discrimination model by adopting training samples in the training set and a gradient descent algorithm;
determining the optimal parameters of the fracture discrimination model by adopting the training samples in the verification set;
wherein decomposing the maxillofacial CBCT data to be identified into image block sequences of different anatomical regions according to an anatomical structure comprises:
inputting the image sequence of the CBCT data into a trained bone structure positioning model;
generating a detection rectangular frame sequence;
locating the tomographic slice range of each anatomical region according to the rectangular frames of the different anatomical regions;
extracting the image block sequence of each anatomical region layer by layer from the CBCT image sequence within the located slice range;
determining that the fracture judgment result sequence meets fracture judgment conditions, and outputting corresponding image blocks and fracture positions, wherein the method comprises the following steps:
acquiring the number and positional relationship of the fracture judgment results that are 'fracture' in the fracture judgment result sequence;
determining that the number and positional relationship conform to fracture characteristics,
and outputting the anatomical region whose fracture judgment result is 'fracture' as the fracture position, and outputting the corresponding image block.
2. The method of claim 1, wherein the bone structure localization model is a convolutional neural network model, wherein the convolutional neural network model has an input of the image sequence of CBCT data and an output of a rectangular box detection result corresponding to each image in the image sequence of CBCT data.
3. The method according to claim 2, wherein the trained bone structure localization model is obtained by:
marking CBCT data with maxillofacial fracture characteristics and marking the CBCT data according to anatomical structures to serve as a first training sample;
dividing the first training sample into a training set and a validation set;
after the parameters of the bone structure positioning model are preset, iterative training is carried out on the bone structure positioning model by adopting the training samples in the training set and a gradient descent algorithm;
and determining the optimal parameters of the bone structure positioning model by adopting the training samples in the verification set.
4. The method of claim 3, wherein prior to separating the first training sample into a training set and a validation set, the method further comprises:
and mapping the pixel gray value of the image sequence of the CBCT data to a preset pixel gray value range.
5. The method of claim 1, wherein the fracture discrimination model is: one of EfficientNet 3, ResNet, DenseNet, and 3D-ResNet.
6. An apparatus for identifying maxillofacial fractures based on a CBCT database, the apparatus comprising:
the image input module is used for acquiring the CBCT data of the maxillofacial region to be identified;
the bone structure positioning module is used for decomposing the maxillofacial CBCT data into image block sequences of different anatomical regions according to anatomical structures, and comprises:
inputting the image sequence of the CBCT data into a trained bone structure positioning model; generating a detection rectangular frame sequence; locating the tomographic slice range of each anatomical region according to the rectangular frames of the different anatomical regions; and extracting the image block sequence of each anatomical region layer by layer from the CBCT image sequence within the located slice range;
the fracture judgment module is used for respectively inputting each image block in the image block sequence into a trained fracture judgment model of a corresponding anatomical region to obtain a fracture judgment result sequence of the image block; the trained fracture discrimination model is obtained through the following steps: determining a model structure of the fracture discrimination model; decomposing the first training sample into image block sequences of different anatomical regions according to the anatomical structure, wherein the image block sequence of each anatomical region forms a partition training sample; the fracture discrimination model of each different anatomical region is trained by adopting the following steps: dividing the partition training samples corresponding to the anatomical region into a training set and a verification set; after the parameters of the fracture discrimination model are preset, iterative training is carried out on the fracture discrimination model by adopting training samples in the training set and a gradient descent algorithm; determining the optimal parameters of the fracture discrimination model by adopting the training samples in the verification set;
the identification output module is used for determining that the fracture judgment result sequence meets the fracture judgment condition and outputting a corresponding image block and fracture position, and comprises: acquiring the number and positional relationship of the fracture judgment results that are 'fracture' in the fracture judgment result sequence; and determining that the number and positional relationship conform to fracture characteristics, outputting the anatomical region whose fracture judgment result is 'fracture' as the fracture position, and outputting the corresponding image block.
7. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the CBCT database based maxillofacial fracture identification method of any one of claims 1 to 5.
CN202011046290.5A 2020-09-29 2020-09-29 Recognition method and device for maxillofacial fracture based on CBCT database and terminal equipment Active CN111967539B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011046290.5A CN111967539B (en) 2020-09-29 2020-09-29 Recognition method and device for maxillofacial fracture based on CBCT database and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011046290.5A CN111967539B (en) 2020-09-29 2020-09-29 Recognition method and device for maxillofacial fracture based on CBCT database and terminal equipment

Publications (2)

Publication Number Publication Date
CN111967539A CN111967539A (en) 2020-11-20
CN111967539B true CN111967539B (en) 2021-08-31

Family

ID=73386806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011046290.5A Active CN111967539B (en) 2020-09-29 2020-09-29 Recognition method and device for maxillofacial fracture based on CBCT database and terminal equipment

Country Status (1)

Country Link
CN (1) CN111967539B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113488161A (en) * 2021-07-05 2021-10-08 中国人民解放军总医院第一医学中心 Temporomandibular joint disorder treatment regimen recommendation apparatus, device and storage medium
CN116862869B (en) * 2023-07-07 2024-04-19 东北大学 Automatic detection method for mandible fracture based on mark point detection

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242865A (en) * 2018-09-26 2019-01-18 上海联影智能医疗科技有限公司 Medical image auto-partition system, method, apparatus and storage medium based on multichannel chromatogram
CN110415261A (en) * 2019-08-06 2019-11-05 山东财经大学 A kind of the expression animation conversion method and system of subregion training
CN110895812A (en) * 2019-11-28 2020-03-20 北京推想科技有限公司 CT image detection method and device, storage medium and electronic equipment
CN110974288A (en) * 2019-12-26 2020-04-10 北京大学口腔医学院 Periodontal disease CBCT longitudinal data recording and analyzing method
US20200151921A1 (en) * 2018-11-14 2020-05-14 Carestream Dental Llc Methods for metal artifact reduction in cone beam reconstruction
CN111419372A (en) * 2020-04-03 2020-07-17 河北医科大学第二医院 Mandible resetting device
CN111462071A (en) * 2020-03-30 2020-07-28 浙江核新同花顺网络信息股份有限公司 Image processing method and system
CN111667474A (en) * 2020-06-08 2020-09-15 杨天潼 Fracture identification method, apparatus, device and computer readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105342708B (en) * 2015-12-14 2017-10-24 四川大学 Digitlization bite guide and its method for reconstructing based on CT and CBCT fused datas
CN109635669B (en) * 2018-11-19 2021-06-29 北京致远慧图科技有限公司 Image classification method and device and classification model training method and device
CN110503652B (en) * 2019-08-23 2022-02-25 北京大学口腔医学院 Method and device for determining relationship between mandible wisdom tooth and adjacent teeth and mandible tube, storage medium and terminal
CN111325745B (en) * 2020-03-09 2023-08-25 北京深睿博联科技有限责任公司 Fracture region analysis method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN111967539A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
Putra et al. Current applications and development of artificial intelligence for digital dental radiography
US10085707B2 (en) Medical image information system, medical image information processing method, and program
CN111967539B (en) Recognition method and device for maxillofacial fracture based on CBCT database and terminal equipment
CN112150472A (en) Three-dimensional jaw bone image segmentation method and device based on CBCT (cone beam computed tomography) and terminal equipment
CN111798976A (en) DDH artificial intelligence auxiliary diagnosis method and device
JP7080304B2 (en) Learning support device, learning support method, learning support program, region of interest discrimination device, region of interest discrimination method, region of interest discrimination program and trained model
Lang et al. Automatic localization of landmarks in craniomaxillofacial CBCT images using a local attention-based graph convolution network
CN112150473A (en) Three-dimensional jaw bone image segmentation modeling method and device based on CT and terminal equipment
WO2023029348A1 (en) Image instance labeling method based on artificial intelligence, and related device
Xu et al. A deep-learning aided diagnostic system in assessing developmental dysplasia of the hip on pediatric pelvic radiographs
CN111967540B (en) Maxillofacial fracture identification method and device based on CT database and terminal equipment
Radwan et al. Artificial intelligence‐based algorithm for cervical vertebrae maturation stage assessment
WO2022247007A1 (en) Medical image grading method and apparatus, electronic device, and readable storage medium
Porras et al. Personalized optimal planning for the surgical correction of metopic craniosynostosis
Chen et al. Detection of various dental conditions on dental panoramic radiography using Faster R-CNN
Banumathi et al. Diagnosis of dental deformities in cephalometry images using support vector machine
CN114004940B (en) Non-rigid generation method, device and equipment of face defect reference data
Hwang et al. SinusC-Net for automatic classification of surgical plans for maxillary sinus augmentation using a 3D distance-guided network
Tu et al. Quantitative evaluation of local head malformations from 3 dimensional photography: application to craniosynostosis
El-Fegh et al. Automated 2-D cephalometric analysis of X-ray by image registration approach based on least square approximator
Millan-Arias et al. General Cephalometric Landmark Detection for Different Source of X-Ray Images
Tanikawa et al. Machine learning for facial recognition in orthodontics
Sadr et al. Deep learning for tooth identification and enumeration in panoramic radiographs
Wang et al. Automated segmentation of CBCT image with prior-guided sequential random forest
Niño-Sandoval et al. Biotypic classification of facial profiles using discrete cosine transforms on lateral radiographs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant