CN110610198A - Mask RCNN-based automatic oral CBCT image mandibular neural tube identification method - Google Patents
- Publication number
- CN110610198A (application CN201910776183.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- mask
- mandibular
- mandibular nerve
- matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
A Mask RCNN-based method for automatically identifying the mandibular nerve canal in oral CBCT images collects oral CBCT image data, preprocesses the coronal slices, removes slices in which the mandibular nerve foramen is not visible, compresses the image format, and manually marks points along the mandibular nerve foramen; it then generates anchor boxes, locates rectangular boxes, obtains a binary mask for each target instance, and builds a neural network model; trains the model; and identifies and displays the mandibular nerve foramen with the trained model. The invention uses Mask RCNN to automatically identify the mandibular nerve canal in CBCT images. Machine recognition replaces manual identification of the nerve foramina in coronal views, saving labor cost, improving the stability of the generated mandibular nerve canal trajectory, and achieving good recognition speed and accuracy.
Description
Technical Field
The invention relates to the field of medical imaging and the technical field of image recognition, and in particular to a Mask RCNN-based method for automatically identifying the mandibular nerve canal in oral CBCT images.
Background
In recent years, demand for dental implants in China has grown year by year: from hundreds of thousands of implants placed in 2011 to millions today, China has become one of the fastest-growing dental implant markets. By conservative estimates, the potential market for dental implants may reach 400 billion RMB. In clinical practice, a key concern for dentists is avoiding compression of the dental nerve, particularly the mandibular nerve canal. Therefore, before dental implantation, an oral CT examination must be performed to determine the location of the nerve canal and ensure a successful implant.
CBCT (cone beam computed tomography) is commonly used to obtain dental images. It yields higher-resolution images and incorporates metal-artifact correction techniques that are widely accepted in the industry. Existing identification methods are mainly manual: the dental nerve canal is identified directly on a panoramic image synthesized from the CBCT volume, or indirectly via the nerve foramina on CBCT coronal slices. Medical images tend to be noisy, so even manual identification requires considerable expertise and skill, which adds hidden labor cost. The manual approach therefore brings a series of problems: uncertain errors, high cost, and low efficiency.
Convolutional neural networks (CNNs) have proven highly effective for image pattern recognition, and residual networks (ResNet) mitigate the degradation that otherwise accompanies increasing network depth. Mask RCNN (Mask Region-based Convolutional Neural Network) is an instance segmentation model built on convolutional and residual networks; it is a proven architecture that excels at object detection and is one of the mainstream deep learning frameworks today. The invention is a Mask RCNN-based method for automatically identifying the mandibular nerve canal: it automatically identifies the mandibular nerve foramen in the two-dimensional coronal slices of an oral CBCT scan and assembles this information into a three-dimensional mandibular nerve canal trajectory, yielding the position of the canal in a three-dimensional view of the oral cavity. Mask RCNN identification is fast and its results are accurate. Compared with networks such as Faster RCNN (Faster Region-based Convolutional Neural Network) and YOLO (You Only Look Once), it can delineate the contour of each target instance; compared with existing mandibular nerve canal identification methods, it extracts image features automatically and offers low cost and high stability.
Disclosure of Invention
To overcome the problems of existing manual mandibular nerve canal identification, the invention provides a Mask RCNN-based method for automatically identifying the mandibular nerve canal that has low cost and high identification stability.
The technical solution adopted by the invention to solve this problem is as follows:
A Mask RCNN-based method for automatically identifying the mandibular nerve canal in oral CBCT images comprises the following steps:
Step one: collect oral CBCT image data and preprocess its coronal slices: denoise by removing coronal slices of the CBCT sequence in which the mandibular nerve foramen is not visible; compress the format of the remaining images to reduce training complexity; manually mark points along the outer contour of the mandibular nerve foramen with a point-marking tool, take the point coordinates as label data, and store the labels in dictionary form;
Step two: build the neural network model: use a feature pyramid network to obtain multi-scale feature maps as input to the region proposal network; generate anchor boxes, classify, locate rectangular boxes, and finally obtain a binary mask for each target instance; the backbone adopts a ResNet-101 network structure;
Step three: train the model: fix the hyper-parameters outside the region proposal network during training to minimize the training cost of Mask RCNN; generate masks from the label data stored in dictionary form, determine the rectangular boxes, and set the classification categories to mandibular nerve foramen and background;
Step four: identify the mandibular nerve foramen: feed the CBCT image to be processed into the trained model to obtain the classification and rectangular-box values, then the mask values, and finally determine the position of the mandibular nerve foramen from the mask and the corresponding denoised input image;
Step five: display the mandibular nerve canal: from the obtained correspondence between coronal slices and mandibular nerve foramina, derive the three-dimensional position of the mandibular nerve canal and display it in the three-dimensional view, thereby achieving automatic identification of the mandibular nerve canal.
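Step five's assembly of per-slice detections into a three-dimensional canal track can be sketched as follows. The function name, the use of per-slice mask centroids as foramen positions, and the use of the coronal slice index as the third coordinate are illustrative assumptions; the patent only states that coronal-slice foramen information is combined into a 3-D trajectory.

```python
import numpy as np

def canal_trajectory(detections):
    """Stack per-slice foramen positions into a 3-D point sequence.

    `detections` maps a coronal slice index to the (row, col) centroid of
    the foramen mask detected on that slice; the slice index supplies the
    third spatial coordinate. Sorting by slice index orders the points
    along the canal.
    """
    points = [(row, col, idx) for idx, (row, col) in sorted(detections.items())]
    return np.asarray(points, dtype=float)
```

Connecting consecutive rows of the returned array, for example as a polyline in the 3-D viewer, would give the displayed canal trajectory.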
Further, in step three, the mask is generated from the label data as follows: create a matrix of the same size as the original image with all values 0, then set to 1 the positions corresponding to the point coordinates in the label data; the resulting matrix is the binary mask. The rectangular box is determined as follows: take the minimal rectangle enclosing the mask, with h the rectangle's height, w its width, and (x, y) the coordinates of its lower-right vertex, so that the rectangular box is the four-dimensional coordinate information (h, w, x, y).
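The mask-generation and box-determination rules above can be sketched in NumPy as follows. `make_mask` and `bounding_box` are illustrative names, and reading (x, y) as the (row, column) of the lower-right vertex is an assumption based on the description.

```python
import numpy as np

def make_mask(points, height, width):
    """Binary mask from label points: an all-zero matrix the size of the
    original image, with the labeled contour coordinates set to 1."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for row, col in points:
        mask[row, col] = 1
    return mask

def bounding_box(mask):
    """Minimal rectangle enclosing the mask, returned as the patent's
    four-dimensional box (h, w, x, y): h and w are the height and width
    of the rectangle, (x, y) the lower-right vertex."""
    rows, cols = np.nonzero(mask)
    h = int(rows.max() - rows.min() + 1)
    w = int(cols.max() - cols.min() + 1)
    return h, w, int(rows.max()), int(cols.max())
```

For example, three labeled points at (2, 3), (4, 5) and (3, 4) yield a mask whose enclosing box is (3, 3, 4, 5).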
further, in the fourth step, the method for obtaining the position of the mandibular nerve foramen comprises the following steps: because the size of the mask matrix is consistent with that of the image matrix of the corresponding coronal image, the point coordinate with the median value of 1 in the mask matrix can be directly corresponding to the coronal image, the coordinate points are connected in sequence, and the connected closed area is the obtained position of the mandibular nerve foramen.
The technical concept of the invention is as follows: use Mask RCNN to automatically identify the mandibular nerve canal in CBCT images, obtaining the canal's position in the three-dimensional view by identifying the mandibular nerve foramen in each two-dimensional coronal slice.
The beneficial effects of the invention are: machine recognition replaces manual identification of the nerve foramina in coronal images, improving the stability of the generated mandibular nerve canal trajectory while achieving good recognition speed and accuracy.
Drawings
FIG. 1 is a flow chart.
Fig. 2 is a coronal view.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 and 2, a Mask RCNN-based method for identifying the dental nerve canal comprises the following steps:
Step one: collect oral CBCT image data and preprocess its coronal slices: denoise by removing coronal slices of the CBCT sequence in which the mandibular nerve foramen is not visible; compress the format of the remaining images to reduce training complexity; manually mark points along the outer contour of the mandibular nerve foramen with a point-marking tool, take the coordinates of the foramen markers shown in fig. 2 as label data, and store the labels in dictionary form;
Step two: build the neural network model: use a feature pyramid network to obtain multi-scale feature maps as input to the region proposal network; generate anchor boxes, classify, locate rectangular boxes, and finally obtain a binary mask for each target instance; the backbone adopts a ResNet-101 network structure;
Step three: train the model: fix the hyper-parameters outside the region proposal network during training to minimize the training cost of Mask RCNN; generate masks from the label data stored in dictionary form, determine the rectangular boxes, and set the classification categories to mandibular nerve foramen and background;
Step four: identify the mandibular nerve foramen: feed the CBCT image to be processed into the trained model to obtain the classification and rectangular-box values, then the mask values, and finally determine the position of the mandibular nerve foramen from the mask and the corresponding denoised input image;
Step five: display the mandibular nerve canal: from the obtained correspondence between coronal slices and mandibular nerve foramina, derive the three-dimensional position of the mandibular nerve canal and display it in the three-dimensional view, thereby achieving automatic identification of the mandibular nerve canal; the overall flow is shown in fig. 1.
Further, in step three, the mask is generated from the label data as follows: create a matrix of the same size as the original image with all values 0, then set to 1 the positions corresponding to the point coordinates in the label data; the resulting matrix is the binary mask. The rectangular box is determined as follows: take the minimal rectangle enclosing the mask, with h the rectangle's height, w its width, and (x, y) the coordinates of its lower-right vertex, so that the rectangular box is the four-dimensional coordinate information (h, w, x, y).
further, in the fourth step, the method for obtaining the position of the mandibular nerve foramen comprises the following steps: because the size of the mask matrix is consistent with that of the image matrix of the corresponding coronal image, the point coordinate with the median value of 1 in the mask matrix can be directly corresponding to the coronal image, the coordinate points are connected in sequence, and the connected closed area is the obtained position of the mandibular nerve foramen.
The specific implementation steps described above make the invention clearer: they provide a more intuitive and accurate grasp of dental nerve canal identification, realize a new concept and method for identifying the nerve canal, and assist the dentist in clinical surgery, thereby improving accuracy. Any modification or variation of the invention within its spirit and the scope of the claims falls within the scope of the invention.
Claims (3)
1. A Mask RCNN-based method for automatically identifying the mandibular nerve canal in oral CBCT images, characterized by comprising the following steps:
Step one: collect oral CBCT image data and preprocess its coronal slices: denoise by removing coronal slices of the CBCT sequence in which the mandibular nerve foramen is not visible; compress the format of the remaining images to reduce training complexity; manually mark points along the outer contour of the mandibular nerve foramen with a point-marking tool, take the point coordinates as label data, and store the labels in dictionary form;
Step two: build the neural network model: use a feature pyramid network to obtain multi-scale feature maps as input to the region proposal network; generate anchor boxes, classify, locate rectangular boxes, and finally obtain a binary mask for each target instance; the backbone adopts a ResNet-101 network structure;
Step three: train the model: fix the hyper-parameters outside the region proposal network during training to minimize the training cost of Mask RCNN; generate masks from the label data stored in dictionary form, determine the rectangular boxes, and set the classification categories to mandibular nerve foramen and background;
Step four: identify the mandibular nerve foramen: feed the CBCT image to be processed into the trained model to obtain the classification and rectangular-box values, then the mask values, and finally determine the position of the mandibular nerve foramen from the mask and the corresponding denoised input image;
Step five: display the mandibular nerve canal: from the obtained correspondence between coronal slices and mandibular nerve foramina, derive the three-dimensional position of the mandibular nerve canal and display it in the three-dimensional view, thereby achieving automatic identification of the mandibular nerve canal.
2. The Mask RCNN-based method for automatically identifying the mandibular nerve canal in oral CBCT images according to claim 1, characterized in that: in step three, the mask is generated from the label data as follows: create a matrix of the same size as the original image with all values 0, then set to 1 the positions corresponding to the point coordinates in the label data; the resulting matrix is the binary mask; the rectangular box is determined as follows: take the minimal rectangle enclosing the mask, with h the rectangle's height, w its width, and (x, y) the coordinates of its lower-right vertex, so that the rectangular box is the four-dimensional coordinate information (h, w, x, y).
3. The Mask RCNN-based method for automatically identifying the mandibular nerve canal in oral CBCT images according to claim 1 or 2, characterized in that: in step four, the position of the mandibular nerve foramen is obtained as follows: because the mask matrix has the same size as the image matrix of the corresponding coronal slice, each point with value 1 in the mask matrix maps directly onto the coronal image; connecting these coordinate points in order yields a closed region, which is the position of the mandibular nerve foramen.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910776183.9A CN110610198A (en) | 2019-08-22 | 2019-08-22 | Mask RCNN-based automatic oral CBCT image mandibular neural tube identification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910776183.9A CN110610198A (en) | 2019-08-22 | 2019-08-22 | Mask RCNN-based automatic oral CBCT image mandibular neural tube identification method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110610198A true CN110610198A (en) | 2019-12-24 |
Family
ID=68890394
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910776183.9A Pending CN110610198A (en) | 2019-08-22 | 2019-08-22 | Mask RCNN-based automatic oral CBCT image mandibular neural tube identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110610198A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112418109A (en) * | 2020-11-26 | 2021-02-26 | 复旦大学附属中山医院 | Image processing method and device |
CN113643446A (en) * | 2021-08-11 | 2021-11-12 | 北京朗视仪器股份有限公司 | Automatic marking method and device for mandibular neural tube and electronic equipment |
CN113658679A (en) * | 2021-07-13 | 2021-11-16 | 南京邮电大学 | Automatic evaluation method for alveolar nerve injury risk under medical image |
CN114677374A (en) * | 2022-05-27 | 2022-06-28 | 杭州键嘉机器人有限公司 | Method for extracting central line and calculating radius of mandibular neural tube |
CN115830034A (en) * | 2023-02-24 | 2023-03-21 | 淄博市中心医院 | Data analysis system for oral health management |
US11890124B2 (en) | 2021-02-01 | 2024-02-06 | Medtronic Navigation, Inc. | Systems and methods for low-dose AI-based imaging |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110045431A1 (en) * | 2008-11-18 | 2011-02-24 | Groscurth Randall C | Bone screw linking device |
CN102626347A (en) * | 2012-04-26 | 2012-08-08 | 上海优益基医疗器械有限公司 | Method for manufacturing oral implant positioning guiding template based on CBCT data |
US20140227655A1 (en) * | 2013-02-12 | 2014-08-14 | Ormco Corporation | Integration of model data, surface data, and volumetric data |
CN108470375A (en) * | 2018-04-26 | 2018-08-31 | 重庆市劢齐医疗科技有限责任公司 | Nerve trachea automatic detection algorithm based on deep learning |
CN109816686A (en) * | 2019-01-15 | 2019-05-28 | 山东大学 | Robot semanteme SLAM method, processor and robot based on object example match |
- 2019-08-22: CN application CN201910776183.9A filed (published as CN110610198A, status Pending)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110045431A1 (en) * | 2008-11-18 | 2011-02-24 | Groscurth Randall C | Bone screw linking device |
CN102626347A (en) * | 2012-04-26 | 2012-08-08 | 上海优益基医疗器械有限公司 | Method for manufacturing oral implant positioning guiding template based on CBCT data |
US20140227655A1 (en) * | 2013-02-12 | 2014-08-14 | Ormco Corporation | Integration of model data, surface data, and volumetric data |
CN108470375A (en) * | 2018-04-26 | 2018-08-31 | 重庆市劢齐医疗科技有限责任公司 | Nerve trachea automatic detection algorithm based on deep learning |
CN109816686A (en) * | 2019-01-15 | 2019-05-28 | 山东大学 | Robot semanteme SLAM method, processor and robot based on object example match |
Non-Patent Citations (2)
Title |
---|
ZHIMING CUI ET AL: "ToothNet: Automatic Tooth Instance Segmentation and Identification from Cone Beam CT Images", 《2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)》 *
JU HAO ET AL: "Basic principles of CBCT and progress of its application in dental specialties", 《Journal of Medical Imaging》 *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112418109A (en) * | 2020-11-26 | 2021-02-26 | 复旦大学附属中山医院 | Image processing method and device |
CN112418109B (en) * | 2020-11-26 | 2024-05-14 | 复旦大学附属中山医院 | Image processing method and device |
US11890124B2 (en) | 2021-02-01 | 2024-02-06 | Medtronic Navigation, Inc. | Systems and methods for low-dose AI-based imaging |
CN113658679A (en) * | 2021-07-13 | 2021-11-16 | 南京邮电大学 | Automatic evaluation method for alveolar nerve injury risk under medical image |
CN113658679B (en) * | 2021-07-13 | 2024-02-23 | 南京邮电大学 | Automatic assessment method for risk of alveolar nerve injury under medical image |
CN113643446A (en) * | 2021-08-11 | 2021-11-12 | 北京朗视仪器股份有限公司 | Automatic marking method and device for mandibular neural tube and electronic equipment |
CN114677374A (en) * | 2022-05-27 | 2022-06-28 | 杭州键嘉机器人有限公司 | Method for extracting central line and calculating radius of mandibular neural tube |
CN115830034A (en) * | 2023-02-24 | 2023-03-21 | 淄博市中心医院 | Data analysis system for oral health management |
CN115830034B (en) * | 2023-02-24 | 2023-05-09 | 淄博市中心医院 | Data analysis system for oral health management |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110610198A (en) | Mask RCNN-based automatic oral CBCT image mandibular neural tube identification method | |
CN112120810A (en) | Three-dimensional data generation method of tooth orthodontic concealed appliance | |
CN110956635A (en) | Lung segment segmentation method, device, equipment and storage medium | |
CN107203998B (en) | Method for carrying out dentition segmentation on cone beam CT image | |
CN107680110B (en) | Inner ear three-dimensional level set segmentation method based on statistical shape model | |
CN112515787B (en) | Three-dimensional dental data analysis method | |
CN110689564B (en) | Dental arch line drawing method based on super-pixel clustering | |
CN114757960B (en) | Tooth segmentation and reconstruction method based on CBCT image and storage medium | |
CN110363750B (en) | Automatic extraction method for root canal morphology based on multi-mode data fusion | |
JP7261245B2 (en) | Methods, systems, and computer programs for segmenting pulp regions from images | |
CN111685899A (en) | Dental orthodontic treatment monitoring method based on intraoral images and three-dimensional models | |
CN113223010A (en) | Method and system for fully automatically segmenting multiple tissues of oral cavity image | |
CN114638852A (en) | Jaw bone and soft tissue identification and reconstruction method, device and medium based on CBCT image | |
CN113344950A (en) | CBCT image tooth segmentation method combining deep learning with point cloud semantics | |
CN115457198A (en) | Tooth model generation method and device, electronic equipment and storage medium | |
CN111627014A (en) | Root canal detection and scoring method and system based on deep learning | |
CN114642444A (en) | Oral implantation precision evaluation method and system and terminal equipment | |
CN117011318A (en) | Tooth CT image three-dimensional segmentation method, system, equipment and medium | |
KR102255592B1 (en) | method of processing dental CT images for improving precision of margin line extracted therefrom | |
CN111986216A (en) | RSG liver CT image interactive segmentation algorithm based on neural network improvement | |
CN116823729A (en) | Alveolar bone absorption judging method based on SegFormer and oral cavity curved surface broken sheet | |
CN113506301B (en) | Tooth image segmentation method and device | |
CN110327072B (en) | Nondestructive testing method for measuring specification parameters of oral surgery implant | |
US20240127445A1 (en) | Method of segmenting computed tomography images of teeth | |
CN114359317A (en) | Blood vessel reconstruction method based on small sample identification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20191224 |