CN112561865A - Permanent molar position detection model training method, system and storage medium - Google Patents

Permanent molar position detection model training method, system and storage medium

Info

Publication number
CN112561865A
CN112561865A (application CN202011406995.3A)
Authority
CN
China
Prior art keywords
image, images, feature, detection model, feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011406995.3A
Other languages
Chinese (zh)
Other versions
CN112561865B (en)
Inventor
黄少宏
赵志广
范卫华
李菊红
易超
林良强
李剑波
武剑
朱佳
刘勇
严志文
邢玉林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Gree Health Management Co ltd
Original Assignee
Shenzhen Gree Health Management Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Gree Health Management Co ltd
Priority to CN202011406995.3A
Publication of CN112561865A
Application granted
Publication of CN112561865B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/25 Fusion techniques
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30036 Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a permanent molar position detection model training method, system and storage medium, wherein the method comprises the following steps: acquiring a plurality of oral dentition images as first images; performing first feature extraction on the first image to obtain a first feature image; selectively activating the first feature image in the channel dimension and the spatial dimension respectively; performing second feature extraction on the selectively activated first feature image; predicting detection frames for the permanent molar position according to the image features obtained by the second feature extraction; fusing the information of the detection frames to generate a first confidence; taking the maximum first confidence as the prediction output result; and reversely updating the parameters of the detection model according to the prediction output result. The detection model disclosed by the invention is more targeted and better adapted to different application situations, so that the accuracy of its detection results in application is improved. The invention can be widely applied in the technical field of model training.

Description

Permanent molar position detection model training method, system and storage medium
Technical Field
The invention relates to the technical field of model training, in particular to a permanent molar position detection model training method, system and storage medium.
Background
Caries is a chronic disease in which bacteria (cariogenic bacteria) produce acid from carbohydrates in food, resulting in progressive destruction of the hard tissues of teeth. It is a major common disease of the oral cavity and one of the most common diseases of human beings. School-age children of 6-12 years old are often fond of sweet, soft and sticky foods that easily adhere to the teeth, yet children at this stage often have poor oral hygiene habits or have not mastered proper cleaning methods, so their teeth are difficult to clean effectively, making them a high-incidence population for caries. In addition, children at this stage are in the tooth replacement period: the pits and fissures of newly erupted permanent teeth, especially newly erupted permanent molars, are often deep, so bacteria accumulate easily and are difficult to clean away, and the acidic secretions the bacteria produce damage the hard tissues of the teeth, readily producing decayed teeth. Therefore, timely screening of children's oral conditions and early intervention are essential to preventing caries in permanent teeth. Pit and fissure sealing is the best method recommended by the World Health Organization for preventing permanent tooth caries; it has been generally popularized in many countries and regions, and pit and fissure sealing for school-age children is also being widely promoted in China. Pit and fissure sealing refers to a method of preventing pit and fissure caries by coating a sealing material onto the pits and fissures of the occlusal, buccal and lingual surfaces of the dental crown; after the material flows into and permeates the pits and fissures, it solidifies and hardens to form a protective barrier that covers the deep pits and fissures and prevents cariogenic bacteria and acidic metabolites from eroding the tooth body.
The first step of pit and fissure sealing is to screen out permanent molars with indications. Previous pit and fissure sealing programs were screened manually by stomatologists, which requires a large amount of manpower, material resources and financial resources. In order to save these resources, a method for online detection and early warning of teeth has been proposed. However, current online detection models lack pertinence and adaptability to different situations, so the accuracy of their permanent molar recognition results in application is not high.
Disclosure of Invention
To solve one of the above technical problems, the present invention aims to provide a permanent molar position detection model training method, system and storage medium that can improve the accuracy of the detection results of a detection model in application.
In a first aspect, an embodiment of the present invention provides:
a permanent molar position detection model training method comprises the following steps:
acquiring a plurality of oral cavity dentition images as first images;
performing first feature extraction on the first image to obtain a first feature image;
selectively activating the first feature image in the channel dimension and the spatial dimension respectively;
performing second feature extraction on the selectively activated first feature image;
predicting detection frames for the permanent molar position according to the image features obtained by the second feature extraction;
fusing the information of the detection frames to generate a first confidence;
taking the maximum first confidence as a prediction output result;
and reversely updating the parameters of the detection model according to the prediction output result.
Further, the acquiring a plurality of oral dentition images as a first image includes:
acquiring a plurality of oral cavity dentition images;
processing the sizes of the oral cavity dentition images into preset sizes;
performing interlaced sampling on the dentition images after the size processing to obtain a plurality of sub-images;
and taking the spliced image of the plurality of sub-images as a first image.
Further, the performing the first feature extraction on the first image to obtain a first feature image includes:
performing down-sampling feature extraction on the first image to generate feature images with different sizes;
and taking the characteristic images with different sizes as first characteristic images.
Further, the selectively activating the first feature image in a channel dimension includes:
performing average pooling and maximum pooling on the first feature images;
and selectively activating the first feature images after the average pooling and the maximum pooling in the channel dimension by using an attention mechanism.
Further, the selectively activating the first feature image in the spatial dimension specifically comprises:
selectively activating, in the spatial dimension, the first feature image processed by the attention mechanism by using a convolution kernel.
Further, the second feature extraction is abstract feature extraction.
Further, the fusing the information of the detection frame to generate a first confidence includes:
fusing the position information, the second confidence and the image feature information of the detection frame;
and generating the first confidence according to the fusion result.
In a second aspect, an embodiment of the present invention provides:
a permanent molar position detection model training system, comprising:
the acquisition module is used for acquiring a plurality of oral cavity dentition images as first images;
the first feature extraction module is used for performing first feature extraction on the first image to obtain a first feature image;
the activation module is used for selectively activating the first feature image in the channel dimension and the spatial dimension respectively;
the second feature extraction module is used for performing second feature extraction on the selectively activated first feature image;
the prediction module is used for predicting detection frames for the permanent molar position according to the image features obtained by the second feature extraction;
the fusion module is used for fusing the information of the detection frames to generate a first confidence;
the confidence selection module is used for taking the maximum first confidence as the prediction output result;
and the parameter updating module is used for reversely updating the parameters of the detection model according to the prediction output result.
In a third aspect, an embodiment of the present invention provides:
a permanent molar position detection model training system, comprising:
at least one memory for storing a program;
at least one processor for loading the program to perform the permanent molar position detection model training method.
In a fourth aspect, an embodiment of the present invention provides:
a storage medium having stored therein a processor-executable program which, when executed by a processor, performs the permanent molar position detection model training method.
The embodiments of the invention have the following beneficial effects: first feature extraction is performed on a plurality of oral dentition images; the first feature images are then selectively activated in the channel dimension and the spatial dimension respectively; second feature extraction is performed on the selectively activated first feature images; detection frames for the permanent molar position are then predicted according to the image features obtained by the second feature extraction; the information of the detection frames is fused to generate first confidences; finally, the maximum first confidence is taken as the prediction output result, and the parameters of the detection model are reversely updated according to the prediction output result. The detection model trained in this way is more targeted and better adapted to different application situations, which improves the accuracy of its detection results in application.
Drawings
Fig. 1 is a flowchart of a permanent molar position detection model training method according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments. The step numbers in the following embodiments are provided only for convenience of illustration; the order between the steps is not limited in any way, and the execution order of the steps in the embodiments can be adapted according to the understanding of those skilled in the art.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Referring to fig. 1, an embodiment of the present invention provides a permanent molar position detection model training method. This embodiment may be applied to a server or a background processor of various platforms, so that in application a user only needs to perform the specified operations on a human-computer interaction interface to complete the detection of the permanent molar position.
Wherein, this embodiment includes the following steps:
s1, acquiring a plurality of oral dentition images as first images; in this step, the oral dentition image may be an image of the person to be detected, which is acquired in an actual detection process, and the acquisition process may be obtained by shooting the person to be detected through a mobile phone or other devices with a camera function.
When the plurality of oral dentition images are used in the training process of the detection model, in some embodiments, data enhancement such as flipping and rotation may be applied to the oral dentition images to provide more training sample data and improve training precision, as in the sketch below.
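For illustration only, a minimal sketch of such data enhancement using torchvision; the specific transforms and parameters (flip probability, rotation range) are assumptions, since the embodiment only names flipping and rotation.

    from torchvision import transforms

    # Illustrative augmentation pipeline for the oral dentition images (PIL inputs).
    # Flip and rotation are the enhancements named above; the parameters are assumed.
    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),  # flipping ("inversion")
        transforms.RandomRotation(degrees=15),   # random rotation within +/-15 degrees
        transforms.ToTensor(),                   # HWC uint8 -> CHW float in [0, 1]
    ])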
In some embodiments, the step S1 can be implemented by:
acquiring a plurality of oral dentition images. The plurality of oral dentition images are images shot by different mobile intelligent devices. Because the sizes of the shot images differ while the input size of the neural network is fixed, the sizes of the plurality of oral dentition images are processed into a preset size. In this process, the image size may be set to 512 × 512; that is, the size of the input image data is C × W × H, where C = 3 indicates that the number of input image channels is 3, the channels representing the red, green and blue color components in turn; W = 512 is the width of the image; and H = 512 is the height of the image. Secondly, interlaced sampling is performed on the size-processed dentition image to obtain a plurality of sub-images. For example, for a 512 × 512 image, starting from each of the points in the 2 × 2 region in the upper left corner of the image, the original image is sampled at intervals of length 2, resulting in 4 sub-images of 256 × 256. Finally, the image formed by splicing the plurality of sub-images is taken as the first image input to the detection model, so as to make full use of the information in the image. A sketch of this sampling follows.
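For illustration, a minimal sketch of the interlaced sampling just described (the same pixel-shuffling idea as a space-to-depth or "focus" operation); stacking the four sub-images along the channel dimension is an assumption, since the embodiment only states that the sub-images are spliced.

    import torch

    def interlaced_sampling(x: torch.Tensor) -> torch.Tensor:
        # x: (C, H, W) image tensor, e.g. (3, 512, 512).
        # Starting from each point of the top-left 2 x 2 region, sample
        # every 2nd pixel, giving 4 sub-images at half resolution.
        s00 = x[:, 0::2, 0::2]  # rows 0,2,4,..., cols 0,2,4,...
        s01 = x[:, 0::2, 1::2]
        s10 = x[:, 1::2, 0::2]
        s11 = x[:, 1::2, 1::2]
        # "Splice" the sub-images; concatenating along the channel axis
        # (3 x 512 x 512 -> 12 x 256 x 256) is one common choice.
        return torch.cat([s00, s01, s10, s11], dim=0)

    first_image = interlaced_sampling(torch.rand(3, 512, 512))
    print(first_image.shape)  # torch.Size([12, 256, 256])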
S2, performing first feature extraction on the first image to obtain a first feature image.
In this step, the first feature extraction is performed on the first image by a feature extraction module. The feature extraction module is a stack of eight network layers; every two of the eight layers form a down-sampling feature extraction group, giving four groups in total, and each group produces a feature map of the corresponding size. After feature extraction by the down-sampling feature extraction groups, 4 feature maps of sizes 512 × 512, 256 × 256, 128 × 128 and 64 × 64 can be obtained, and these feature images of different sizes are taken as the first feature images. A sketch of such a backbone follows.
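A minimal sketch of a backbone organized as four two-layer down-sampling groups (eight layers in total) that returns one feature map per group; the layer types and channel widths are assumptions, as the embodiment does not specify them.

    import torch
    import torch.nn as nn

    class DownGroup(nn.Module):
        """Two conv layers; the second halves the spatial resolution."""
        def __init__(self, c_in: int, c_out: int):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, stride=1, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
        def forward(self, x):
            return self.body(x)

    class Backbone(nn.Module):
        """Four down-sampling groups (eight layers); returns all four maps."""
        def __init__(self, c_in: int = 12):  # 12 channels if fed the spliced image
            super().__init__()
            widths = [32, 64, 128, 256]      # assumed channel widths
            groups, prev = [], c_in
            for w in widths:
                groups.append(DownGroup(prev, w))
                prev = w
            self.groups = nn.ModuleList(groups)
        def forward(self, x):
            feats = []
            for g in self.groups:
                x = g(x)             # resolution halves at each group
                feats.append(x)
            return feats             # multi-scale first feature images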
S3, selectively activating the first feature image in the channel dimension and the spatial dimension respectively.
in some embodiments, selective activation of the first feature image in the channel dimension has been achieved by:
performing average pooling and maximum pooling on the first feature images;
and selectively activating the first feature images after the average pooling and the maximum pooling in the channel dimension by using an attention mechanism.
In some embodiments, selective activation of the first feature image in the spatial dimension may be performed in the following manner:
selectively activating, in the spatial dimension, the first feature image processed by the attention mechanism by using a convolution kernel.
In the above embodiment, the attention mechanism and the convolution kernel form an attention module. Specifically, selective activation in the channel dimension is performed first: the feature maps are subjected to maximum pooling and average pooling respectively, and the feature maps obtained by the two operations are then used to selectively activate the corresponding feature channels, where the selection of activation channels is a parameter learned iteratively by the neural network; finally, the two features obtained after channel-dimension activation are superimposed and fused as the channel-activated features. Second comes selective activation in the spatial dimension: the average-pooled and max-pooled feature maps are used together as input, a 7 × 7 convolution kernel is used to enlarge the receptive field, and the convolution output is mapped by a sigmoid activation function to a value in [0, 1] representing the importance of each position in the feature map relative to the prediction target.
Specifically, in the selective activation of the channel dimension, the calculation is performed using Equation 1:

F_S = σ(ω_1(ω_0(F_avg^c)) + ω_1(ω_0(F_max^c)))  (Equation 1)

where F_S is the feature selectively activated in the channel dimension, F_avg^c represents the image features after average pooling, F_max^c represents the image features after maximum pooling, ω_0 and ω_1 are two different matrix parameters representing the weights over the channels, and σ represents the sigmoid activation function.
In this step, the same two parameters ω_0 and ω_1 are applied to the feature maps processed in the two different manners, which gives the parameters stronger robustness; a feature map to which the two parameters are applied in succession can contain more neurons to fit features that are as complex as possible while keeping the size of the feature map unchanged. Finally, the activation function denoted by σ maps the real domain R to [0, 1] to normalize the output. A sketch of this channel attention follows.
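For illustration, a minimal sketch of channel-dimension selective activation consistent with Equation 1; realizing ω_0 and ω_1 as a shared two-layer 1 × 1-convolution MLP with a reduction ratio of 16 is an assumption.

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            # Shared parameters (omega_0, omega_1) applied to both pooled branches.
            self.mlp = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, 1, bias=False),  # omega_0
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1, bias=False),  # omega_1
            )
        def forward(self, x):
            avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))  # F_avg^c branch
            mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))   # F_max^c branch
            scale = torch.sigmoid(avg + mx)                          # Equation 1
            return x * scale  # selectively activate the channels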
In the selective activation of the spatial dimension, the calculation is performed using Equation 2:

M_S = σ(f^{7×7}([F_avg^s ; F_max^s]))  (Equation 2)

where M_S is the spatial attention map, f^{7×7} represents the 7 × 7 convolution operation, F_avg^s represents the image features obtained by average pooling of the features obtained through channel-dimension selective activation, F_max^s represents the image features obtained by maximum pooling of those same features, [· ; ·] denotes concatenation, and σ represents the sigmoid activation function.
In this step, a larger convolution kernel is used to enlarge the receptive field, which helps the model infer the importance of the current position from the current pixel and the content of its neighborhood. A sketch of this spatial attention follows.
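For illustration, a minimal sketch of spatial-dimension selective activation consistent with Equation 2; pooling across the channel axis to form the two input maps is an assumption based on the equation's inputs.

    import torch
    import torch.nn as nn

    class SpatialAttention(nn.Module):
        def __init__(self):
            super().__init__()
            # 7 x 7 convolution over the two pooled maps enlarges the receptive field.
            self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)
        def forward(self, x):
            avg = torch.mean(x, dim=1, keepdim=True)  # F_avg^s (channel-wise average)
            mx = torch.amax(x, dim=1, keepdim=True)   # F_max^s (channel-wise maximum)
            m = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))  # Equation 2
            return x * m  # per-position importance in [0, 1]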
S4, performing second feature extraction on the selectively activated first feature image; in this step, the second feature extraction is abstract feature extraction.
S5, predicting detection frames for the permanent molar position according to the image features obtained by the second feature extraction; specifically, this step predicts the position and size of the detection frames on the first image.
S6, fusing the information of the detection frames to generate a first confidence.
In some embodiments, step S6 may be specifically implemented by:
fusing the position information, the second confidence and the image feature information of the detection frame;
and generating the first confidence according to the fusion result.
In this embodiment, the second confidence is the original confidence of the detection frame itself, and the first confidence is the confidence obtained after fusing that confidence with the image information of the same detection frame.
An existing model may predict deviated detection frames, for example frames indicating two teeth of the same type on the same side, which is impossible in practice. Therefore, in some embodiments, a detection frame screening mechanism employing multimodal information fusion is adopted. Specifically, existing detection frame screening methods judge only by the confidence generated by the regression model and the overlapping area between different detection frames, ignoring the image feature information of the candidate detection frames themselves. In this embodiment, the position information and the image features of a candidate frame are regarded as information of two different modalities, and the information of the two modalities is fused, so that the detection frames can be screened effectively.
Specifically, when fusing the information of the different modalities, Equation 3 and Equation 4 are adopted:

f_A = σ(W_A [f_A^0 ; f_I^0] + b_A)  (Equation 3)

f_I = σ(W_I [f_I^0 ; f_A^0] + b_I)  (Equation 4)

where f_A is the image feature after fusing the confidence information, f_I is the confidence information after fusing the image features, f_A^0 and f_I^0 are respectively the original image features and the original confidence information, [· ; ·] denotes concatenation, and the other variables are parameters self-learned by the neural network.
On the basis of fusing the information of the different modalities, the confidence of the detection frame fusing the multimodal information is calculated by further combining the position information of the image, as shown in Equation 5 to Equation 8:

f_AI = σ(W_AI [f_A ; f_I] + b_AI)  (Equation 5)

f_G = σ(W_G g + b_G)  (Equation 6)

f_total,k = [f_AI,k ; f_G,k]  (Equation 7)

f_S = σ(W_S f_total + b_S)  (Equation 8)

where f_AI represents the feature after fusing the confidence information and the image feature information, f_G is the encoded detection frame position information (g denoting the raw detection frame coordinates), f_total,k represents the feature obtained by further fusing the detection frame position information into f_AI for the k-th candidate frame, f_S represents the new confidence finally obtained after fusing the multimodal information, and the remaining letters or letter combinations represent parameters required by the neural network. A sketch of this fusion follows.
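For illustration, a minimal sketch of the multimodal fusion that produces the new confidence f_S from the image features, the original confidence and the detection frame position; the linear-plus-sigmoid form of each fusion layer and the hidden width follow the reconstructed Equations 3 to 8 and are assumptions.

    import torch
    import torch.nn as nn

    class MultimodalConfidence(nn.Module):
        """Fuse image features, original confidence and box position into f_S."""
        def __init__(self, feat_dim: int, hid: int = 64):
            super().__init__()
            self.fuse_a = nn.Linear(feat_dim + 1, hid)  # Eq. 3: image feat + confidence
            self.fuse_i = nn.Linear(feat_dim + 1, hid)  # Eq. 4: confidence + image feat
            self.fuse_ai = nn.Linear(2 * hid, hid)      # Eq. 5
            self.encode_g = nn.Linear(4, hid)           # Eq. 6: encode (x, y, w, h)
            self.score = nn.Linear(2 * hid, 1)          # Eq. 8

        def forward(self, feat0, conf0, box):
            # feat0: (N, feat_dim) image features of the candidate frames
            # conf0: (N, 1) original (second) confidences; box: (N, 4) positions
            f_a = torch.sigmoid(self.fuse_a(torch.cat([feat0, conf0], dim=1)))
            f_i = torch.sigmoid(self.fuse_i(torch.cat([conf0, feat0], dim=1)))
            f_ai = torch.sigmoid(self.fuse_ai(torch.cat([f_a, f_i], dim=1)))
            f_g = torch.sigmoid(self.encode_g(box))
            f_total = torch.cat([f_ai, f_g], dim=1)     # Eq. 7
            return torch.sigmoid(self.score(f_total))   # Eq. 8: fused confidence f_S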
S7, taking the maximum first confidence as the prediction output result; in this step, the prediction output result with the highest confidence is obtained for each region.
S8, reversely updating the parameters of the detection model according to the prediction output result, so that the detection model becomes more targeted and better adapted to different application situations, which improves the accuracy of its detection results in application. A sketch of one training step follows.
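For illustration, a minimal sketch of one training step in which the loss on the prediction output result is backpropagated to reversely update the detection model parameters; the stand-in model, loss and optimizer are assumptions, since the embodiment does not name them.

    import torch
    import torch.nn as nn

    # Hypothetical stand-in: `model` maps a first image to one logit score for
    # the selected detection frame; `target` is the ground-truth label for it.
    model = nn.Sequential(nn.Flatten(), nn.Linear(12 * 256 * 256, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    criterion = nn.BCEWithLogitsLoss()  # assumed loss

    first_image = torch.rand(1, 12, 256, 256)
    target = torch.ones(1, 1)

    prediction = model(first_image)  # prediction output result
    loss = criterion(prediction, target)
    optimizer.zero_grad()
    loss.backward()                  # backward pass ("reverse update" signal)
    optimizer.step()                 # update the detection model parameters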
This embodiment is specifically applied as follows:
the method comprises the steps of collecting color images collected by the mobile intelligent equipment, wherein the angles, light rays and covered area conditions of all the images are different. In the present embodiment, the size of the images is unified to 512 × 512.
The image data set was divided into a training set and a test set at a 4:1 ratio, i.e., 3316 pictures in the data set were used to train the model and the remaining 829 pictures were used to verify the performance of the model. The final test results are shown in Table 1:
TABLE 1

    Method            AP    AP50  AP75  AR    AR50  AR75  APmolar1  APmolar2  Time (ms)
    Baseline(D)       46.2  92.5  40.2  48.4  99.7  70.3  92.7      94.3       6.6
    Baseline(O)       44.5  89.7  37.2  46.7  98.1  64.3  -         90.1       6.1
    Baseline(A+D)     46.7  93.2  40.8  48.6  99.7  70.8  93.5      95.4       8.9
    Baseline(AN+D)    47.9  94.5  41.6  48.5  98.9  72.5  94.2      97.3      10.4
    Baseline(AN+A+D)  49.1  95.6  42.3  49.1  99.2  72.6  96.1      98.5      12.4
In Table 1, AP and AR are used as evaluation indexes; that is, a detection is counted as correct when the IoU between the detection box and the target box is greater than a threshold. Baseline(D) denotes the result when the first permanent molar and the second permanent molar are detected simultaneously, Baseline(O) the result when only the second permanent molar is detected, A denotes adding the attention mechanism, and AN denotes adding the A-NMS detection frame screening mechanism. A sketch of the IoU computation follows.
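For illustration, a minimal sketch of the IoU computation that underlies these AP/AR indexes; boxes are assumed to be in (x1, y1, x2, y2) form.

    def iou(box_a, box_b) -> float:
        # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter) if inter > 0 else 0.0

    # A detection counts as correct when iou(det, target) exceeds the threshold,
    # e.g. 0.5 for AP50 / AR50 and 0.75 for AP75 / AR75.
    print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 0.14285714285714285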
In summary, the above embodiment limits the range of target detection by adding the attention mechanism, which greatly increases the adaptability and robustness of the method. Meanwhile, a detection frame screening mechanism is set up: the image features and the position features of the target are combined to further select among the detection frames, and the candidate frames matching both are retained, so that the accuracy of the detection results on the same data set is improved.
Corresponding to the method shown in figure 1, an embodiment of the invention provides a permanent molar position detection model training system, which comprises:
the acquisition module is used for acquiring a plurality of oral cavity dentition images as first images;
the first feature extraction module is used for performing first feature extraction on the first image to obtain a first feature image;
the activation module is used for selectively activating the first feature image in the channel dimension and the spatial dimension respectively;
the second feature extraction module is used for performing second feature extraction on the selectively activated first feature image;
the prediction module is used for predicting detection frames for the permanent molar position according to the image features obtained by the second feature extraction;
the fusion module is used for fusing the information of the detection frames to generate a first confidence;
the confidence selection module is used for taking the maximum first confidence as the prediction output result;
and the parameter updating module is used for reversely updating the parameters of the detection model according to the prediction output result.
The content of the method embodiment of the invention is applicable to this system embodiment; the functions specifically implemented by the system embodiment are the same as those of the method embodiment, and the beneficial effects achieved are the same as those achieved by the method.
An embodiment of the invention provides a permanent molar position detection model training system, which comprises:
at least one memory for storing a program;
at least one processor for loading the program to perform the permanent molar position detection model training method.
The content of the method embodiment of the invention is applicable to this system embodiment; the functions specifically implemented by the system embodiment are the same as those of the method embodiment, and the beneficial effects achieved are the same as those achieved by the method.
An embodiment of the present invention provides a storage medium in which a processor-executable program is stored; when executed by a processor, the program performs the permanent molar position detection model training method.
The embodiment of the invention also discloses a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device may read the computer instructions from the computer-readable storage medium and execute them, causing the computer device to perform the method illustrated in fig. 1.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A permanent molar position detection model training method, characterized by comprising the following steps:
acquiring a plurality of oral cavity dentition images as first images;
performing first feature extraction on the first image to obtain a first feature image;
selectively activating the first feature image in the channel dimension and the spatial dimension respectively;
performing second feature extraction on the selectively activated first feature image;
predicting detection frames for the permanent molar position according to the image features obtained by the second feature extraction;
fusing the information of the detection frames to generate a first confidence;
taking the maximum first confidence as a prediction output result;
and reversely updating the parameters of the detection model according to the prediction output result.
2. The permanent molar position detection model training method according to claim 1, wherein the acquiring a plurality of oral dentition images as first images comprises:
acquiring a plurality of oral cavity dentition images;
processing the sizes of the oral cavity dentition images into preset sizes;
performing interlaced sampling on the dentition images after the size processing to obtain a plurality of sub-images;
and taking the spliced image of the plurality of sub-images as a first image.
3. The permanent molar position detection model training method according to claim 1, wherein the performing a first feature extraction on the first image to obtain a first feature image comprises:
performing down-sampling feature extraction on the first image to generate feature images with different sizes;
and taking the characteristic images with different sizes as first characteristic images.
4. The permanent molar position detection model training method according to claim 1, wherein the selectively activating the first feature image in the channel dimension comprises:
performing average pooling and maximum pooling on the first feature images;
and selectively activating the first feature images after the average pooling and the maximum pooling in the channel dimension by using an attention mechanism.
5. The permanent molar position detection model training method according to claim 4, wherein the selectively activating the first feature image in the spatial dimension is specifically:
selectively activating, in the spatial dimension, the first feature image processed by the attention mechanism by using a convolution kernel.
6. The permanent molar position detection model training method according to claim 1, wherein the second feature extraction is abstract feature extraction.
7. The permanent molar position detection model training method according to claim 1, wherein the fusing the information of the detection frame to generate a first confidence comprises:
fusing the position information, the second confidence and the image feature information of the detection frame;
and generating the first confidence according to the fusion result.
8. A permanent molar position detection model training system, comprising:
the acquisition module is used for acquiring a plurality of oral cavity dentition images as first images;
the first feature extraction module is used for performing first feature extraction on the first image to obtain a first feature image;
the activation module is used for selectively activating the first feature image in the channel dimension and the spatial dimension respectively;
the second feature extraction module is used for performing second feature extraction on the selectively activated first feature image;
the prediction module is used for predicting detection frames for the permanent molar position according to the image features obtained by the second feature extraction;
the fusion module is used for fusing the information of the detection frames to generate a first confidence;
the confidence selection module is used for taking the maximum first confidence as the prediction output result;
and the parameter updating module is used for reversely updating the parameters of the detection model according to the prediction output result.
9. A permanent molar position detection model training system, comprising:
at least one memory for storing a program;
at least one processor for loading the program to perform the permanent molar position detection model training method according to any one of claims 1-7.
10. A storage medium having stored therein a processor-executable program, wherein the processor-executable program, when executed by a processor, performs the permanent molar position detection model training method according to any one of claims 1-7.
CN202011406995.3A 2020-12-04 2020-12-04 Method, system and storage medium for training detection model of permanent molar position Active CN112561865B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011406995.3A CN112561865B (en) Method, system and storage medium for training detection model of permanent molar position

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011406995.3A CN112561865B (en) Method, system and storage medium for training detection model of permanent molar position

Publications (2)

Publication Number Publication Date
CN112561865A (en) 2021-03-26
CN112561865B CN112561865B (en) 2024-03-12

Family

ID=75048486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011406995.3A Active CN112561865B (en) 2020-12-04 2020-12-04 Method, system and storage medium for training detection model of constant molar position

Country Status (1)

Country Link
CN (1) CN112561865B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113343853A (en) * 2021-06-08 2021-09-03 深圳格瑞健康管理有限公司 Intelligent screening method and device for child dental caries

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040029068A1 (en) * 2001-04-13 2004-02-12 Orametrix, Inc. Method and system for integrated orthodontic treatment planning using unified workstation
US20130158958A1 (en) * 2010-07-12 2013-06-20 Alain Methot Dental analysis method and system
US20180028294A1 (en) * 2016-07-27 2018-02-01 James R. Glidewell Dental Ceramics, Inc. Dental cad automation using deep learning
US20190030371A1 (en) * 2017-07-28 2019-01-31 Elekta, Inc. Automated image segmentation using dcnn such as for radiation therapy
US20190313963A1 (en) * 2018-04-17 2019-10-17 VideaHealth, Inc. Dental Image Feature Detection
CN110349224A (en) * 2019-06-14 2019-10-18 众安信息技术服务有限公司 A kind of color of teeth value judgment method and system based on deep learning
CN110246571A (en) * 2019-07-30 2019-09-17 深圳市倍康美医疗电子商务有限公司 Tooth data processing method
CN110619350A (en) * 2019-08-12 2019-12-27 北京达佳互联信息技术有限公司 Image detection method, device and storage medium
CN110930421A (en) * 2019-11-22 2020-03-27 电子科技大学 Segmentation method for CBCT (Cone Beam computed tomography) tooth image
CN111882002A (en) * 2020-08-06 2020-11-03 桂林电子科技大学 MSF-AM-based low-illumination target detection method
CN111986217A (en) * 2020-09-03 2020-11-24 北京大学口腔医学院 Image processing method, device and equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
段志伟: "Object detection algorithm based on feature fusion and center prediction" (基于特征融合和中心预测的目标检测算法), Modern Computer (现代计算机), no. 09
王心醉, 董宁宁, 李欢利: "Research on image feature point extraction and matching based on the SIFT algorithm" (基于SIFT算法的图像特征点提取和匹配研究), Journal of Nanjing Medical University (Natural Science Edition), no. 02

Also Published As

Publication number Publication date
CN112561865B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
US20210174477A1 (en) Domain specific image quality assessment
US20210022833A1 (en) Generation of synthetic post treatment images of teeth
US20220218449A1 (en) Dental cad automation using deep learning
CN108416377B (en) Information extraction method and device in histogram
CN113194872B (en) Identification device, scanner system, and identification method
CN110473243B (en) Tooth segmentation method and device based on depth contour perception and computer equipment
WO2020063986A1 (en) Method and apparatus for generating three-dimensional model, device, and storage medium
CN105654436A (en) Backlight image enhancement and denoising method based on foreground-background separation
CN115362451A (en) System and method for constructing three-dimensional model from two-dimensional image
DE102012110491A1 (en) Method and device for cosmetic tooth analysis and dental consultation
CN106875361A (en) A kind of method that poisson noise is removed based on depth convolutional neural networks
US11887209B2 (en) Method for generating objects using an hourglass predictor
US20210097730A1 (en) Face Image Generation With Pose And Expression Control
CN107194426A (en) A kind of image-recognizing method based on Spiking neutral nets
CN107403065A (en) Detection method, device and the server of dental problems
CN112561865A (en) Permanent molar position detection model training method, system and storage medium
CN106898011A (en) A kind of method that convolutional neural networks convolution nuclear volume is determined based on rim detection
CN103679710B (en) The weak edge detection method of image based on multilayer neuron pool discharge information
CN113516639B (en) Training method and device for oral cavity abnormality detection model based on panoramic X-ray film
CN114586069A (en) Method for generating dental images
CN106023238A (en) Color data calibration method for camera module
CN108735010A (en) A kind of intelligent English teaching system for English teaching
JP6831433B2 (en) Identification device, tooth type identification system, identification method, and identification program
US12020372B2 (en) Method for analyzing a photo of a dental arch
CN113343853B (en) Intelligent screening method and device for dental caries of children

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
CB02: Change of applicant information
Address after: 805, Building B, Second Unified Building, Houhai Neighborhood Committee, No.1 Nanyou Huaming Road, Nanshan District, Guangdong Province, 518000
Applicant after: Shenzhen Gree Health Technology Co.,Ltd.
Address before: 805, block B, No.2 Tongjian building, Houhai neighborhood committee, No.1 Huayou Huaming Road, Nanshan District, Shenzhen, Guangdong 518000
Applicant before: Shenzhen Gree Health Management Co.,Ltd.
GR01: Patent grant