CN113343853B - Intelligent screening method and device for dental caries of children - Google Patents

Intelligent screening method and device for dental caries of children

Info

Publication number
CN113343853B
CN113343853B (application CN202110640269.6A)
Authority
CN
China
Prior art keywords
image
target
teeth
training
tooth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110640269.6A
Other languages
Chinese (zh)
Other versions
CN113343853A (en)
Inventor
黄少宏
赵志广
刘勇
范卫华
李菊红
李剑波
武剑
林良强
熊丹
徐启胜
吴俊银
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Gree Health Technology Co ltd
Original Assignee
Shenzhen Gree Health Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Gree Health Technology Co ltd filed Critical Shenzhen Gree Health Technology Co ltd
Priority to CN202110640269.6A
Publication of CN113343853A
Application granted
Publication of CN113343853B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention provides an intelligent screening method for dental caries in children, which comprises the following steps. S1: an original image is acquired with an image acquisition device; the original image includes, but is not limited to, image information of the target teeth. S2: pre-training images comprising the collected image information of the target teeth are acquired; data cleaning is performed on the pre-training images, and the positions of the target teeth in the cleaned image information are labeled; N scale feature maps of the pre-training images are then determined, different scale feature maps corresponding to different scales. S3: weights corresponding to different positions in each scale feature map are determined, and the features corresponding to each scale feature map are determined based on that scale feature map and its weights. In practical use, the target teeth can thus be photographed and classified more effectively, which facilitates subsequent management and targeted treatment.

Description

Intelligent screening method and device for dental caries of children
[ Technical field ]
The invention relates to the technical field of intelligent screening of dental caries in children, and in particular to an intelligent screening method and device for dental caries in children with an outstanding application effect and high accuracy.
[ Background Art ]
Dental caries is a chronic, progressive destruction of hard dental tissue driven mainly by bacteria, and children are a high-incidence group for oral disease because they are at the age when deciduous teeth are replaced and permanent teeth erupt. The reasons are mainly twofold: on one hand, teeth in this special tooth-replacement period are particularly prone to caries; on the other hand, children generally like sweet, soft and sticky foods that readily stick to the teeth, yet few children have a regular tooth-brushing habit, so the teeth cannot be cleaned effectively. In addition, newly erupted permanent teeth are not fully developed, the enamel is relatively poorly mineralized and has low resistance to the acids produced by bacteria, so dental caries is more likely to occur.
At present, pit and fissure sealing is the main solution to the problem of dental caries: a method of preventing caries by coating a sealing material over the pit and fissure spaces on the occlusal and buccal-lingual surfaces of the dental crown, without removing tooth tissue, so as to prevent caries caused by cariogenic bacteria and acidic metabolites. After the positions of the permanent molars in the oral cavity are obtained, the permanent molars are classified to determine whether caries is present and whether pit and fissure sealing is needed, which helps the user treat and prevent caries more effectively.
However, because the relevant medical features are not extracted, existing classification models show neither generalization nor systematic behavior when detecting permanent molars, and it is difficult for them to classify teeth accurately.
[ Summary of the invention ]
In order to overcome the problems in the prior art, the invention provides an intelligent screening method for dental caries in children with an outstanding application effect and high accuracy.
The technical solution adopted to solve this technical problem is to provide an intelligent screening method for dental caries in children, which comprises the following steps,
S1: acquiring an original image by using an image acquisition device; the original image includes, but is not limited to, image information of the target tooth;
S2: acquiring pre-training images, wherein the pre-training images comprise the collected image information of the target teeth; performing data cleaning on the pre-training images, the data cleaning deleting pre-training images that are blurred, deviate from the oral cavity, or have occluded tooth regions; labeling the positions of the target teeth in the cleaned image information; and further determining N scale feature maps of the pre-training images, wherein different scale feature maps correspond to different scales;
S3: further determining weights corresponding to different positions in each scale feature map, and determining the features corresponding to each scale feature map based on that scale feature map and its weights; fusing the features corresponding to the N scale feature maps, further predicting a plurality of detection boxes corresponding to the target teeth, and determining the positions of the target teeth in the pre-training image according to the detection boxes; each detection box is a rectangle expressed as an array comprising a leftmost coordinate value, a topmost coordinate value, a width value and a height value;
S4: utilizing a target detection model to further analyze and detect the pre-training image and the target tooth position information, and further generating an image to be processed;
S5: the image to be processed is delivered to a target classification model, and the tooth type is classified according to the tooth position information in the image to be processed using the target classification model, so as to obtain at least one of a carious tooth, a non-carious tooth, a tooth requiring pit and fissure sealing, or a tooth not requiring pit and fissure sealing; the target classification model adopts a residual neural network (ResNet) model, extracts image features through a multi-layer network, and finally outputs the probability value of each class through a softmax function;
S6: the classification information of the target teeth is sent to a corresponding control terminal or cloud server, and diagnosis and treatment management, appointment management, scheduling management, follow-up management, consumables management and regional user information management are carried out through the communication connection between the control terminal or cloud server and medical institutions, schools and government agencies; whether to activate the reservation platform to book treatment is determined according to the specific review condition;
and S7: performing the intelligent screening of dental caries in children.
Preferably, the target detection model in the step S4 includes at least an enhancement module and a classification module; the enhancement module includes a generator and a discriminator;
After the training image is acquired in the step S2, the training image is transmitted to the generator, so that similar image information of the training image is obtained; synchronously conveying the similar image and the training image to the discriminator to discriminate the similar image and the training image; classifying target teeth in the training image by using the classifier to obtain a trained classifier;
the target teeth are at least one of decayed teeth, non-decayed teeth, teeth requiring pit and fissure sealing or teeth not requiring pit and fissure sealing;
Acquiring the similar image information comprises acquiring label information corresponding to a target tooth in the training image; the generator generates a similar image consistent with the label category according to label information corresponding to the target tooth of the training image;
adding a same-category style loss to the loss of the generator to obtain the similar image; the same-category style loss is calculated using the following formula,
wherein T represents the original data, F represents the generated image, the subscripts i, j, k correspond to the length, width and channel indices of the feature scale map, and G represents the Gram matrix, which characterizes the style of the image.
Preferably, the image acquisition device in the step S1 includes an oral image acquisition robot, an electric toothbrush with a camera function, or an oral supporter with a camera function; the oral cavity image acquisition robot, the electric toothbrush with the camera shooting function or the oral cavity support with the camera shooting function are in communication connection with the control terminal or the cloud server for transmitting oral cavity photos.
Preferably, in the step S4, the training image is selectively enhanced by the enhancement module; specifically, the contrast and brightness of the target tooth image in the training image are evaluated to obtain the corresponding contrast deviation and brightness deviation, and when the contrast deviation is greater than a preset value and/or the brightness deviation is greater than a preset value, the target tooth image is subjected to enhancement processing.
Preferably, the image acquisition device comprises a hand-held part convenient for the user to hold and an acquisition part for acquiring photographs of the user's oral cavity; the acquisition part matches the shape of the user's oral cavity, a support frame for supporting the user's mouth is arranged on the acquisition part, and a camera is used for photographing the user's oral cavity;
The device further comprises a main processor, an LED fill light, a communication unit for communication with the control terminal or cloud server, and a storage unit for storing photo image information;
the hand-held part is connected with the collecting part through a movable connecting rod capable of adjusting the angle of the collecting part.
Preferably, in the step S2, the process of performing data cleaning on the pre-training images and labeling the positions of the target teeth in the cleaned image information specifically comprises: labeling the first permanent molars and the second premolars in the obtained cleaned tooth image samples with software, wherein the target teeth are the first permanent molars, and the labeled second premolars assist in confirming the positions of the first permanent molars;
after the labeling is completed, a text document with the same name as the image file is further generated and stored under the same directory, wherein the content of the text document is the position and size information of the labeling;
the labeling software is the LabelImg image annotation tool;
meanwhile, before being input to the target detection model, the labeled oral image samples are normalized; the sample images to be input to each model are divided into a training set and a test set in a certain ratio; the training set is used for training the target detection model, and the test set is used for verifying the trained target detection model;
the pixel values of all images in the training set are averaged, taking the mean over the corresponding channel of the pixels at each spatial position to obtain a mean matrix;
the training-set images are zero-centered, the value at the corresponding position of the mean matrix being subtracted from the value of each pixel on the corresponding channel.
Preferably, in the step S3, the plurality of weight values obtained for each scale feature map are multiplied by that scale feature map, a different weight being computed at each corresponding position of each map;
the target detection model further comprises a fully connected layer; the feature maps are each processed to obtain a plurality of three-dimensional arrays, and the feature maps are then fused, i.e. the plurality of three-dimensional arrays are weighted and summed to obtain the final feature information;
the classification result is obtained through the fully connected layer, so that the previously labeled target permanent molars and the teeth used for auxiliary detection can both be detected.
Preferably, the enhancement processing specifically comprises offline enhancement and online enhancement; offline enhancement directly processes the data set, the amount of data becoming the enhancement factor multiplied by the number of original data samples; online enhancement means enhancing each batch of data after the batch is obtained, including, but not limited to, rotation, translation and flipping.
Preferably, in the step S6, determining whether to activate the reservation platform to book treatment according to the specific review condition comprises evaluating the urgency of treatment from the user's tooth information and notifying the user to seek timely treatment through a WeChat applet, a short message and a telephone call; and synchronously pushing recommendation information of relevant medical institutions and doctors.
An intelligent screening device for dental caries of children is characterized in that: comprising at least one processor coupled to a memory; the processor is configured to execute a computer program or instructions stored in the memory, and the processor performs the method described above.
Compared with the prior art, the intelligent screening method for dental caries in children provided by the invention, in S1, acquires an original image with an image acquisition device, the original image including, but not limited to, image information of the target teeth; in S2, acquires pre-training images comprising the collected image information of the target teeth, performs data cleaning on the pre-training images, labels the positions of the target teeth in the cleaned image information, and further determines N scale feature maps of the pre-training images, where N is greater than or equal to 2, N is an integer, different scale feature maps correspond to different scales, and each input image has three dimensions, channel, width and height, i.e. C×W×H, the channel holding the color values on the red, green and blue channels, the width being the image width and the height being the image height; in S3, further determines the weights corresponding to different positions in each scale feature map, determines the features corresponding to each scale feature map from that map and its weights, fuses the features corresponding to the N scale feature maps, predicts a plurality of detection boxes corresponding to the target teeth, and determines the positions of the target teeth in the pre-training images from the detection boxes; in S4, uses a target detection model to further analyze and detect the pre-training images and the target tooth position information and thereby generates an image to be processed; in S5, delivers the image to be processed to a target classification model and classifies the tooth type according to the tooth position information in the image to be processed, deriving that the target tooth is at least one of a carious tooth, a non-carious tooth, a tooth requiring pit and fissure sealing, or a tooth not requiring pit and fissure sealing; and in S6, sends the classification information of the target teeth to the corresponding control terminal or cloud server, uses the communication connection between the control terminal or cloud server and medical institutions, schools and government agencies to perform diagnosis and treatment management, appointment management, scheduling management, follow-up management, consumables management and regional user information management, and determines whether to activate the reservation platform to book treatment according to the specific review condition. In practical use, the target teeth can therefore be photographed and classified more effectively, facilitating subsequent management and targeted treatment.
[ Description of the drawings ]
FIG. 1 is a schematic flow chart of an intelligent screening method for dental caries in children according to the present invention.
[ Detailed description of the preferred embodiments ]
For the purpose of making the technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and examples.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It will be understood that when an element is referred to as being "fixed to" another element, it can be directly on the other element or intervening elements may also be present. When a component is considered to be "connected" to another component, it can be directly connected to the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
With the vigorous development of information technologies such as the mobile internet, sensors, wearable devices and the Internet of Things, and the wide adoption of smart terminals such as smartphones, residents' ability to acquire and provide medical and health data has been greatly enhanced, the capability to share and exchange data and information has been greatly improved, and the channels for disseminating health information have become increasingly diverse. Advances in cloud platforms, cloud computing, big data, machine learning and computing power have enabled every industry to leverage the Internet and seize new opportunities for development. With the rapid growth of large-scale image data and computing power, artificial intelligence, and deep learning in particular, has made remarkable research progress in computer vision and related fields, providing a great deal of prior experience for developing an artificial-intelligence-based system for recognizing permanent molars in the oral cavity.
Current research in this area lacks pertinence to medical and health application scenarios: caries-related features are not extracted for detection, and models are enhanced with large amounts of generic data while the particularities of medical data sets are ignored, so it is difficult to obtain good results on images in which the teeth are densely packed and individual teeth vary in size. Moreover, from a diagnostic point of view, the choice of classification model is also particularly important.
Existing classification models show neither generalization nor systematic behavior when detecting permanent molars because the relevant medical features are not extracted, and it is difficult for them to classify teeth accurately.
Referring to FIG. 1, the intelligent screening method 1 for dental caries in children of the present invention comprises the following steps,
S1: acquiring an original image by using an image acquisition device; the original image includes, but is not limited to, image information of the target tooth;
S2: acquiring pre-training images, wherein the pre-training images comprise the collected image information of the target teeth; performing data cleaning on the pre-training images, the data cleaning deleting pre-training images that are blurred, deviate from the oral cavity, or have occluded tooth regions; labeling the positions of the target teeth in the cleaned image information; and further determining N scale feature maps of the pre-training images, wherein different scale feature maps correspond to different scales;
N is greater than or equal to 2, N is an integer, and different scale feature maps correspond to different scales.
Each image to be input has three dimensional values, channel, width and height, namely C×W×H, wherein the channel represents the color values on the three channels of red, green and blue, the width is the width of the image, and the height is the height of the image.
S3: further determining weights corresponding to different positions in each scale feature map, and determining the features corresponding to each scale feature map based on that scale feature map and its weights; fusing the features corresponding to the N scale feature maps, further predicting a plurality of detection boxes corresponding to the target teeth, and determining the positions of the target teeth in the pre-training image according to the detection boxes; each detection box is a rectangle expressed as an array comprising a leftmost coordinate value, a topmost coordinate value, a width value and a height value, all in units of pixels (a representation sketch is given after step S7);
S4: utilizing a target detection model to further analyze and detect the pre-training image and the target tooth position information, and further generating an image to be processed;
S5: the image to be processed is delivered to a target classification model, and the tooth type is classified according to the tooth position information in the image to be processed using the target classification model, so as to obtain at least one of a carious tooth, a non-carious tooth, a tooth requiring pit and fissure sealing, or a tooth not requiring pit and fissure sealing; the target classification model adopts a residual neural network (ResNet) model, extracts image features through a multi-layer network, and finally outputs the probability value of each class through a softmax function. The ResNet model uses multiple parameterized network layers to learn the residual between input and output, H(X) − X; that is, it learns the mapping X → (H(X) − X) + X, wherein X is the direct identity mapping and H(X) − X is the residual between input and output that the parameterized network layers must learn (a minimal code sketch is given after step S7).
S6: the classification information of the target teeth is sent to a corresponding control terminal or cloud server, and diagnosis and treatment management, appointment management, scheduling management, follow-up management, consumables management and regional user information management are carried out through the communication connection between the control terminal or cloud server and medical institutions, schools and government agencies; whether to activate the reservation platform to book treatment is determined according to the specific review condition;
and S7: performing the intelligent screening of dental caries in children.
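By way of illustration of step S3 above, the array form of a detection box could be represented as follows; this is a minimal sketch only, and Python, the field names and the example values are assumptions not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class DetectionBox:
    """Axis-aligned rectangular detection box; all values are in pixels."""
    x_left: float   # leftmost coordinate value
    y_top: float    # topmost coordinate value
    width: float    # width value
    height: float   # height value

    def as_tuple(self):
        # the (leftmost, topmost, width, height) form described in step S3
        return (self.x_left, self.y_top, self.width, self.height)

# hypothetical example: a box around a first permanent molar in a photograph
box = DetectionBox(x_left=412.0, y_top=265.0, width=96.0, height=88.0)
print(box.as_tuple())
```

Likewise, for step S5, a minimal sketch of a residual block and softmax classification head is given below; the patent specifies only a ResNet-style model with a softmax output, so the framework (PyTorch), the layer sizes and the four example classes are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Parameterized layers learn the residual F(x) = H(x) - x; the block outputs F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        residual = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))
        return F.relu(residual + x)   # H(x) = (H(x) - x) + x via the identity shortcut

class ToothClassifier(nn.Module):
    """Stacked residual blocks followed by a softmax over the tooth classes."""
    def __init__(self, num_classes=4):  # e.g. caries / non-caries / needs sealing / no sealing
        super().__init__()
        self.stem = nn.Conv2d(3, 64, 7, stride=2, padding=3)
        self.blocks = nn.Sequential(ResidualBlock(64), ResidualBlock(64))
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):               # x: (batch, 3, H, W) tooth image crop
        x = self.blocks(F.relu(self.stem(x)))
        x = x.mean(dim=(2, 3))          # global average pooling over the spatial dimensions
        return F.softmax(self.head(x), dim=1)   # probability value of each class
```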
The application, in S1, acquires an original image with an image acquisition device, the original image including, but not limited to, image information of the target teeth; in S2, acquires pre-training images comprising the collected image information of the target teeth, performs data cleaning on the pre-training images, labels the positions of the target teeth in the cleaned image information, and further determines N scale feature maps of the pre-training images, where N is greater than or equal to 2, N is an integer, different scale feature maps correspond to different scales, and each input image has three dimensions, channel, width and height, i.e. C×W×H, the channel holding the color values on the red, green and blue channels, the width being the image width and the height being the image height; in S3, further determines the weights corresponding to different positions in each scale feature map, determines the features corresponding to each scale feature map from that map and its weights, fuses the features corresponding to the N scale feature maps, predicts a plurality of detection boxes corresponding to the target teeth, and determines the positions of the target teeth in the pre-training images from the detection boxes; in S4, uses a target detection model to further analyze and detect the pre-training images and the target tooth position information and thereby generates an image to be processed; in S5, delivers the image to be processed to a target classification model and classifies the tooth type according to the tooth position information in the image to be processed, deriving that the target tooth is at least one of a carious tooth, a non-carious tooth, a tooth requiring pit and fissure sealing, or a tooth not requiring pit and fissure sealing; and in S6, sends the classification information of the target teeth to the corresponding control terminal or cloud server, uses the communication connection between the control terminal or cloud server and medical institutions, schools and government agencies to perform diagnosis and treatment management, appointment management, scheduling management, follow-up management, consumables management and regional user information management, and determines whether to activate the reservation platform to book treatment according to the specific review condition. In practical use, the target teeth can therefore be photographed and classified more effectively, facilitating subsequent management and targeted treatment.
In the embodiment of the application, the original image can be collected through a WeChat official account and applet; the official account may, for example, be a tooth-protection account, and the steps for collecting the original image are as follows:
search for the tooth-protection official account in WeChat and follow it; tap caries screening, collection and reservation; the user authorizes the creation of a new student record and information collection; and the user uploads four dental photos: upper right, upper left, lower right and lower left.
In a preferred embodiment, the target detection model in step S4 includes at least an enhancement module and a classification module; the enhancement module includes a generator and a discriminator;
After the training image is acquired in the step S2, the training image is transmitted to the generator, so that similar image information of the training image is obtained; synchronously conveying the similar image and the training image to the discriminator to discriminate the similar image and the training image; classifying target teeth in the training image by using the classifier to obtain a trained classifier;
The generator is used to generate enhancement data, i.e. similar data; when generating a similar image, it takes a real image as a reference and produces an image of the same category, so that the enhanced image does not deviate from the original image. For example, a real image of the "caries" class only generates similar images of the "caries" class, and a real image of the "non-caries" class only generates similar images of the "non-caries" class.
The discriminator distinguishes the real image from the similar image; training the generator against the discriminator improves the generator's ability to produce more realistic images.
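As an illustration only, one adversarial training step for the enhancement module might look like the sketch below. The patent does not specify the network architectures, losses or interfaces; the binary cross-entropy adversarial losses, the sigmoid-output discriminator and the label-conditioned call signatures are all assumptions:

```python
import torch
import torch.nn.functional as F

def enhancement_step(generator, discriminator, real_images, labels, g_opt, d_opt):
    """One adversarial step: the discriminator learns real vs. generated, and the
    generator learns to produce same-category images that fool the discriminator.

    generator(images, labels) and discriminator(images, labels) -> probability in [0, 1]
    are hypothetical, label-conditioned modules (not specified in the patent)."""
    batch = real_images.size(0)
    real_target = torch.ones(batch, 1)
    fake_target = torch.zeros(batch, 1)

    fake_images = generator(real_images, labels)        # same-category similar images

    # discriminator update: real images -> 1, generated images -> 0
    d_opt.zero_grad()
    d_loss = (F.binary_cross_entropy(discriminator(real_images, labels), real_target)
              + F.binary_cross_entropy(discriminator(fake_images.detach(), labels), fake_target))
    d_loss.backward()
    d_opt.step()

    # generator update: generated images should be judged real by the discriminator
    g_opt.zero_grad()
    g_loss = F.binary_cross_entropy(discriminator(fake_images, labels), real_target)
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```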
The target teeth are at least one of decayed teeth, non-decayed teeth, teeth requiring pit and fissure sealing or teeth not requiring pit and fissure sealing;
Acquiring the similar image information comprises acquiring label information corresponding to a target tooth in the training image; the generator generates a similar image consistent with the label category according to label information corresponding to the target tooth of the training image;
adding a same-category style loss to the loss of the generator to obtain the similar image; the same-category style loss is calculated using the following formula,
wherein T represents the original data, F represents the generated image, the subscripts i, j, k correspond to the length, width and channel indices of the feature scale map, and G represents the Gram matrix, which characterizes the style of the image.
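Assuming the same-category style loss referred to above is the standard Gram-matrix style loss consistent with the stated symbols (T for the original data, F for the generated image, G for the Gram matrix), a sketch of how it could be computed and folded into the generator loss follows; the normalization, the weighting factor and the adversarial placeholder are assumptions, not taken from the patent:

```python
import torch

def gram_matrix(feat):
    """Gram matrix G of a feature map of shape (channels, height, width).

    G[k, l] = sum over all spatial positions (i, j) of feat[k, i, j] * feat[l, i, j];
    it captures the texture statistics, i.e. the style, of the image.
    """
    c, h, w = feat.shape
    flat = feat.reshape(c, h * w)
    return flat @ flat.t() / (c * h * w)   # the normalization choice is an assumption

def same_category_style_loss(real_feat, fake_feat):
    """Squared difference between the Gram matrices of the real (T) and generated (F) features."""
    return ((gram_matrix(real_feat) - gram_matrix(fake_feat)) ** 2).sum()

def generator_loss(adv_loss, real_feat, fake_feat, style_weight=1.0):
    """Illustrative total generator loss: adversarial term plus weighted same-category style loss."""
    return adv_loss + style_weight * same_category_style_loss(real_feat, fake_feat)
```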
In a preferred embodiment, the image capturing device in step S1 includes an oral cavity image capturing robot, an electric toothbrush with a camera function, or an oral cavity supporter with a camera function; the oral cavity image acquisition robot, the electric toothbrush with the camera shooting function or the oral cavity support with the camera shooting function are in communication connection with the control terminal or the cloud server for transmitting oral cavity photos.
In a preferred embodiment, in the step S4, the training image is selectively enhanced by the enhancement module; specifically, the contrast and brightness of the target tooth image in the training image are evaluated to obtain the corresponding contrast deviation and brightness deviation, and when the contrast deviation is greater than a preset value and/or the brightness deviation is greater than a preset value, the target tooth image is subjected to enhancement processing.
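A minimal sketch of this selective-enhancement decision is given below, assuming that brightness is measured by the mean gray value, contrast by its standard deviation, and that the reference values and preset thresholds take the illustrative values shown; none of these specifics are stated in the patent:

```python
import numpy as np

# assumed reference statistics and preset thresholds -- illustrative values only
REF_BRIGHTNESS, REF_CONTRAST = 128.0, 50.0
BRIGHTNESS_THRESHOLD, CONTRAST_THRESHOLD = 0.25, 0.25

def needs_enhancement(image: np.ndarray) -> bool:
    """Return True when the contrast and/or brightness deviation exceeds its preset value."""
    gray = image.mean(axis=2) if image.ndim == 3 else image   # assume H x W x 3 color input
    brightness_deviation = abs(gray.mean() - REF_BRIGHTNESS) / REF_BRIGHTNESS
    contrast_deviation = abs(gray.std() - REF_CONTRAST) / REF_CONTRAST
    return brightness_deviation > BRIGHTNESS_THRESHOLD or contrast_deviation > CONTRAST_THRESHOLD
```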
In a preferred embodiment, the image acquisition device comprises a hand-held part convenient for the user to hold and an acquisition part for acquiring photographs of the user's oral cavity; the acquisition part matches the shape of the user's oral cavity, a support frame for supporting the user's mouth is arranged on the acquisition part, and a camera is used for photographing the user's oral cavity;
The device further comprises a main processor, an LED fill light, a communication unit for communication with the control terminal or cloud server, and a storage unit for storing photo image information;
the hand-held part is connected with the collecting part through a movable connecting rod capable of adjusting the angle of the collecting part.
In a preferred embodiment, in the step S2, the process of performing data cleaning on the pre-training images and labeling the positions of the target teeth in the cleaned image information specifically comprises: labeling the first permanent molars and the second premolars in the obtained cleaned tooth image samples with software, wherein the target teeth are the first permanent molars, and the labeled second premolars assist in confirming the positions of the first permanent molars;
Because most of the collected tooth image data are color images taken by terminal devices, the image quality is uneven. After the tooth-protection management backend obtains the tooth image data, it therefore cleans the data, deleting pictures that are blurred, deviate from the oral cavity, have occluded tooth regions, and so on, and finally obtains more accurate tooth image samples.
The target detection model requires a large number of pre-training images to be collected before training; the pre-training images are dental image data comprising a plurality of images containing the target teeth, which in this embodiment of the application are permanent molars.
The pre-training image data acquisition means may include any means by which a dental image data set may be acquired.
After the labeling is completed, a text document with the same name as the image file is further generated and stored under the same directory, wherein the content of the text document is the position and size information of the labeling;
the labeling software is the LabelImg image annotation tool;
meanwhile, before being input to the target detection model, the labeled oral image samples are normalized; the sample images to be input to each model are divided into a training set and a test set in a certain ratio; the training set is used for training the target detection model, and the test set is used for verifying the trained target detection model;
the pixel values of all images in the training set are averaged, taking the mean over the corresponding channel of the pixels at each spatial position to obtain a mean matrix;
the training-set images are zero-centered, the value at the corresponding position of the mean matrix being subtracted from the value of each pixel on the corresponding channel.
In the embodiment of the application, 4145 tooth sample images are divided into a training set and a test set at a ratio of 4:1, i.e. 3316 images in the data set are used to train the model and the remaining 829 images are used to verify the performance of the target detection model. The 3316 training images are also the ones normalized as described above, so that the model can learn better.
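The 4:1 split and the per-position, per-channel mean subtraction described above could be sketched as follows; the array layout (n, H, W, 3) and the shuffling step are assumptions for illustration:

```python
import numpy as np
from random import shuffle

def split_dataset(image_paths, train_ratio=0.8):
    """Shuffle and split image paths roughly 4:1 into training and test sets."""
    paths = list(image_paths)
    shuffle(paths)
    cut = int(len(paths) * train_ratio)   # e.g. 3316 of 4145 images for training
    return paths[:cut], paths[cut:]

def zero_center(train_images):
    """train_images: float array of shape (n, H, W, 3).

    The mean matrix holds, for each spatial position and channel, the average pixel
    value over all training images; it is subtracted from every training image.
    """
    mean_matrix = train_images.mean(axis=0)        # shape (H, W, 3)
    return train_images - mean_matrix, mean_matrix
```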
In a preferred embodiment, in the step S3, the plurality of weight values obtained for each scale feature map are multiplied by that scale feature map, a different weight being computed at each corresponding position of each map;
the target detection model further comprises a fully connected layer; the feature maps are each processed to obtain a plurality of three-dimensional arrays, and the feature maps are then fused, i.e. the plurality of three-dimensional arrays are weighted and summed to obtain the final feature information;
the classification result is obtained through the fully connected layer, so that the previously labeled target permanent molars and the teeth used for auxiliary detection can both be detected.
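A hedged sketch of this weighted multi-scale fusion followed by a fully connected layer is shown below. The patent states only that per-position weights are multiplied into each scale feature map and that the results are weighted, summed and passed through a fully connected layer; the 1x1-convolution-plus-sigmoid weighting, the spatial pooling and all layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class WeightedMultiScaleFusion(nn.Module):
    """Per-position weights for each scale feature map, fused and fed to a fully connected layer."""
    def __init__(self, channels, num_scales, num_outputs):
        super().__init__()
        # assumption: per-position weights come from a 1x1 convolution followed by a sigmoid
        self.weight_convs = nn.ModuleList(
            [nn.Conv2d(channels, 1, kernel_size=1) for _ in range(num_scales)]
        )
        self.fc = nn.Linear(channels, num_outputs)

    def forward(self, scale_maps):
        # scale_maps: list of tensors of shape (batch, channels, Hi, Wi), one per scale
        fused = 0
        for conv, fmap in zip(self.weight_convs, scale_maps):
            weights = torch.sigmoid(conv(fmap))          # a different weight at each position
            weighted = (weights * fmap).mean(dim=(2, 3)) # weighted map reduced to (batch, channels)
            fused = fused + weighted                     # weighted sum across the N scales
        return self.fc(fused)                            # fully connected layer output
```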
In a preferred embodiment, the enhancement processing specifically comprises offline enhancement and online enhancement; offline enhancement directly processes the data set, the amount of data becoming the enhancement factor multiplied by the number of original data samples; online enhancement means enhancing each batch of data after the batch is obtained, including, but not limited to, rotation, translation and flipping.
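A short sketch contrasting the two enhancement modes is given below; the enhancement factor and the particular rotation, translation and flip operations are illustrative assumptions:

```python
import numpy as np

def random_transform(image):
    """One random transform: rotation by a multiple of 90 degrees, a small translation, or a flip."""
    choice = np.random.randint(3)
    if choice == 0:
        return np.rot90(image, k=np.random.randint(1, 4))
    if choice == 1:
        return np.roll(image, shift=np.random.randint(-10, 11), axis=1)  # horizontal translation
    return np.flip(image, axis=1)                                        # horizontal flip

def offline_augment(dataset, enhancement_factor=3):
    """Offline: materialize enhancement_factor x len(dataset) transformed samples up front."""
    return [random_transform(img) for img in dataset for _ in range(enhancement_factor)]

def online_augment(batch):
    """Online: transform each batch on the fly, after the batch is obtained."""
    return [random_transform(img) for img in batch]
```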
In a preferred embodiment, in the step S6, determining whether to start the reservation platform to reserve treatment according to the specific review condition includes evaluating the emergency degree of treatment of the tooth information of the user, and notifying the user to perform timely treatment through a WeChat applet, a short message, and a telephone; and synchronously pushing the recommendation information of the related medical institutions and doctors.
An intelligent screening device for dental caries of children is characterized in that: comprising at least one processor coupled to a memory; the processor is configured to execute a computer program or instructions stored in the memory, and the processor performs the method described above.
The processor is used for acquiring an original image, determining the positions of the target teeth in the original image with the target detection model to generate the image to be processed, and determining, according to the target classification model, that the target teeth in the image to be processed are at least one of carious teeth, non-carious teeth, teeth requiring pit and fissure sealing, and teeth not requiring pit and fissure sealing.
The processor is also used for cleaning the original image data and labeling the positions of the target teeth in the cleaned data; determining N scale feature maps of the original image and determining the weights corresponding to different positions in each scale feature map; determining the features corresponding to each scale feature map based on that scale feature map and its weights; and fusing the features corresponding to the N scale feature maps, predicting a plurality of detection boxes corresponding to the target teeth, and determining the positions of the target teeth in the original image according to the detection boxes corresponding to the target teeth.
The processor is also used for acquiring a training image and inputting the training image into the generator to obtain a similar image of the training image; inputting the similar image and the training image to the discriminator, which discriminates between them; and classifying the target teeth in the training image with a classifier to obtain a trained classifier.
The processor is also used for inputting the original image into the target detection model to obtain N scale feature images.
The processor is further configured to cause the generator to generate a similar image consistent with the tag class from the tag corresponding to the target tooth of the training image.
Compared with the prior art, the intelligent screening method 1 for dental caries in children, in S1, acquires an original image with an image acquisition device, the original image including, but not limited to, image information of the target teeth; in S2, acquires pre-training images comprising the collected image information of the target teeth, performs data cleaning on the pre-training images, labels the positions of the target teeth in the cleaned image information, and further determines N scale feature maps of the pre-training images, where N is greater than or equal to 2, N is an integer, different scale feature maps correspond to different scales, and each input image has three dimensions, channel, width and height, i.e. C×W×H, the channel holding the color values on the red, green and blue channels, the width being the image width and the height being the image height; in S3, further determines the weights corresponding to different positions in each scale feature map, determines the features corresponding to each scale feature map from that map and its weights, fuses the features corresponding to the N scale feature maps, predicts a plurality of detection boxes corresponding to the target teeth, and determines the positions of the target teeth in the pre-training images from the detection boxes; in S4, uses a target detection model to further analyze and detect the pre-training images and the target tooth position information and thereby generates an image to be processed; in S5, delivers the image to be processed to a target classification model and classifies the tooth type according to the tooth position information in the image to be processed, deriving that the target tooth is at least one of a carious tooth, a non-carious tooth, a tooth requiring pit and fissure sealing, or a tooth not requiring pit and fissure sealing; and in S6, sends the classification information of the target teeth to the corresponding control terminal or cloud server, uses the communication connection between the control terminal or cloud server and medical institutions, schools and government agencies to perform diagnosis and treatment management, appointment management, scheduling management, follow-up management, consumables management and regional user information management, and determines whether to activate the reservation platform to book treatment according to the specific review condition. In practical use, the target teeth can therefore be photographed and classified more effectively, facilitating subsequent management and targeted treatment.
The embodiments of the present invention described above do not limit the scope of the present invention. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention as set forth in the appended claims.

Claims (10)

1. An intelligent screening method for dental caries of children is characterized in that: comprises the steps of,
S1: acquiring an original image by using an image acquisition device; the original image includes, but is not limited to, image information of the target tooth;
S2: acquiring a pre-training image, wherein the pre-training image comprises the acquired image information of the target teeth; performing data cleaning on the pre-training image, and deleting the pre-training image which is unclear, deviates from the oral cavity and is blocked in the tooth area by using the data cleaning; marking the position of the target tooth in the cleaned image information; further determining N scale feature images of the pre-training image, wherein different scale feature images correspond to different scales;
S3: further determining weights corresponding to different positions in each scale feature map, and determining the features corresponding to each scale feature map based on that scale feature map and its weights; fusing the features corresponding to the N scale feature maps, further predicting a plurality of detection boxes corresponding to the target teeth, and determining the positions of the target teeth in the pre-training image according to the detection boxes; each detection box being a rectangle expressed as an array comprising a leftmost coordinate value, a topmost coordinate value, a width value and a height value;
S4: utilizing a target detection model to further analyze and detect the pre-training image and the target tooth position information, and further generating an image to be processed;
S5: the image to be processed is delivered to a target classification model, and the tooth type is classified according to the tooth position information in the image to be processed using the target classification model, so as to obtain at least one of a carious tooth, a non-carious tooth, a tooth requiring pit and fissure sealing, or a tooth not requiring pit and fissure sealing; the target classification model adopts a residual neural network (ResNet) model, extracts image features through a multi-layer network, and finally outputs the probability value of each class through a softmax function;
S6: the classification information of the target teeth is sent to a corresponding control terminal or cloud server, and diagnosis and treatment management, appointment management, scheduling management, follow-up management, consumables management and regional user information management are carried out through the communication connection between the control terminal or cloud server and medical institutions, schools and government agencies; whether to activate the reservation platform to book treatment is determined according to the specific review condition;
and S7, intelligently screening the dental caries of the children.
2. A method for intelligently screening dental caries in children as defined in claim 1, wherein: the target detection model in the step S4 at least comprises an enhancement module and a classification module; the enhancement module includes a generator and a discriminator;
after the training image is acquired in the step S2, the training image is transmitted to the generator, so that similar image information of the training image is obtained; synchronously conveying the similar image and the training image to the discriminator to discriminate the similar image and the training image; classifying target teeth in the training image by using a classifier to obtain a trained classifier;
the target teeth are at least one of decayed teeth, non-decayed teeth, teeth requiring pit and fissure sealing or teeth not requiring pit and fissure sealing;
acquiring the similar image information comprises acquiring label information corresponding to a target tooth in the training image; the generator generates a similar image consistent with the label category according to the label information corresponding to the target tooth of the training image;
adding a same-category style loss to the loss of the generator to obtain the similar image; the same-category style loss is calculated using the following formula,
wherein T represents the original data, F represents the generated image, the subscripts i, j, k correspond to the length, width and channel indices of the feature scale map, and G represents the Gram matrix, which characterizes the style of the image.
3. A method for intelligently screening dental caries in children as defined in claim 1, wherein: the image acquisition device in the step S1 comprises an oral cavity image acquisition robot, an electric toothbrush with a camera shooting function or an oral cavity support with a camera shooting function; the oral cavity image acquisition robot, the electric toothbrush with the camera shooting function or the oral cavity support with the camera shooting function are in communication connection with the control terminal or the cloud server for transmitting oral cavity photos.
4. A method for intelligently screening dental caries in children as defined in claim 2, wherein: in the step S4, the training image is selectively enhanced by an enhancement module; the method specifically comprises the steps of carrying out contrast and brightness evaluation on a target tooth image in the training image to obtain corresponding contrast deviation degree and brightness deviation degree; specifically, when the contrast deviation degree is greater than a preset value and/or the brightness deviation degree is greater than a preset value, the target tooth image is subjected to enhancement processing.
5. A method for intelligently screening dental caries in children as defined in claim 3, wherein: the image acquisition equipment comprises a handheld part which is convenient for a user to hold and an acquisition part for acquiring oral photos of the user; the collecting part is matched with the shape of the oral cavity of the user, and a supporting frame for supporting the oral cavity of the user is arranged on the collecting part; the camera is used for photographing the oral cavity of the user;
The system also comprises a main processor, an LED light supplementing lamp, a communication unit and a storage unit, wherein the main processor, the LED light supplementing lamp, the communication unit is used for being in communication connection with the control terminal or the cloud server, and the storage unit is used for storing photo image information;
the hand-held part is connected with the collecting part through a movable connecting rod capable of adjusting the angle of the collecting part.
6. A method for intelligently screening dental caries in children as defined in claim 1, wherein: in the step S2, the process of performing data cleaning on the pre-training image and labeling the position of the target teeth in the cleaned image information specifically comprises: labeling the first permanent molars and the second premolars in the obtained cleaned tooth image samples with software, wherein the target teeth are the first permanent molars, and the labeled second premolars assist in confirming the positions of the first permanent molars;
after the labeling is completed, a text document with the same name as the image file is further generated and stored under the same directory, wherein the content of the text document is the position and size information of the labeling;
the labeling software is the LabelImg image annotation tool;
meanwhile, before being input to the target detection model, the labeled oral image samples are normalized; the sample images to be input to each model are divided into a training set and a test set in a certain ratio; the training set is used for training the target detection model, and the test set is used for verifying the trained target detection model;
Solving the average value of pixels of all images in the training set, and solving the average value of corresponding channels of the pixels on the same spatial position;
zero-equalizing the images of the training set, wherein the value of each pixel in the images on the corresponding channel is subtracted by the value of the corresponding position of the Gram matrix.
7. A method for intelligently screening dental caries in children as defined in claim 6, wherein: in the step S3, multiplying a plurality of weight values obtained by each scale feature map by the scale feature map, and calculating different weights at corresponding positions on each map;
the target detection model further comprises a fully connected layer; the fully connected layer processes the feature maps respectively to obtain a plurality of three-dimensional arrays, and the feature maps are then fused, i.e. the plurality of three-dimensional arrays are weighted and summed to obtain the final feature information;
and the classification result is obtained through the fully connected layer, so that the previously labeled target permanent molars and the teeth used for auxiliary detection can be detected.
8. An intelligent screening method for dental caries in children according to claim 4, wherein: the enhancement processing specifically comprises off-line enhancement and on-line enhancement; the offline enhancement includes directly processing the data sets, the number of data being changed to an enhancement factor multiplied by the number of original data sets; the online enhancement refers to the enhancement of the data of the batch after the data of the batch is obtained, and comprises, but is not limited to, rotation, translation and turnover.
9. A method for intelligently screening dental caries in children as defined in claim 1, wherein: in the step S6, determining whether to activate the reservation platform to book treatment according to the specific review condition comprises evaluating the urgency of treatment from the user's tooth information and notifying the user to seek timely treatment through a WeChat applet, a short message and a telephone call; and synchronously pushing recommendation information of relevant medical institutions and doctors.
10. An intelligent screening device for dental caries of children is characterized in that: comprising at least one processor coupled to a memory; the processor being adapted to execute a computer program or instructions stored in a memory, the processor performing the method of any one of the preceding claims 1 to 9.
CN202110640269.6A 2021-06-08 2021-06-08 Intelligent screening method and device for dental caries of children Active CN113343853B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110640269.6A CN113343853B (en) 2021-06-08 2021-06-08 Intelligent screening method and device for dental caries of children

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110640269.6A CN113343853B (en) 2021-06-08 2021-06-08 Intelligent screening method and device for dental caries of children

Publications (2)

Publication Number Publication Date
CN113343853A CN113343853A (en) 2021-09-03
CN113343853B true CN113343853B (en) 2024-06-14

Family

ID=77475408

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110640269.6A Active CN113343853B (en) 2021-06-08 2021-06-08 Intelligent screening method and device for dental caries of children

Country Status (1)

Country Link
CN (1) CN113343853B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113643297B (en) * 2021-10-18 2021-12-21 四川大学 Computer-aided age analysis method based on neural network

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111241947A (en) * 2019-12-31 2020-06-05 深圳奇迹智慧网络有限公司 Training method and device of target detection model, storage medium and computer equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108399362B (en) * 2018-01-24 2022-01-07 中山大学 Rapid pedestrian detection method and device
CN110097051A (en) * 2019-04-04 2019-08-06 平安科技(深圳)有限公司 Image classification method, device and computer readable storage medium
CN112561865B (en) * 2020-12-04 2024-03-12 深圳格瑞健康科技有限公司 Method, system and storage medium for training detection model of constant molar position
CN112561864B (en) * 2020-12-04 2024-03-29 深圳格瑞健康科技有限公司 Training method, system and storage medium for caries image classification model

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111241947A (en) * 2019-12-31 2020-06-05 深圳奇迹智慧网络有限公司 Training method and device of target detection model, storage medium and computer equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A new multi-scale deep learning ***; Gao Zan; Journal of Optoelectronics·Laser (02); pp. 102-108 *

Also Published As

Publication number Publication date
CN113343853A (en) 2021-09-03

Similar Documents

Publication Publication Date Title
US20220218449A1 (en) Dental cad automation using deep learning
CN111563887B (en) Intelligent analysis method and device for oral cavity image
US20210264611A1 (en) Deep learning for tooth detection and evaluation
US10685259B2 (en) Method for analyzing an image of a dental arch
US11314983B2 (en) Method for analyzing an image of a dental arch
US20190282344A1 (en) Dental cad automation using deep learning
US11107218B2 (en) Method for analyzing an image of a dental arch
US9463081B2 (en) Intraoral video camera and display system
US20190223983A1 (en) Methods and systems for employing artificial intelligence in automated orthodontic diagnosis & treatment planning
WO2021047185A1 (en) Monitoring method and apparatus based on facial recognition, and storage medium and computer device
JP2018055470A (en) Facial expression recognition method, facial expression recognition apparatus, computer program, and advertisement management system
US20190026893A1 (en) Method for analyzing an image of a dental arch
CN112151167A (en) Intelligent screening method for six-age dental caries of children based on deep learning
CN114424246A (en) Method, system and computer-readable storage medium for registering intraoral measurements
CN109377441A (en) Tongue with privacy protection function is as acquisition method and system
Ding et al. Detection of dental caries in oral photographs taken by mobile phones based on the YOLOv3 algorithm
CN113343853B (en) Intelligent screening method and device for dental caries of children
CN112382384A (en) Training method and diagnosis system for Turner syndrome diagnosis model and related equipment
CN111062260A (en) Automatic generation method of facial cosmetic recommendation scheme
US20240144480A1 (en) Dental treatment video
CN107886568B (en) Method and system for reconstructing facial expression by using 3D Avatar
Cholan et al. The impetus of artificial intelligence on periodontal diagnosis: a brief synopsis
Bose et al. The scope of artificial intelligence in oral radiology-a review.
Carneiro Enhanced tooth segmentation algorithm for panoramic radiographs
KR102377629B1 (en) Artificial Intelligence Deep learning-based orthodontic diagnostic device for analyzing patient image and generating orthodontic diagnostic report and operation method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 912, Shenzhen Research Institute, Chinese University of Hong Kong, 10 Yuexing 2nd Road, high tech Zone community, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen Gree Health Technology Co.,Ltd.

Address before: 518054 805, block B, second unified building of Houhai neighborhood committee, No. 1, Huayou Huaming Road, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: Shenzhen Gree Health Management Co.,Ltd.

Country or region before: China

GR01 Patent grant