CN116993702A - Method and related device for realizing single frame data gum separation by deep learning - Google Patents

Method and related device for realizing single frame data gum separation by deep learning

Info

Publication number: CN116993702A
Application number: CN202310998592.XA
Authority: CN (China)
Prior art keywords: data set, frame, oral cavity, verification, neural network
Legal status: Pending (assumed status; not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 吴刚, 陈冬灵, 潘鑫
Current and original assignee: Shenzhen Up3d Tech Co ltd
Application filed by Shenzhen Up3d Tech Co ltd; priority to CN202310998592.XA

Classifications

    • G06T 7/0012 Image analysis; inspection of images; biomedical image inspection
    • G06N 3/0464 Neural network architecture; convolutional networks [CNN, ConvNet]
    • G06N 3/084 Neural network learning methods; backpropagation, e.g. using gradient descent
    • G06T 7/10 Image analysis; segmentation; edge detection
    • G06T 2207/10024 Image acquisition modality; color image
    • G06T 2207/20081 Special algorithmic details; training; learning
    • G06T 2207/20084 Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/20132 Image segmentation details; image cropping
    • G06T 2207/30036 Subject of image; biomedical image processing; dental; teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a related device for realizing single-frame data gum separation by deep learning. The method comprises the following steps: acquiring historical single-frame oral cavity image data to form an initial historical single-frame oral cavity image set; screening the set, and proportionally dividing the resulting screened data set into a training data set, a verification data set and a test data set; marking the tooth area, the gum area and the background area; inputting the marked oral cavity image data in the marked training data set into a constructed convolutional neural network model for training, and verifying and testing the model to obtain a converged convolutional neural network model; and inputting real-time single-frame oral cavity image data into the converged convolutional neural network model for gum separation, and outputting the gum separation result of the real-time single-frame oral cavity image data. In the embodiments of the invention, gum separation of a single-frame oral cavity image can be realized, and the recognition accuracy of the gum area is greatly improved.

Description

Method and related device for realizing single frame data gum separation by deep learning
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a related device for realizing single frame data gum separation by utilizing deep learning.
Background
With the development of deep learning, neural networks that use deep learning algorithms to segment images on specific data sets have emerged in large numbers, such as U-Net (Convolutional Networks for Biomedical Image Segmentation, MICCAI 2015) and the Pyramid Scene Parsing Network (CVPR 2017). These deep learning techniques are also widely used in actual industrial production. However, given the differing usage scenarios and specific requirements of industrial production, a neural network often has to be designed and built for a particular requirement to serve a particular task. To address the problem that existing three-dimensional scanners do not segment the tooth and gum regions with sufficient precision (in particular, the recognized gum region is not large enough), the invention provides a neural network based on RGB (red, green and blue) images and depth images, which greatly improves the recognition precision of the gum region and, in particular, the recognized gum range.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, and provides a method and a related device for realizing single frame data gingival separation by utilizing deep learning, which can realize gingival separation of single frame oral cavity images and greatly improve the recognition precision of gingival areas.
In order to solve the above technical problems, an embodiment of the present invention provides a method for implementing single frame data gum separation by deep learning, the method including:
acquiring historical single-frame oral image data based on a three-dimensional intraoral scanner to form an initial historical single-frame oral image set;
screening treatment is carried out in the initial historical single-frame oral cavity image set according to preset screening conditions, and the obtained screening data set is divided into a training data set, a verification data set and a test data set according to proportion;
carrying out tooth area, gum area and background area marking treatment on historical single-frame oral cavity image data in the training data set to obtain a marked training data set;
inputting the marked oral cavity image data in the marked training data set into a constructed convolutional neural network model for training treatment, and sequentially verifying and testing the trained convolutional neural network model by using a verification data set and a test data set to obtain a converged convolutional neural network model;
and acquiring real-time single-frame oral cavity image data which are acquired by the three-dimensional intraoral scanner in real time for the oral cavity of the user, inputting the real-time single-frame oral cavity image data into a converged convolutional neural network model for gum separation, and outputting a gum separation result of the real-time single-frame oral cavity image data.
Optionally, the screening processing is performed in the initial historical single-frame oral cavity image set according to a preset screening condition, including:
screening the initial historical single-frame oral cavity images according to different acquisition angles and different conditions in the oral cavity;
wherein the different conditions within the oral cavity include a missing tooth condition, a restored tooth condition, a caries condition and a smoke-stained tooth condition; and the different acquisition angles are such that every acquisition angle involved in the initial historical single-frame oral image set is covered by historical single-frame oral images.
Optionally, the dividing the obtained screening data set into the training data set, the verification data set and the test data set according to the proportion includes:
the obtained screening data set is divided into a training data set, a verification data set and a test data set according to the proportion of 7:2:1, wherein the training data set accounts for 70%, the verification data set accounts for 20% and the test data set accounts for 10%.
Optionally, after the obtained screening data set is proportionally divided into the training data set, the verification data set and the test data set, the method further includes:
and carrying out data enhancement processing on the training data set, the verification data set and the test data set by utilizing a color enhancement and random clipping method respectively.
Optionally, the data enhancement processing is performed on the training data set, the verification data set and the test data set by using a method of color enhancement and random clipping, respectively, including:
converting the historical single-frame oral image data in the training data set, the verification data set and the test data set from the RGB color space into the HSV color space, and multiplying the Hue channel, the Saturation channel and the Value channel each by a random number in the range [-1, 1] to form color-enhanced historical single-frame oral image data;
carrying out random clipping treatment on the historical single-frame oral cavity image data in the training data set, the verification data set and the test data set, and enabling the image length and width of the clipped historical single-frame oral cavity image data to be consistent with those of the image before clipping;
wherein keeping the image length and width of the clipped historical single-frame oral image data consistent with those of the image before clipping means that the clipped-away area in the clipped historical single-frame oral image data is filled with gray values, so that the image length and width match those of the image before clipping.
Optionally, the convolutional neural network model has a structure of merging an RGB branch network and a Depth branch network, wherein the RGB branch network is used for extracting RGB features in the image, and the Depth branch network is used for extracting Depth features in the image;
The Loss function of the convolutional neural network model in the training process is the average of cross entropy and dice_loss.
Optionally, the verifying and testing the trained convolutional neural network model by sequentially using the verification data set and the test data set to obtain a converged convolutional neural network model includes:
inputting the verification data set into the trained convolutional neural network model for verification processing, outputting a verification result, and performing verification error rate calculation processing based on the verification result to obtain a verification error rate;
judging whether the verification error rate is within a preset range, if so, inputting the test data set into a trained convolutional neural network model for test processing to obtain a test result;
if the verification error rate is not within the preset range, carrying out parameter correction on the trained convolutional neural network model by using a back-propagation algorithm based on the verification error rate, and after correction, inputting the training data set into the parameter-corrected convolutional neural network model again for training, until the verification error rate is within the preset range; then inputting the test data set into the trained convolutional neural network model for testing to obtain a test result;
the indexes in the test result comprise a recall parameter, a precision parameter and an f1-score parameter; the f1-score parameter balances the recall parameter and the precision parameter as follows: f1_score = 2 × recall × precision / (recall + precision).
In addition, the embodiment of the invention also provides a device for realizing single frame data gum separation by deep learning, which comprises:
an obtaining module, configured to acquire historical single-frame oral image data collected from oral cavities by a three-dimensional intraoral scanner and form an initial historical single-frame oral image set;
a screening module, configured to carry out screening in the initial historical single-frame oral cavity image set according to preset screening conditions, and divide the obtained screening data set into a training data set, a verification data set and a test data set in proportion;
a marking module, configured to mark the tooth area, the gum area and the background area in the historical single-frame oral cavity image data of the training data set to obtain a marked training data set;
a training module, configured to input the marked oral cavity image data in the marked training data set into a constructed convolutional neural network model for training, and sequentially verify and test the trained convolutional neural network model by using the verification data set and the test data set to obtain a converged convolutional neural network model;
a gum separation module, configured to acquire real-time single-frame oral cavity image data collected in real time from the oral cavity of a user by the three-dimensional intraoral scanner, input the real-time single-frame oral cavity image data into the converged convolutional neural network model for gum separation, and output a gum separation result of the real-time single-frame oral cavity image data.
In addition, the embodiment of the application also provides a background server, which comprises a processor and a memory, wherein the processor runs a computer program or code stored in the memory to realize the method for realizing the single frame data gum separation by using deep learning.
In addition, an embodiment of the present application further provides a computer readable storage medium storing a computer program or code, which when executed by a processor, implements the method for implementing single frame data gum separation using deep learning as described in any one of the above.
In the embodiments of the application, when the three-dimensional intraoral scanner splices multi-frame point cloud data into a complete point cloud model, the redundant data of each single-frame point cloud needs to be processed; that is, gum separation is performed on the single-frame data so that the needed part of the data is kept and the unnecessary data is deleted. By the method, gum separation of single-frame data can be realized, the gum edge in the single-frame data is clear, and the required gum range can be found; thereby, the recognition accuracy of the gum region can be greatly improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings which are required in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for implementing single frame data gum separation using deep learning in an embodiment of the invention;
FIG. 2 is a schematic structural diagram of a single frame data gum separating apparatus using deep learning in an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a background server according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for implementing single frame data gum separation using deep learning according to an embodiment of the invention.
As shown in fig. 1, a single frame data gum separation method using deep learning, the method comprising:
s11: acquiring historical single-frame oral image data based on a three-dimensional intraoral scanner to form an initial historical single-frame oral image set;
In the implementation process of the invention, the initial historical single-frame oral image set is formed by historical single-frame oral image data acquired by a three-dimensional intraoral scanner from different users at different angles. Because the quality and quantity of the training data are positively correlated with the final training result, more than 20,000 historical single-frame oral images are collected. The historical single-frame oral image data should cover as many angles and poses as possible, and at the same time cover various complex situations in the oral cavity, such as missing teeth, restored teeth, decayed teeth, smoke-stained teeth and the like.
S12: screening treatment is carried out in the initial historical single-frame oral cavity image set according to preset screening conditions, and the obtained screening data set is divided into a training data set, a verification data set and a test data set according to proportion;
In the implementation process of the invention, the screening of the initial historical single-frame oral cavity image set according to preset screening conditions comprises: screening the initial historical single-frame oral cavity images according to different acquisition angles and different conditions in the oral cavity; wherein the different conditions within the oral cavity include a missing tooth condition, a restored tooth condition, a caries condition and a smoke-stained tooth condition; and the different acquisition angles are such that every acquisition angle involved in the initial historical single-frame oral image set is covered by historical single-frame oral images.
Further, the dividing the obtained screening data set into the training data set, the verification data set and the test data set according to the proportion includes: the obtained screening data set is divided into a training data set, a verification data set and a test data set according to the proportion of 7:2:1, wherein the training data set accounts for 70%, the verification data set accounts for 20% and the test data set accounts for 10%.
Further, after the obtained screening data set is proportionally divided into the training data set, the verification data set and the test data set, the method further comprises: and carrying out data enhancement processing on the training data set, the verification data set and the test data set by utilizing a color enhancement and random clipping method respectively.
Further, the data enhancement processing of the training data set, the verification data set and the test data set by the methods of color enhancement and random clipping respectively includes: converting the historical single-frame oral image data in the training data set, the verification data set and the test data set from the RGB color space into the HSV color space, and multiplying the Hue channel, the Saturation channel and the Value channel each by a random number in the range [-1, 1] to form color-enhanced historical single-frame oral image data; carrying out random clipping on the historical single-frame oral cavity image data in the training data set, the verification data set and the test data set while keeping the image length and width of the clipped historical single-frame oral cavity image data consistent with those of the image before clipping; wherein keeping the length and width consistent means that the clipped-away area in the clipped historical single-frame oral image data is filled with gray values, so that the image length and width match those of the image before clipping.
Specifically, screening is carried out in the initial historical single-frame oral image set according to different acquisition angles and different conditions in the oral cavity; wherein the different conditions in the oral cavity include missing tooth conditions, restored tooth conditions, decayed tooth conditions, smoke-stained tooth conditions and the like, and the different acquisition angles cover every acquisition angle involved in the initial historical single-frame oral image set. In other words, the screening criteria are: the pictures need to be acquired from different angles, covering all possible angles and positions as far as possible, and need to contain the various complicated situations that may occur in the mouth, namely missing teeth, restored teeth, decayed teeth, smoke-stained teeth, etc.
The screening data set is divided into a training data set, a verification data set and a test data set in the proportion 7:2:1, so that the training data set accounts for 70%, the verification data set for 20% and the test data set for 10%. To make the test result more objective, intraoral data collected from different individuals are selected separately when dividing the training, verification and test sets, and the division ratio of 7:2:1 follows the common practice for dividing large-scale semantic segmentation data sets.
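The following is an illustrative sketch only, not the patent's implementation, of such a subject-level 7:2:1 split; the mapping from subject identifiers to frame file paths and the function name are assumptions made for illustration.
```python
# Hedged sketch: split whole subjects (not individual frames) into train/val/test at 7:2:1,
# so that frames from one person's mouth never appear in more than one subset.
import random
from typing import Dict, List, Tuple

def split_by_individual(frames_by_subject: Dict[str, List[str]],
                        seed: int = 0) -> Tuple[List[str], List[str], List[str]]:
    subjects = sorted(frames_by_subject)
    random.Random(seed).shuffle(subjects)
    n = len(subjects)
    n_train, n_val = int(0.7 * n), int(0.2 * n)
    train_ids = subjects[:n_train]
    val_ids = subjects[n_train:n_train + n_val]
    test_ids = subjects[n_train + n_val:]
    collect = lambda ids: [f for s in ids for f in frames_by_subject[s]]
    return collect(train_ids), collect(val_ids), collect(test_ids)
```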
after the division is completed, the training data set, the verification data set and the test data set are required to be respectively subjected to data enhancement processing, and the data enhancement processing is performed by adopting a color enhancement and random clipping method in the application.
For color enhancement, the historical single-frame oral image data in the training data set, the verification data set and the test data set are converted from the RGB color space into the HSV color space, and the Hue channel, the Saturation channel and the Value (brightness) channel are each multiplied by a random number in the range [-1, 1] to form the color-enhanced historical single-frame oral image data. For random clipping, the historical single-frame oral cavity image data in the training data set, the verification data set and the test data set are randomly clipped while keeping the image length and width of the clipped data consistent with those of the image before clipping: the clipped-away area is filled with gray values so that the image size is unchanged. That is, assuming the length and width of the original picture are H and W, the picture obtained after random clipping must still be H by W; a part with length nH and width nW (nH < H, nW < W) is randomly taken from the original picture, and the remaining area is filled with gray values.
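A minimal sketch of these two augmentations using OpenCV and NumPy is given below as an assumed interpretation rather than the patent's own code: multiplying a channel literally by a number in [-1, 1] could invert it, so the sketch applies the random number r as a factor (1 + r) and clips to the valid range, and the crop is kept at its original position with the rest of the canvas filled by a mid-gray value (also an assumption).
```python
import cv2
import numpy as np

def color_enhance(img_bgr: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    for c in range(3):                         # Hue, Saturation, Value channels
        r = rng.uniform(-1.0, 1.0)             # random number in [-1, 1] as stated
        hsv[..., c] *= (1.0 + r)               # assumed reading of "multiply by r"
    hsv[..., 0] = np.clip(hsv[..., 0], 0, 179) # OpenCV hue range is [0, 179]
    hsv[..., 1:] = np.clip(hsv[..., 1:], 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

def random_crop_with_gray_fill(img: np.ndarray, rng: np.random.Generator,
                               gray: int = 128) -> np.ndarray:
    """Keep an nH x nW patch of the image and fill the rest with gray,
    so the output has the same H x W size as the input."""
    h, w = img.shape[:2]
    nh, nw = int(h * rng.uniform(0.5, 1.0)), int(w * rng.uniform(0.5, 1.0))
    top, left = rng.integers(0, h - nh + 1), rng.integers(0, w - nw + 1)
    canvas = np.full_like(img, gray)
    canvas[top:top + nh, left:left + nw] = img[top:top + nh, left:left + nw]
    return canvas
```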
S13: carrying out tooth area, gum area and background area marking treatment on historical single-frame oral cavity image data in the training data set to obtain a marked training data set;
In the specific implementation process of the invention, the tooth area, the gum area and the background area are marked in the historical single-frame oral image data of the training data set; namely, the labelme tool is used for annotation, with teeth marked as 1, the required gum area marked as 2 and the background marked as 0, so that the marked training data set is obtained.
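Purely as an illustration (the patent does not give its annotation pipeline), the sketch below converts labelme-style polygon annotations into a class mask with background = 0, tooth = 1 and gum = 2; the label names and the JSON field layout are assumptions.
```python
import json
import numpy as np
from PIL import Image, ImageDraw

CLASS_IDS = {"tooth": 1, "gum": 2}   # background stays 0

def labelme_json_to_mask(json_path: str) -> np.ndarray:
    with open(json_path, "r", encoding="utf-8") as f:
        ann = json.load(f)
    h, w = ann["imageHeight"], ann["imageWidth"]
    mask = Image.new("L", (w, h), 0)             # background = 0
    draw = ImageDraw.Draw(mask)
    for shape in ann["shapes"]:
        cls = CLASS_IDS.get(shape["label"])
        if cls is None:
            continue                             # skip labels we do not train on
        polygon = [tuple(p) for p in shape["points"]]
        draw.polygon(polygon, fill=cls)          # tooth -> 1, gum -> 2
    return np.array(mask, dtype=np.uint8)
```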
S14: inputting the marked oral cavity image data in the marked training data set into a constructed convolutional neural network model for training treatment, and sequentially verifying and testing the trained convolutional neural network model by using a verification data set and a test data set to obtain a converged convolutional neural network model;
in the implementation process of the invention, the structure of the convolutional neural network model is a model fusing an RGB branch network and a Depth branch network, wherein the RGB branch network is used for extracting RGB features in an image, and the Depth branch network is used for extracting Depth features in the image; the Loss function of the convolutional neural network model in the training process is the average of cross entropy and dice_loss.
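A hedged PyTorch sketch of this loss is shown below; the exact dice_loss formulation used in the patent is not given, so a standard multi-class soft Dice is assumed.
```python
import torch
import torch.nn.functional as F

def dice_loss(logits: torch.Tensor, target: torch.Tensor,
              num_classes: int = 3, eps: float = 1e-6) -> torch.Tensor:
    probs = torch.softmax(logits, dim=1)                          # (N, C, H, W)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    union = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def segmentation_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    ce = F.cross_entropy(logits, target)                          # pixel-wise cross entropy
    return 0.5 * (ce + dice_loss(logits, target))                 # average of the two terms
```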
Further, verifying and testing the trained convolutional neural network model with the verification data set and the test data set in turn to obtain a converged convolutional neural network model comprises: inputting the verification data set into the trained convolutional neural network model for verification, outputting a verification result, and calculating a verification error rate based on the verification result; judging whether the verification error rate is within a preset range, and if so, inputting the test data set into the trained convolutional neural network model for testing to obtain a test result; if the verification error rate is not within the preset range, carrying out parameter correction on the trained convolutional neural network model by using a back-propagation algorithm based on the verification error rate, and after correction, inputting the training data set into the parameter-corrected convolutional neural network model again for training, until the verification error rate is within the preset range, then inputting the test data set into the trained convolutional neural network model for testing to obtain a test result; the indexes in the test result comprise a recall parameter, a precision parameter and an f1-score parameter, and the f1-score parameter balances the recall parameter and the precision parameter as follows: f1_score = 2 × recall × precision / (recall + precision).
Specifically, the constructed convolutional neural network model fuses an RGB branch network and a Depth branch network, wherein the RGB branch network is used for extracting RGB features in the image and the Depth branch network is used for extracting depth features in the image. The module fusing the RGB branch and the Depth branch is a Squeeze-and-Excitation module (J. Hu, et al., "Squeeze-and-excitation networks," in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 7132-7141), after which a Pyramid Pooling Module (H. Zhao, et al., "Pyramid scene parsing network," in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2881-2890) is adopted to aggregate features of different scales. The loss function adopted in training is the average of cross entropy and dice_loss.
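The following skeleton is only a structural illustration under stated assumptions (the toy encoders, channel sizes and exact fusion point are not specified in the patent): two branches extract RGB and depth features, a Squeeze-and-Excitation block recalibrates the concatenated features, and a pyramid-pooling-style head aggregates multi-scale context before the three-class prediction.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEFusion(nn.Module):
    """Squeeze-and-Excitation over concatenated RGB + depth feature maps."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, rgb_feat, depth_feat):
        x = torch.cat([rgb_feat, depth_feat], dim=1)             # (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))                          # squeeze then excitation
        return x * w[:, :, None, None]                           # channel re-weighting

class DualBranchSegNet(nn.Module):
    def __init__(self, num_classes: int = 3, feat: int = 64):
        super().__init__()
        def encoder(in_ch):                                      # toy encoder stand-in
            return nn.Sequential(
                nn.Conv2d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.rgb_branch = encoder(3)                             # RGB image input
        self.depth_branch = encoder(1)                           # depth map input
        self.fusion = SEFusion(2 * feat)
        self.pool_scales = (1, 2, 3, 6)                          # pyramid-pooling context
        self.ppm_convs = nn.ModuleList(
            [nn.Conv2d(2 * feat, feat, 1) for _ in self.pool_scales])
        self.head = nn.Conv2d(2 * feat + len(self.pool_scales) * feat, num_classes, 1)

    def forward(self, rgb, depth):
        fused = self.fusion(self.rgb_branch(rgb), self.depth_branch(depth))
        h, w = fused.shape[2:]
        ctx = [fused]
        for scale, conv in zip(self.pool_scales, self.ppm_convs):
            p = F.adaptive_avg_pool2d(fused, scale)              # pool to scale x scale
            ctx.append(F.interpolate(conv(p), size=(h, w), mode="bilinear",
                                     align_corners=False))
        logits = self.head(torch.cat(ctx, dim=1))
        return F.interpolate(logits, scale_factor=4, mode="bilinear",
                             align_corners=False)                # back to input resolution
```
Under these assumptions, a forward pass takes an RGB tensor of shape (N, 3, H, W) and a depth tensor of shape (N, 1, H, W) and returns per-pixel logits of shape (N, 3, H, W).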
After training is completed, the verification data set is input into the trained convolutional neural network model for verification, a verification result is output, and the verification error rate is calculated from the verification result. Whether the verification error rate is within the preset range is then judged; if it is, the test data set is input into the trained convolutional neural network model for testing to obtain a test result. If the verification error rate is not within the preset range, the parameters of the trained convolutional neural network model are corrected by a back-propagation algorithm based on the verification error rate, and after correction the training data set is input into the parameter-corrected convolutional neural network model again for training, until the verification error rate is within the preset range; the test data set is then input into the trained convolutional neural network model for testing to obtain a test result. The test indexes are recall, precision, f1-score and similar parameters, which are used to measure the final result. Recall, the proportion of the ground-truth pixels that are predicted correctly, measures whether the targets (teeth and gums) are found; precision, the proportion of the predicted pixels that are correct, measures whether the found targets are accurate. Because recall and precision cannot always be improved at the same time, the f1-score is used to balance them, where f1_score = 2 × recall × precision / (recall + precision). In the application, all test indexes exceed 97% on the test set, so the trained converged convolutional neural network model for single-frame data gum separation has high accuracy.
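As a hedged sketch of how these indexes could be computed per class from a predicted label map and a ground-truth mask (not the patent's own evaluation code):
```python
import numpy as np

def per_class_metrics(pred: np.ndarray, gt: np.ndarray, cls: int, eps: float = 1e-9):
    tp = np.sum((pred == cls) & (gt == cls))
    fp = np.sum((pred == cls) & (gt != cls))
    fn = np.sum((pred != cls) & (gt == cls))
    recall = tp / (tp + fn + eps)        # found / all ground-truth pixels of the class
    precision = tp / (tp + fp + eps)     # correct / all pixels predicted as the class
    f1 = 2 * recall * precision / (recall + precision + eps)
    return recall, precision, f1
```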
S15: and acquiring real-time single-frame oral cavity image data which are acquired by the three-dimensional intraoral scanner in real time for the oral cavity of the user, inputting the real-time single-frame oral cavity image data into a converged convolutional neural network model for gum separation, and outputting a gum separation result of the real-time single-frame oral cavity image data.
In the implementation process of the application, when gum separation of single-frame oral cavity image data is needed, the real-time single-frame oral cavity image data acquired by the three-dimensional intraoral scanner from the oral cavity of the user is obtained, and then the real-time single-frame oral cavity image data is input into the converged convolutional neural network model for gum separation, so that the gum separation result of the real-time single-frame oral cavity image data is output.
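An illustrative inference sketch under the same assumptions as the model skeleton above (preprocessing, tensor shapes and the function name are not from the patent):
```python
import torch

@torch.no_grad()
def separate_gum(model, rgb_frame: torch.Tensor, depth_frame: torch.Tensor) -> torch.Tensor:
    """Run one real-time frame through the converged model and return the
    per-pixel class map (0 background, 1 tooth, 2 gum) used for gum separation."""
    model.eval()
    logits = model(rgb_frame.unsqueeze(0), depth_frame.unsqueeze(0))   # add batch dim
    return logits.argmax(dim=1).squeeze(0)                             # (H, W) label map
```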
In the embodiments of the application, when the three-dimensional intraoral scanner splices multi-frame point cloud data into a complete point cloud model, the redundant data of each single-frame point cloud needs to be processed; that is, gum separation is performed on the single-frame data so that the needed part of the data is kept and the unnecessary data is deleted. By the method, gum separation of single-frame data can be realized, the gum edge in the single-frame data is clear, and the required gum range can be found; thereby, the recognition accuracy of the gum region can be greatly improved.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a single frame data gum separating device implemented by deep learning according to an embodiment of the invention.
As shown in fig. 2, a single frame data gum separation apparatus using deep learning, the apparatus comprising:
Obtaining module 21: configured to acquire historical single-frame oral image data collected from oral cavities by a three-dimensional intraoral scanner and form an initial historical single-frame oral image set;
In the implementation process of the invention, the initial historical single-frame oral image set is formed by historical single-frame oral image data acquired by a three-dimensional intraoral scanner from different users at different angles. Because the quality and quantity of the training data are positively correlated with the final training result, more than 20,000 historical single-frame oral images are collected. The historical single-frame oral image data should cover as many angles and poses as possible, and at the same time cover various complex situations in the oral cavity, such as missing teeth, restored teeth, decayed teeth, smoke-stained teeth and the like.
Screening module 22: configured to carry out screening in the initial historical single-frame oral cavity image set according to preset screening conditions, and divide the obtained screening data set into a training data set, a verification data set and a test data set in proportion;
In the implementation process of the invention, the screening of the initial historical single-frame oral cavity image set according to preset screening conditions comprises: screening the initial historical single-frame oral cavity images according to different acquisition angles and different conditions in the oral cavity; wherein the different conditions within the oral cavity include a missing tooth condition, a restored tooth condition, a caries condition and a smoke-stained tooth condition; and the different acquisition angles are such that every acquisition angle involved in the initial historical single-frame oral image set is covered by historical single-frame oral images.
Further, the dividing the obtained screening data set into the training data set, the verification data set and the test data set according to the proportion includes: the obtained screening data set is divided into a training data set, a verification data set and a test data set according to the proportion of 7:2:1, wherein the training data set accounts for 70%, the verification data set accounts for 20% and the test data set accounts for 10%.
Further, after the obtained screening data set is proportionally divided into the training data set, the verification data set and the test data set, the method further comprises: and carrying out data enhancement processing on the training data set, the verification data set and the test data set by utilizing a color enhancement and random clipping method respectively.
Further, the data enhancement processing of the training data set, the verification data set and the test data set by the methods of color enhancement and random clipping respectively includes: converting the historical single-frame oral image data in the training data set, the verification data set and the test data set from the RGB color space into the HSV color space, and multiplying the Hue channel, the Saturation channel and the Value channel each by a random number in the range [-1, 1] to form color-enhanced historical single-frame oral image data; carrying out random clipping on the historical single-frame oral cavity image data in the training data set, the verification data set and the test data set while keeping the image length and width of the clipped historical single-frame oral cavity image data consistent with those of the image before clipping; wherein keeping the length and width consistent means that the clipped-away area in the clipped historical single-frame oral image data is filled with gray values, so that the image length and width match those of the image before clipping.
Specifically, screening is carried out in the initial historical single-frame oral image set according to different acquisition angles and different conditions in the oral cavity; wherein the different conditions in the oral cavity include missing tooth conditions, restored tooth conditions, decayed tooth conditions, smoke-stained tooth conditions and the like, and the different acquisition angles cover every acquisition angle involved in the initial historical single-frame oral image set. In other words, the screening criteria are: the pictures need to be acquired from different angles, covering all possible angles and positions as far as possible, and need to contain the various complicated situations that may occur in the mouth, namely missing teeth, restored teeth, decayed teeth, smoke-stained teeth, etc.
The screening data set is divided into a training data set, a verification data set and a test data set in the proportion 7:2:1, so that the training data set accounts for 70%, the verification data set for 20% and the test data set for 10%. To make the test result more objective, intraoral data collected from different individuals are selected separately when dividing the training, verification and test sets, and the division ratio of 7:2:1 follows the common practice for dividing large-scale semantic segmentation data sets.
after the division is completed, the training data set, the verification data set and the test data set are required to be respectively subjected to data enhancement processing, and the data enhancement processing is performed by adopting a color enhancement and random clipping method in the application.
For color enhancement, the historical single-frame oral image data in the training data set, the verification data set and the test data set are converted from the RGB color space into the HSV color space, and the Hue channel, the Saturation channel and the Value (brightness) channel are each multiplied by a random number in the range [-1, 1] to form the color-enhanced historical single-frame oral image data. For random clipping, the historical single-frame oral cavity image data in the training data set, the verification data set and the test data set are randomly clipped while keeping the image length and width of the clipped data consistent with those of the image before clipping: the clipped-away area is filled with gray values so that the image size is unchanged. That is, assuming the length and width of the original picture are H and W, the picture obtained after random clipping must still be H by W; a part with length nH and width nW (nH < H, nW < W) is randomly taken from the original picture, and the remaining area is filled with gray values.
Marking module 23: configured to mark the tooth area, the gum area and the background area in the historical single-frame oral cavity image data of the training data set to obtain a marked training data set;
In the specific implementation process of the invention, the tooth area, the gum area and the background area are marked in the historical single-frame oral image data of the training data set; namely, the labelme tool is used for annotation, with teeth marked as 1, the required gum area marked as 2 and the background marked as 0, so that the marked training data set is obtained.
Training module 24: configured to input the marked oral cavity image data in the marked training data set into a constructed convolutional neural network model for training, and sequentially verify and test the trained convolutional neural network model by using the verification data set and the test data set to obtain a converged convolutional neural network model;
in the implementation process of the invention, the structure of the convolutional neural network model is a model fusing an RGB branch network and a Depth branch network, wherein the RGB branch network is used for extracting RGB features in an image, and the Depth branch network is used for extracting Depth features in the image; the Loss function of the convolutional neural network model in the training process is the average of cross entropy and dice_loss.
Further, verifying and testing the trained convolutional neural network model with the verification data set and the test data set in turn to obtain a converged convolutional neural network model comprises: inputting the verification data set into the trained convolutional neural network model for verification, outputting a verification result, and calculating a verification error rate based on the verification result; judging whether the verification error rate is within a preset range, and if so, inputting the test data set into the trained convolutional neural network model for testing to obtain a test result; if the verification error rate is not within the preset range, carrying out parameter correction on the trained convolutional neural network model by using a back-propagation algorithm based on the verification error rate, and after correction, inputting the training data set into the parameter-corrected convolutional neural network model again for training, until the verification error rate is within the preset range, then inputting the test data set into the trained convolutional neural network model for testing to obtain a test result; the indexes in the test result comprise a recall parameter, a precision parameter and an f1-score parameter, and the f1-score parameter balances the recall parameter and the precision parameter as follows: f1_score = 2 × recall × precision / (recall + precision).
Specifically, the constructed convolutional neural network model fuses an RGB branch network and a Depth branch network, wherein the RGB branch network is used for extracting RGB features in the image and the Depth branch network is used for extracting depth features in the image. The module fusing the RGB branch and the Depth branch is a Squeeze-and-Excitation module (J. Hu, et al., "Squeeze-and-excitation networks," in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 7132-7141), after which a Pyramid Pooling Module (H. Zhao, et al., "Pyramid scene parsing network," in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2881-2890) is adopted to aggregate features of different scales. The loss function adopted in training is the average of cross entropy and dice_loss.
After training is completed, the verification data set is input into the trained convolutional neural network model for verification, a verification result is output, and the verification error rate is calculated from the verification result. Whether the verification error rate is within the preset range is then judged; if it is, the test data set is input into the trained convolutional neural network model for testing to obtain a test result. If the verification error rate is not within the preset range, the parameters of the trained convolutional neural network model are corrected by a back-propagation algorithm based on the verification error rate, and after correction the training data set is input into the parameter-corrected convolutional neural network model again for training, until the verification error rate is within the preset range; the test data set is then input into the trained convolutional neural network model for testing to obtain a test result. The test indexes are recall, precision, f1-score and similar parameters, which are used to measure the final result. Recall, the proportion of the ground-truth pixels that are predicted correctly, measures whether the targets (teeth and gums) are found; precision, the proportion of the predicted pixels that are correct, measures whether the found targets are accurate. Because recall and precision cannot always be improved at the same time, the f1-score is used to balance them, where f1_score = 2 × recall × precision / (recall + precision). In the application, all test indexes exceed 97% on the test set, so the trained converged convolutional neural network model for single-frame data gum separation has high accuracy.
Gum separation module 25: configured to acquire real-time single-frame oral cavity image data collected in real time from the oral cavity of a user by the three-dimensional intraoral scanner, input the real-time single-frame oral cavity image data into the converged convolutional neural network model for gum separation, and output a gum separation result of the real-time single-frame oral cavity image data.
In the implementation process of the application, when gum separation of single-frame oral cavity image data is needed, the real-time single-frame oral cavity image data acquired by the three-dimensional intraoral scanner from the oral cavity of the user is obtained, and then the real-time single-frame oral cavity image data is input into the converged convolutional neural network model for gum separation, so that the gum separation result of the real-time single-frame oral cavity image data is output.
In the embodiments of the application, when the three-dimensional intraoral scanner splices multi-frame point cloud data into a complete point cloud model, the redundant data of each single-frame point cloud needs to be processed; that is, gum separation is performed on the single-frame data so that the needed part of the data is kept and the unnecessary data is deleted. By the method, gum separation of single-frame data can be realized, the gum edge in the single-frame data is clear, and the required gum range can be found; thereby, the recognition accuracy of the gum region can be greatly improved.
The embodiment of the invention provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the method for implementing the single frame data gum separation by deep learning in any of the above embodiments. The computer-readable storage medium includes, but is not limited to, any type of disk including floppy disks, hard disks, optical disks, CD-ROMs and magneto-optical disks, ROMs (Read-Only Memory), RAMs (Random Access Memory), EPROMs (Erasable Programmable Read-Only Memory), EEPROMs (Electrically Erasable Programmable Read-Only Memory), flash memories, magnetic cards, or optical cards. That is, a storage device includes any medium that stores or transmits information in a form readable by a device (e.g., computer, cell phone), and may be read-only memory, magnetic or optical disk, etc.
The embodiment of the invention also provides a computer application program which runs on a computer and is used for executing the method for realizing the single frame data gum separation by using deep learning in any one of the embodiments.
In addition, fig. 3 is a schematic structural diagram of a background server according to an embodiment of the present invention.
The embodiment of the invention also provides a background server, as shown in fig. 3. The background server includes a processor 302, a memory 303, an input unit 304, a display unit 305, and the like. It will be appreciated by those skilled in the art that the device architecture shown in fig. 3 does not constitute a limitation of all devices, and may include more or fewer components than shown, or may combine certain components. The memory 303 may be used to store an application 301 and various functional modules, and the processor 302 runs the application 301 stored in the memory 303, thereby performing various functional applications of the device and data processing. The memory may be internal memory or external memory, or include both internal memory and external memory. The internal memory may include read-only memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), flash memory, or random access memory. The external memory may include a hard disk, floppy disk, ZIP disk, U-disk, tape, etc. The disclosed memory includes, but is not limited to, these types of memory. The memory disclosed herein is by way of example only and not by way of limitation.
The input unit 304 is used for receiving input of a signal and receiving keywords input by a user. The input unit 304 may include a touch panel and other input devices. The touch panel may collect touch operations on or near the user (e.g., the user's operation on or near the touch panel using any suitable object or accessory such as a finger, stylus, etc.), and drive the corresponding connection device according to a preset program; other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., play control keys, switch keys, etc.), a trackball, mouse, joystick, etc. The display unit 305 may be used to display information input by a user or information provided to the user and various menus of the terminal device. The display unit 305 may take the form of a liquid crystal display, an organic light emitting diode, or the like. The processor 302 is a control center of the terminal device, connects various parts of the entire device using various interfaces and lines, performs various functions and processes data by running or executing software programs and/or modules stored in the memory 303, and invoking data stored in the memory.
As one embodiment, the background server includes: the device comprises one or more processors 302, a memory 303 and one or more application programs 301, wherein the one or more application programs 301 are stored in the memory 303 and are configured to be executed by the one or more processors 302, and the one or more application programs 301 are configured to perform the method for realizing single frame data gum separation by deep learning in any of the above embodiments.
In the embodiments of the application, when the three-dimensional intraoral scanner splices multi-frame point cloud data into a complete point cloud model, the redundant data of each single-frame point cloud needs to be processed; that is, gum separation is performed on the single-frame data so that the needed part of the data is kept and the unnecessary data is deleted. By the method, gum separation of single-frame data can be realized, the gum edge in the single-frame data is clear, and the required gum range can be found; thereby, the recognition accuracy of the gum region can be greatly improved.
In addition, the foregoing describes in detail a method and related apparatus for implementing single frame data gum separation by deep learning according to the embodiments of the present application; specific examples are used herein to illustrate the principles and embodiments of the present application, and the description of the above embodiments is only intended to help understand the method and core ideas of the present application. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the ideas of the present application; in view of the above, the contents of this description should not be construed as limiting the present application.

Claims (10)

1. A method for implementing single frame data gum separation using deep learning, the method comprising:
Acquiring historical single-frame oral image data based on a three-dimensional intraoral scanner to form an initial historical single-frame oral image set;
screening treatment is carried out in the initial historical single-frame oral cavity image set according to preset screening conditions, and the obtained screening data set is divided into a training data set, a verification data set and a test data set according to proportion;
carrying out tooth area, gum area and background area marking treatment on historical single-frame oral cavity image data in the training data set to obtain a marked training data set;
inputting the marked oral cavity image data in the marked training data set into a constructed convolutional neural network model for training treatment, and sequentially verifying and testing the trained convolutional neural network model by using a verification data set and a test data set to obtain a converged convolutional neural network model;
and acquiring real-time single-frame oral cavity image data which are acquired by the three-dimensional intraoral scanner in real time for the oral cavity of the user, inputting the real-time single-frame oral cavity image data into a converged convolutional neural network model for gum separation, and outputting a gum separation result of the real-time single-frame oral cavity image data.
2. The method for achieving single frame data gum separation using deep learning as claimed in claim 1, wherein the screening in the initial historical single-frame oral cavity image set according to preset screening conditions comprises:
screening the initial historical single-frame oral cavity images according to different acquisition angles and different conditions in the oral cavity;
wherein the different conditions within the oral cavity include a missing tooth condition, a restored tooth condition, a caries condition and a smoke-stained tooth condition, and the different acquisition angles cover the acquisition angles involved in the initial historical single-frame oral cavity image set.
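For illustration only, one way the screening condition of claim 2 could be expressed over per-frame metadata is sketched below; the metadata field names and condition labels are assumptions introduced for the example.

```python
# Hypothetical frame metadata; field names and values are assumptions, not part of the claim.
frames = [
    {"id": 1, "angle": "buccal",   "condition": "missing_tooth"},
    {"id": 2, "angle": "lingual",  "condition": "caries"},
    {"id": 3, "angle": "occlusal", "condition": "restored_tooth"},
    {"id": 4, "angle": "occlusal", "condition": "smoke_stained"},
]

wanted_conditions = {"missing_tooth", "restored_tooth", "caries", "smoke_stained"}
required_angles = {f["angle"] for f in frames}   # every acquisition angle present in the set

screened = [f for f in frames if f["condition"] in wanted_conditions]
covered = {f["angle"] for f in screened}
assert required_angles <= covered, "screened set should cover all acquisition angles"
print(len(screened), "frames kept")
```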
3. The method for achieving single frame data gum separation using deep learning as claimed in claim 1, wherein the dividing the obtained screening data set into a training data set, a verification data set and a test data set in proportion comprises:
dividing the obtained screening data set into the training data set, the verification data set and the test data set in the proportion of 7:2:1, wherein the training data set accounts for 70%, the verification data set accounts for 20% and the test data set accounts for 10%.
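For illustration only, a minimal sketch of the 7:2:1 split of claim 3; the shuffling step, the fixed seed and the helper name split_721 are assumptions added for the example.

```python
import random

def split_721(items, seed=0):
    """Shuffle and split into 70% training, 20% verification and 10% test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(0.7 * n)
    n_val = int(0.2 * n)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

train, val, test = split_721(range(100))
print(len(train), len(val), len(test))  # 70 20 10
```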
4. The method for achieving single frame data gum separation using deep learning as claimed in claim 1, wherein after the dividing the obtained screening data set into the training data set, the verification data set and the test data set in proportion, the method further comprises:
performing data enhancement processing on the training data set, the verification data set and the test data set respectively by using color enhancement and random cropping methods.
5. The method for achieving single frame data gum separation using deep learning as claimed in claim 4, wherein the performing data enhancement processing on the training data set, the verification data set and the test data set respectively by using color enhancement and random cropping methods comprises:
converting the historical single-frame oral cavity image data in the training data set, the verification data set and the test data set from the RGB color space into the HSV color space, and multiplying the Hue channel, the Saturation channel and the Value channel respectively by random numbers in the range [-1, 1] to form color-enhanced historical single-frame oral cavity image data;
performing random cropping on the historical single-frame oral cavity image data in the training data set, the verification data set and the test data set, and keeping the image length and width of the cropped historical single-frame oral cavity image data consistent with those of the image before cropping;
wherein keeping the image length and width of the cropped historical single-frame oral cavity image data consistent with those of the image before cropping comprises filling the cropped-away area of the cropped historical single-frame oral cavity image data with gray values, so that the image length and width are consistent with the image length and width before cropping.
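For illustration only, a minimal sketch of the color enhancement and random cropping of claim 5 under one reading of its wording; the clipping of HSV values back to [0, 1], the gray value 0.5 and the matplotlib-based HSV conversion are assumptions not stated in the claim.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def color_enhance(img_rgb: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Multiply the H, S and V channels by random factors drawn from [-1, 1].
    Clipping back to [0, 1] is an added assumption; the claim only states the multiplication."""
    hsv = rgb_to_hsv(img_rgb.astype(np.float32))
    factors = rng.uniform(-1.0, 1.0, size=3)
    hsv = np.clip(hsv * factors, 0.0, 1.0)
    return hsv_to_rgb(hsv)

def random_crop_with_gray_fill(img: np.ndarray, crop: int, rng: np.random.Generator,
                               gray: float = 0.5) -> np.ndarray:
    """Keep a random crop x crop window and fill the removed area with a gray value,
    so the output keeps the original image length and width."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    out = np.full_like(img, gray)
    out[top:top + crop, left:left + crop] = img[top:top + crop, left:left + crop]
    return out

rng = np.random.default_rng(0)
frame = rng.random((64, 64, 3)).astype(np.float32)  # stand-in single-frame RGB image
augmented = random_crop_with_gray_fill(color_enhance(frame, rng), crop=48, rng=rng)
print(augmented.shape)  # (64, 64, 3)
```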
6. The method for realizing single frame data gum separation by deep learning according to claim 1, wherein the convolutional neural network model is a model that fuses an RGB branch network and a Depth branch network, the RGB branch network being used for extracting RGB features in an image and the Depth branch network being used for extracting depth features in the image;
and the loss function of the convolutional neural network model during training is the average of the cross-entropy loss and the Dice loss (dice_loss).
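For illustration only, a minimal sketch of a loss that averages cross-entropy and a Dice loss, as in claim 6; the soft Dice formulation and the smoothing constant are assumptions, since the claim does not specify the exact Dice variant.

```python
import torch
import torch.nn.functional as F

def ce_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Average of the cross-entropy loss and a soft Dice loss.
    logits: (B, C, H, W); target: (B, H, W) integer class labels."""
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1]).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)
    intersection = (probs * one_hot).sum(dims)
    dice = (2 * intersection + eps) / (probs.sum(dims) + one_hot.sum(dims) + eps)
    dice_loss = 1.0 - dice.mean()
    return 0.5 * (ce + dice_loss)

logits = torch.randn(2, 3, 32, 32, requires_grad=True)
target = torch.randint(0, 3, (2, 32, 32))
loss = ce_dice_loss(logits, target)
loss.backward()
print(float(loss))
```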
7. The method for implementing single frame data gum separation using deep learning as claimed in claim 1, wherein the sequentially verifying and testing the trained convolutional neural network model by using the verification data set and the test data set to obtain a converged convolutional neural network model comprises:
inputting the verification data set into the trained convolutional neural network model for verification, outputting a verification result, and calculating a verification error rate based on the verification result;
judging whether the verification error rate is within a preset range, and if so, inputting the test data set into the trained convolutional neural network model for testing to obtain a test result;
if the verification error rate is not within the preset range, correcting the parameters of the trained convolutional neural network model by using a back propagation algorithm based on the verification error rate, and after the correction, returning to the step of inputting the training data set into the parameter-corrected convolutional neural network model for training, until the verification error rate is within the preset range, and then inputting the test data set into the trained convolutional neural network model for testing to obtain the test result;
wherein the metrics in the test result include a recall parameter, a precision parameter and an f1-score parameter, the f1-score parameter being used to balance the recall parameter and the precision parameter as follows: f1_score = 2 × recall × precision / (recall + precision).
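For illustration only, a minimal sketch of computing the precision, recall and f1-score of claim 7 on a per-class basis from predicted and ground-truth label maps; the per-class evaluation and the class names are assumptions added for the example.

```python
import numpy as np

def precision_recall_f1(pred: np.ndarray, truth: np.ndarray, cls: int):
    """Per-class precision, recall and f1-score, with
    f1 = 2 * recall * precision / (recall + precision)."""
    tp = np.sum((pred == cls) & (truth == cls))
    fp = np.sum((pred == cls) & (truth != cls))
    fn = np.sum((pred != cls) & (truth == cls))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * recall * precision / (recall + precision) if recall + precision else 0.0
    return precision, recall, f1

pred = np.random.randint(0, 3, size=(64, 64))    # stand-in predicted label map
truth = np.random.randint(0, 3, size=(64, 64))   # stand-in ground-truth label map
for cls, name in enumerate(("background", "tooth", "gum")):
    print(name, precision_recall_f1(pred, truth, cls))
```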
8. A single frame data gum separation apparatus utilizing deep learning, the apparatus comprising:
an acquisition module, configured to acquire historical single-frame oral cavity image data collected from oral cavities by a three-dimensional intraoral scanner and to form an initial historical single-frame oral cavity image set;
a screening module, configured to screen the initial historical single-frame oral cavity image set according to preset screening conditions and to divide the obtained screening data set into a training data set, a verification data set and a test data set in proportion;
a marking module, configured to mark tooth areas, gum areas and background areas in the historical single-frame oral cavity image data in the training data set to obtain a marked training data set;
a training module, configured to input the marked oral cavity image data in the marked training data set into a constructed convolutional neural network model for training, and to sequentially verify and test the trained convolutional neural network model by using the verification data set and the test data set to obtain a converged convolutional neural network model;
and a gum separation module, configured to acquire real-time single-frame oral cavity image data collected in real time from the oral cavity of a user by the three-dimensional intraoral scanner, to input the real-time single-frame oral cavity image data into the converged convolutional neural network model for gum separation processing, and to output a gum separation result of the real-time single-frame oral cavity image data.
9. A background server comprising a processor and a memory, wherein the processor runs a computer program or code stored in the memory to implement the method of implementing single frame data gum separation using deep learning as claimed in any one of claims 1 to 7.
10. A computer readable storage medium storing a computer program or code which, when executed by a processor, implements the method for implementing single frame data gum separation using deep learning as claimed in any one of claims 1 to 7.
CN202310998592.XA 2023-08-09 2023-08-09 Method and related device for realizing single frame data gum separation by deep learning Pending CN116993702A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310998592.XA CN116993702A (en) 2023-08-09 2023-08-09 Method and related device for realizing single frame data gum separation by deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310998592.XA CN116993702A (en) 2023-08-09 2023-08-09 Method and related device for realizing single frame data gum separation by deep learning

Publications (1)

Publication Number Publication Date
CN116993702A true CN116993702A (en) 2023-11-03

Family

ID=88524551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310998592.XA Pending CN116993702A (en) 2023-08-09 2023-08-09 Method and related device for realizing single frame data gum separation by deep learning

Country Status (1)

Country Link
CN (1) CN116993702A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628222A (en) * 2021-08-05 2021-11-09 杭州隐捷适生物科技有限公司 3D tooth segmentation and classification method based on deep learning
CN114004970A (en) * 2021-11-09 2022-02-01 粟海信息科技(苏州)有限公司 Tooth area detection method, device, equipment and storage medium
KR20220017672A (en) * 2020-08-05 2022-02-14 주식회사 라온메디 Apparatus and method for tooth segmentation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20220017672A (en) * 2020-08-05 2022-02-14 주식회사 라온메디 Apparatus and method for tooth segmentation
CN113628222A (en) * 2021-08-05 2021-11-09 杭州隐捷适生物科技有限公司 3D tooth segmentation and classification method based on deep learning
CN114004970A (en) * 2021-11-09 2022-02-01 粟海信息科技(苏州)有限公司 Tooth area detection method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG Xudong et al.: "RGB-D multi-class instance segmentation based on a dual-pyramid feature fusion network", Control and Decision, vol. 35, no. 7, 31 July 2020 (2020-07-31), pages 1 *
ZHONG Xiajiao et al.: "3D point cloud tooth-jaw segmentation and identity recognition based on RandLA-Net", Journal of Computer Applications, vol. 43, no. 1, 30 June 2023 (2023-06-30), pages 1 - 3 *

Similar Documents

Publication Publication Date Title
US11810271B2 (en) Domain specific image quality assessment
US11291532B2 (en) Dental CAD automation using deep learning
CN109871845B (en) Certificate image extraction method and terminal equipment
CN108921942B (en) Method and device for 2D (two-dimensional) conversion of image into 3D (three-dimensional)
CN108198177A (en) Image acquiring method, device, terminal and storage medium
US20210097730A1 (en) Face Image Generation With Pose And Expression Control
CN110310247A (en) Image processing method, device, terminal and computer readable storage medium
CN110400254A (en) A kind of lipstick examination cosmetic method and device
CN112836625A (en) Face living body detection method and device and electronic equipment
CN108665475A (en) Neural metwork training, image processing method, device, storage medium and electronic equipment
CN110991611A (en) Full convolution neural network based on image segmentation
CN111079507A (en) Behavior recognition method and device, computer device and readable storage medium
CN110276831A (en) Constructing method and device, equipment, the computer readable storage medium of threedimensional model
CN110782448A (en) Rendered image evaluation method and device
CN109523558A (en) A kind of portrait dividing method and system
CN115471886A (en) Digital person generation method and system
CN115270184A (en) Video desensitization method, vehicle video desensitization method and vehicle-mounted processing system
CN109815100B (en) Behavior monitoring method for CABAO software by utilizing image contrast analysis
CN116993702A (en) Method and related device for realizing single frame data gum separation by deep learning
CN108665455B (en) Method and device for evaluating image significance prediction result
CN114187668B (en) Face silence living body detection method and device based on positive sample training
CN111079528A (en) Primitive drawing checking method and system based on deep learning
CN109978832A (en) A kind of twisted pair stranding distance detection method based on edge reconstruction
CN104700416A (en) Image segmentation threshold determination method based on visual analysis
CN112561865B (en) Method, system and storage medium for training detection model of constant molar position

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination