CN115661531A - Image-text-based early oral cancer identification method, device, equipment and storage medium - Google Patents

Image-text-based early oral cancer identification method, device, equipment and storage medium

Info

Publication number
CN115661531A
CN115661531A
Authority
CN
China
Prior art keywords
oral
lesion
texture
image
oral cancer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202211342510.8A
Other languages
Chinese (zh)
Inventor
龙瀛
张海林
周波
易亮
谭浩蕾
周晓
张柏城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Cancer Hospital
Original Assignee
Hunan Cancer Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Cancer Hospital filed Critical Hunan Cancer Hospital
Priority to CN202211342510.8A priority Critical patent/CN115661531A/en
Publication of CN115661531A publication Critical patent/CN115661531A/en
Withdrawn legal-status Critical Current

Landscapes

  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention relates to artificial intelligence technology, and discloses an image-text-based early oral cancer identification method, device, equipment and storage medium. The method comprises the following steps: recognizing a user's oral cavity picture set by using the lesion recognition network of a pre-trained oral cancer recognition model to obtain a lesion type set; when the lesion type set is not an empty set, generating an inquiry report corresponding to the lesion type set, acquiring the user's symptom entries according to the inquiry report, and quantitatively encoding them to obtain a text symptom information set; performing tissue texture feature recognition on the oral cavity picture set by using the texture recognition network of the oral cancer recognition model to obtain a texture feature set; and performing oral cancer classification and recognition on the lesion type set, the texture feature set and the text symptom information set by using the full-connection-layer network of the oral cancer recognition model to obtain an oral cancer recognition result. The invention can identify oral cancer through combined picture and text recognition.

Description

Image-text-based early oral cancer identification method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an early oral cancer identification method, device and equipment based on pictures and texts and a computer readable storage medium.
Background
Oral cancer is a general term for malignant tumors occurring in the oral cavity, most of which are squamous cell carcinomas. Its occurrence is promoted by long-term abrasion and irritation from foreign matter, such as heavy tobacco and alcohol use, poor oral hygiene, and ill-fitting dentures.
Oral cancer has a very high cure rate in its early stages, but because its early symptoms are subtle and easily overlooked, it has often already progressed to a late stage by the time it is discovered. Raising awareness of oral cancer and screening for it early is therefore of great significance.
Disclosure of Invention
The invention provides an image-text-based early oral cancer identification method, device, equipment and storage medium, whose main purpose is to identify subtle oral lesions through combined image and text recognition so as to discover oral cancer as early as possible.
In order to achieve the above object, the present invention provides an image-text-based method for identifying early oral cancer, comprising:
acquiring an oral cavity picture set, and carrying out lesion feature recognition on the oral cavity picture set by using a lesion recognition network of a pre-trained oral cancer recognition model to obtain a lesion type set;
judging whether the lesion type set is an empty set;
when the lesion type set is an empty set, generating pre-constructed oral health prompt information;
when the lesion type set is not an empty set, generating an inquiry report according to a pre-constructed diagnosis problem database and the lesion type set, acquiring a symptom entry input by a user according to the inquiry report, and carrying out quantitative coding on the symptom entry to obtain a text symptom information set;
performing tissue image cutting operation on the oral cavity picture set by using a texture recognition network of the oral cavity cancer recognition model to obtain each tissue area image, and recognizing tissue texture characteristics of each tissue area image to obtain a texture characteristic set;
and carrying out oral cancer classification and identification operation on the lesion type set, the texture feature set and the text symptom information set by utilizing a full-connection layer network of the oral cancer identification model to obtain an oral cancer identification result.
Optionally, the identifying tissue texture features of the tissue region images to obtain a texture feature set includes:
according to a preset cleaning strategy, carrying out formatting operation on the size of each tissue region image to obtain a formatted image set, carrying out Gaussian filtering processing on the formatted image set to obtain a noise reduction image set, carrying out contrast enhancement operation on the noise reduction image, and carrying out binarization operation on a contrast enhancement result to obtain a texture image set;
performing texture shape recognition on the texture image set by using the texture recognition network to obtain texture shape features, performing texture cutting operation on each tissue region image according to the texture shape features to obtain a tissue texture image set, and recognizing color features corresponding to each texture in the tissue texture image set;
and splicing the color features and the texture shape features according to the corresponding relation of each texture to obtain a texture feature set.
Optionally, the generating an inquiry report according to the pre-constructed diagnosis question database and the lesion type set includes:
inquiring a pre-constructed diagnosis problem database by using the lesion type set to obtain a diagnosis problem set;
automatically generating a report form for the set of the diagnosis problems by using a preset template to obtain a report form document;
and connecting the report document with the full connection layer of the oral cancer identification model by using a pre-constructed data coding quantization channel, and visually outputting the connected report document to obtain an inquiry report.
Optionally, the performing, by using the full-connection-layer network of the oral cancer recognition model, an oral cancer classification and recognition operation on the lesion type set, the texture feature set, and the text symptom information set to obtain an oral cancer recognition result includes:
carrying out symptom feature recognition on the lesion type set, the texture feature set and the text symptom information set by using the full-connection-layer network of the oral cancer recognition model to obtain an oral symptom set;
and calculating the incidence probability of the oral cancer according to the oral symptom set by using a naive Bayesian classification algorithm, and inquiring a preset alarm interval corresponding to the incidence probability to obtain an oral cancer identification result.
Optionally, the performing, by using the texture recognition network of the oral cancer recognition model, a tissue image cutting operation on the oral cavity image set to obtain images of various tissue regions includes:
carrying out target tissue identification operation on the oral cavity picture set by utilizing a texture identification network of the oral cavity cancer identification model, and intercepting each target tissue by utilizing an intercepting function to obtain a target tissue image;
and intercepting and marking images in the preset range interval of the surface and the periphery of the target tissue according to the physiological structure relationship to obtain images of all tissue areas.
Optionally, before the lesion feature recognition is performed on the oral image set by using the lesion recognition network of the pre-trained oral cancer recognition model, the method further includes:
acquiring an oral cancer identification model comprising a lesion identification network, a texture identification network and a full connection layer, and a pre-constructed oral image sample set;
dividing the oral cavity image sample set into a training set and a testing set according to a preset dividing strategy;
performing multi-classification task loss configuration on a lesion recognition network and a texture recognition network of the oral cancer recognition model by using a cross entropy loss function according to a preset auxiliary training strategy;
sequentially extracting a group of oral cavity image samples from the training set and introducing the oral cavity image samples into the oral cavity cancer recognition model to obtain an oral cavity disease prediction result corresponding to the oral cavity image samples;
calculating a loss value between a real label of the oral cavity image sample and the oral cavity disease prediction result by utilizing a two-classification cross entropy loss function;
minimizing the loss value to obtain a model parameter when the loss value is minimum, and reversely updating the internal parameter of the oral cancer identification model by using the model parameter to obtain an updated oral cancer identification model;
judging whether all the oral cavity image samples in the training set execute training;
when the oral cavity image samples in the training set are not completely trained, the steps of sequentially extracting a group of oral cavity image samples from the training set and introducing the oral cavity image samples into the oral cavity cancer identification model are executed, and iterative optimization is carried out on the updated oral cavity cancer identification model;
when all oral cavity image samples in the training set are trained, obtaining a final optimized updated oral cavity cancer identification model;
calculating an accuracy of the updated oral cancer identification model using the test set;
judging whether the accuracy is smaller than a preset qualified threshold value;
when the accuracy is smaller than the qualified threshold, returning to the step of dividing the oral cavity image sample set into a training set and a testing set according to a preset dividing strategy, and acquiring the training set and the testing set again to train the updated oral cavity cancer recognition model;
and when the accuracy is greater than or equal to the qualified threshold, taking the finally optimized updated oral cancer recognition model as a trained oral cancer recognition model.
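The split / train / test / re-split loop in the steps above can be sketched with a toy stand-in model. This is an illustrative assumption, not the patent's networks: a one-parameter logistic unit replaces the oral cancer recognition model, and the 80/20 split, learning rate and epoch count are invented values; only the binary cross-entropy loss and the re-split-until-qualified control flow mirror the description.

```python
import math
import random

def bce_loss(p, y):
    """Binary cross-entropy between predicted probability p and label y in {0, 1}."""
    p = min(max(p, 1e-7), 1.0 - 1e-7)
    return -(y * math.log(p) + (1 - y) * math.log(1.0 - p))

class ToyModel:
    """One-parameter logistic unit standing in for the oral cancer recognition model."""
    def __init__(self):
        self.w, self.b = 0.0, 0.0
    def forward(self, x):
        return 1.0 / (1.0 + math.exp(-(self.w * x + self.b)))
    def step(self, x, y, lr=0.5):
        p = self.forward(x)
        self.w -= lr * (p - y) * x   # gradient of BCE w.r.t. w (reverse parameter update)
        self.b -= lr * (p - y)

def train_until_qualified(samples, threshold=0.9, max_rounds=10, epochs=50):
    """Split, train on every training sample, test on the held-out set; if accuracy
    is below the qualified threshold, re-split and retrain, as in the steps above."""
    for _ in range(max_rounds):
        random.shuffle(samples)
        cut = int(0.8 * len(samples))              # preset dividing strategy: 80/20
        train, test = samples[:cut], samples[cut:]
        model = ToyModel()
        for _ in range(epochs):                    # iterative optimization
            for x, y in train:
                _ = bce_loss(model.forward(x), y)  # the loss value being minimized
                model.step(x, y)
        acc = sum((model.forward(x) > 0.5) == bool(y) for x, y in test) / len(test)
        if acc >= threshold:                       # qualified: keep this model
            return model, acc
    return model, acc
</imports>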
Optionally, the performing lesion feature recognition on the oral image set by using a lesion recognition network of the pre-trained oral cancer recognition model to obtain a lesion type set includes:
carrying out multilayer convolution operation on the oral cavity picture set by utilizing a lesion recognition network of a pre-trained oral cancer recognition model to obtain a convolution matrix set, and carrying out average pooling operation on the convolution matrix set to obtain a pooling matrix set;
flattening each pooling matrix in the pooling matrix set to obtain an image characteristic sequence set;
and carrying out lesion feature identification operation on the image feature sequence set by utilizing a sub full connection layer in the lesion identification network to obtain a lesion type set.
In order to solve the above problems, the present invention also provides an image-text-based early oral cancer identification device, comprising:
the lesion monitoring module is used for acquiring an oral picture set, performing lesion feature recognition on the oral picture set by using a lesion recognition network of a pre-trained oral cancer recognition model to obtain a lesion type set, judging whether the lesion type set is an empty set, and generating pre-constructed oral health prompt information when the lesion type set is an empty set;
the inquiry symptom acquisition module is used for generating an inquiry report according to a pre-constructed diagnosis problem database and the lesion type set when the lesion type set is not an empty set, acquiring a symptom vocabulary entry input by a user according to the inquiry report, and carrying out quantitative coding on the symptom vocabulary entry to obtain a text symptom information set;
the oral cavity texture feature recognition module is used for performing tissue image cutting operation on the oral cavity picture set by utilizing a texture recognition network of the oral cavity cancer recognition model to obtain each tissue area image, and recognizing the tissue texture features of each tissue area image to obtain a texture feature set;
and the oral cancer identification module is used for carrying out oral cancer classification and identification operation on the lesion type set, the texture feature set and the text symptom information set by utilizing a full-connection-layer network of the oral cancer identification model to obtain an oral cancer identification result.
In order to solve the above problem, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executed by the at least one processor to enable the at least one processor to perform the image-text-based early oral cancer identification method described above.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, in which at least one computer program is stored, the at least one computer program being executed by a processor in an electronic device to implement the above-mentioned image-text based early oral cancer identification method.
The embodiment of the invention identifies the lesion type set in the oral picture set and generates an inquiry report matching that lesion type set, thereby acquiring a text symptom information set from the user beyond what the images contain. The texture feature set in the oral cavity is then identified through the texture recognition network in the model; at the early stage of oral cancer, lesion features are few and inconspicuous, so analysis of the texture features of the oral mucosa provides an important reference. Finally, the full-connection-layer network of the oral cancer recognition model performs oral cancer classification and recognition on the lesion type set, the texture feature set and the text symptom information set to obtain an oral cancer recognition result, realizing mixed image-text feature analysis and increasing oral cancer recognition accuracy. Therefore, the image-text-based early oral cancer identification method, device, equipment and storage medium provided by the embodiments of the invention can identify subtle oral lesions through combined image and text recognition, and achieve the effect of discovering oral cancer as early as possible.
Drawings
Fig. 1 is a schematic flowchart of an image-text-based early oral cancer identification method according to an embodiment of the present invention;
fig. 2 is a detailed flowchart illustrating a step of the image-text based early oral cancer identification method according to an embodiment of the present invention;
fig. 3 is a detailed flowchart illustrating a step of the image-text based early oral cancer identification method according to an embodiment of the present invention;
fig. 4 is a detailed flowchart illustrating a step of the image-text based early oral cancer identification method according to an embodiment of the present invention;
fig. 5 is a detailed flowchart illustrating a step of the image-text based early oral cancer identification method according to an embodiment of the present invention;
FIG. 6 is a functional block diagram of an early oral cancer identification device based on graphics and text according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device for implementing the image-text-based early oral cancer identification method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The embodiment of the application provides an image-text-based early oral cancer identification method. In the embodiment of the present application, the execution subject of the image-text-based early oral cancer identification method includes, but is not limited to, at least one electronic device, such as a server or a terminal, that can be configured to execute the method provided by the embodiment of the present application. In other words, the image-text-based early oral cancer identification method may be executed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes but is not limited to a single server, a server cluster, a cloud server or a cloud server cluster, and the like. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
Fig. 1 is a schematic flowchart of an image-text-based early oral cancer identification method according to an embodiment of the present invention. In this embodiment, the image-text-based early oral cancer identification method comprises:
s1, obtaining an oral cavity picture set, and carrying out lesion feature recognition on the oral cavity picture set by using a lesion recognition network of a pre-trained oral cancer recognition model to obtain a lesion type set.
In the embodiment of the invention, the oral cavity picture set can be obtained by photographing the oral cavity from different angles with a high-definition camera, comprising at least upper, lower, left, right and tongue-base pictures.
Further, the oral cancer recognition model comprises a lesion recognition network, a two-classification neural network, a texture recognition network and a full-connection-layer network. The lesion recognition network is used for recognizing lesion sites in the oral cavity with obvious characteristics such as ulcerative erosion, wound bleeding, and red, swollen tumors. The two-classification neural network is used for judging whether a lesion exists, generating the character 0 if not and the character 1 if so; when the character 1 appears, the network operates normally, and when the character 0 appears, the subsequent texture recognition network and full-connection-layer network are skipped and the pre-constructed oral health prompt information is output directly. The texture recognition network is used for recognizing fine features in the oral cavity, such as white spots, erythema, mucosal fibrosis and tongue coating characteristics. The full-connection-layer network contains feature recognition and a naive Bayes classification algorithm, and is used for recognizing the names of the various features and calculating the probability of oral cancer occurring when those features appear.
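The gating flow just described can be sketched as follows. This is a minimal illustration, not the patent's implementation: the four networks are stand-in callables, and the names `OralCancerModel`, `identify` and `ORAL_HEALTH_TIPS` are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

ORAL_HEALTH_TIPS = "No lesion found; keep up regular oral hygiene."  # placeholder prompt

@dataclass
class OralCancerModel:
    lesion_net: Callable   # obvious lesions: ulcerative erosion, bleeding, swelling
    binary_net: Callable   # 1 = lesion present, 0 = healthy (skip the rest)
    texture_net: Callable  # fine features: white spots, erythema, fibrosis, tongue coating
    fc_net: Callable       # full-connection layer + naive Bayes classifier

def identify(pictures: List, model: OralCancerModel, ask_user: Callable) -> dict:
    # Character 0 from the two-classification network: output the health prompt
    # directly, skipping the texture network and the full-connection-layer network.
    if model.binary_net(pictures) == 0:
        return {"status": "healthy", "advice": ORAL_HEALTH_TIPS}
    lesion_types = model.lesion_net(pictures)
    symptom_info = ask_user(lesion_types)        # inquiry report -> quantized text symptoms
    texture_feats = model.texture_net(pictures)
    return {"status": "screened",
            "result": model.fc_net(lesion_types, texture_feats, symptom_info)}
```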
In detail, in the embodiment of the present invention, the performing lesion feature recognition on the oral cavity image set by using a lesion recognition network of a pre-trained oral cancer recognition model to obtain a lesion type set includes:
carrying out multilayer convolution operation on the oral cavity picture set by utilizing a lesion recognition network of a pre-trained oral cancer recognition model to obtain a convolution matrix set, and carrying out average pooling operation on the convolution matrix set to obtain a pooling matrix set;
flattening each pooling matrix in the pooling matrix set to obtain an image characteristic sequence set;
and carrying out lesion feature identification operation on the image feature sequence set by utilizing a sub full connection layer in the lesion identification network to obtain a lesion type set.
In the embodiment of the invention, the convolution kernel in the lesion identification network is utilized to perform convolution operation on the oral cavity picture set to obtain a convolution matrix set, and then the average pooling operation and the flattening operation are performed on the convolution matrix set through a pooling layer and a flatten layer to obtain a feature sequence set, wherein the convolution kernel is used for extracting features, and the pooling layer and the flatten layer are used for reducing the dimension of the features under the condition of not influencing feature values. And then combining all lesion features through a sub-full connection layer in the lesion recognition network to obtain a combined feature set, performing multi-classification operation on all combined features, and outputting the lesion type with the highest score in a classification result to obtain a lesion type set.
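The conv → average-pool → flatten → sub-full-connection pipeline just described can be sketched in NumPy as follows. This is a toy illustration under assumed shapes, not the trained network; all function names and dimensions are hypothetical.

```python
import numpy as np

def conv2d_valid(x, kernel):
    """'Valid' 2-D convolution of an (H, W) image with a (kh, kw) kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def avg_pool2d(x, k):
    """Average-pool an (H, W) feature map with a k x k window and stride k."""
    h, w = (x.shape[0] // k) * k, (x.shape[1] // k) * k
    return x[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def lesion_type_scores(image, kernels, fc_weight, fc_bias, pool_k=2):
    """conv -> ReLU -> average pool -> flatten -> sub fully connected layer,
    returning one score per lesion type (highest score wins)."""
    maps = [avg_pool2d(np.maximum(conv2d_valid(image, k), 0.0), pool_k) for k in kernels]
    features = np.concatenate([m.ravel() for m in maps])  # flatten layer
    return fc_weight @ features + fc_bias                 # per-lesion-type scores
```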
S2, judging whether the lesion type set is an empty set;
and S3, generating pre-constructed oral health prompt information when the lesion type set is an empty set.
In the embodiment of the invention, when the oral cavity of the user has no lesion type, the oral cavity condition of the user is healthy, and the oral health prompt information is generated without subsequent complex identification operation.
And S4, when the lesion type set is not an empty set, generating an inquiry report according to a pre-constructed diagnosis problem database and the lesion type set, acquiring a symptom vocabulary entry input by a user according to the inquiry report, and carrying out quantitative coding on the symptom vocabulary entry to obtain a text symptom information set.
It should be noted that the oral cavity is tissue in direct contact with the outside world and is easily damaged by various external stimuli, so oral cancer cannot be judged by direct image recognition alone; information that cannot be obtained from images, namely the user's inquiry information, must also be acquired.
The diagnosis question database comprises oral symptom questions that cannot be obtained intuitively from images, such as the duration of an ulcer, whether unexplained numbness or burning and dry sensations occur, whether speaking or swallowing is difficult, and whether pain is obvious.
In detail, referring to fig. 2, in the embodiment of the present invention, the generating an inquiry report according to the pre-constructed diagnosis question database and the lesion type set includes:
s41, inquiring a pre-constructed diagnosis problem database by using the lesion type set to obtain a diagnosis problem set;
s42, automatically generating a report form for the diagnosis problem set by using a preset template to obtain a report form document;
s43, connecting the report form document with a full connection layer of the oral cancer identification model by using a pre-constructed data coding quantization channel, and visually outputting the connected report form document to obtain an inquiry report form.
In the embodiment of the present invention, the diagnosis question database contains many questions, and it is impractical to ask all of them, so the inquiry is tailored to the scenarios in the lesion type set. For example, if an ulcer occurs, the duration of the ulcer can be asked; if only swelling appears, whether there is significant pain can be asked, and the ulcer duration need not be.
According to the embodiment of the invention, the diagnosis problem set can be obtained by inquiring the key words of the lesion type set, and then the diagnosis problems are directly and automatically generated into the report form through the preset VB template, wherein the report form needs to obtain the reply information of the user. And finally, outputting the report form to the user through a visualization tool to form an inquiry report which can be operated by the user.
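The keyword query over the question database might look like the following sketch. The question texts and the `DIAGNOSIS_QUESTIONS` mapping are invented examples; report generation via the VB template and the visualization output are elided.

```python
# Hypothetical diagnosis question database keyed by lesion-type keyword.
DIAGNOSIS_QUESTIONS = {
    "ulcer":    ["How long has the ulcer lasted?",
                 "Is there any unexplained numbness or burning sensation?"],
    "swelling": ["Is there significant pain?",
                 "Is speaking or swallowing difficult?"],
}

def build_inquiry_report(lesion_types):
    """Query the question database by lesion-type keyword, deduplicating while
    preserving order, and return a fillable report structure."""
    questions, seen = [], set()
    for lesion in lesion_types:
        for q in DIAGNOSIS_QUESTIONS.get(lesion, []):
            if q not in seen:
                seen.add(q)
                questions.append(q)
    return {"questions": questions, "answers": [None] * len(questions)}
```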
And S5, performing tissue image cutting operation on the oral cavity picture set by using a texture recognition network of the oral cavity cancer recognition model to obtain each tissue area image, and recognizing tissue texture features of each tissue area image to obtain a texture feature set.
In an embodiment of the present invention, the tissue region image includes a tongue coating, a tongue bottom, a palate, inner walls of two cheeks of a mouth, and the like.
In detail, referring to fig. 3, in an embodiment of the present invention, the performing a tissue image cutting operation on the oral cavity image set by using the texture recognition network of the oral cancer recognition model to obtain each tissue region image includes:
s501, carrying out target tissue identification operation on the oral cavity picture set by using a texture identification network of the oral cavity cancer identification model, and intercepting each target tissue by using an intercepting function to obtain a target tissue image;
s502, according to the physiological structure relationship, images in the preset range interval of the surface and the periphery of the target tissue are intercepted and marked in the target tissue image, and images of all tissue areas are obtained.
In the embodiment of the invention, the tongue coating and tongue base images can be identified quickly, but features such as the two sides of the throat, the palate and the inner walls of the cheeks are not easy to identify and require a reference object for locating. Therefore, the embodiment of the invention first identifies tissues such as the teeth, throat and tongue through tissue recognition, and then obtains images of the throat sides, the palate, the inner cheek walls and so on according to the physiological structure relationship, yielding the tissue region images.
The interception function is a function used for automatically framing the target tissue through a rectangular frame in the model.
Further, referring to fig. 4, in an embodiment of the present invention, the identifying tissue texture features of the images of the tissue regions to obtain a texture feature set includes:
s511, carrying out formatting operation on the size of each tissue region image according to a preset cleaning strategy to obtain a formatted image set, carrying out Gaussian filtering processing on the formatted image set to obtain a noise reduction image set, carrying out contrast enhancement operation on the noise reduction image, and carrying out binarization operation on a contrast enhancement result to obtain a texture image set;
s512, performing texture shape recognition on the texture image set by using the texture recognition network to obtain texture shape features, performing texture cutting operation on each tissue region image according to the texture shape features to obtain a tissue texture image set, and recognizing color features corresponding to each texture in the tissue texture image set;
s513, according to the corresponding relation of each texture, the color features and the texture shape features are spliced to obtain a texture feature set.
It should be understood that the texture features of each tissue region in the oral cavity image are not obvious. To increase recognition accuracy, the embodiment of the invention therefore uniformly resizes each tissue region image to 200 x 200 pixels for recognition, reduces noise in the image through Gaussian filtering to obtain a noise-reduced image, then performs contrast enhancement to make the tissue texture more distinguishable, and finally performs binarization to obtain a texture image set with clear stripes and gray levels.
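The preprocessing chain (with resizing assumed already done, then Gaussian filtering → contrast enhancement → binarization) can be sketched in NumPy as follows. The kernel radius, sigma, min-max stretch and mean-threshold binarization are illustrative assumptions, not values from the patent.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian filter (the noise-reduction step)."""
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius)
    padded = np.pad(img.astype(float), radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, rows)

def preprocess_texture(tile):
    """Blur -> min-max contrast stretch -> binarize a tissue-region tile
    (assumed already resized to 200 x 200)."""
    blurred = gaussian_blur(tile, sigma=1.0)
    lo, hi = blurred.min(), blurred.max()
    stretched = (blurred - lo) / (hi - lo + 1e-9)           # contrast enhancement
    return (stretched > stretched.mean()).astype(np.uint8)  # simple mean threshold
```

In practice a library call such as OpenCV's Gaussian blur and Otsu thresholding would replace these hand-rolled steps.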
Further, the embodiment of the invention performs feature extraction through convolution and pooling to obtain texture shape features, then cuts each tissue region image out of the color picture through the interception function, and queries the color features of each texture feature, such as erythema, white patches and tongue coating color, so as to increase the completeness of the texture features.
And S6, carrying out oral cancer classification and identification operation on the lesion type set, the texture feature set and the text symptom information set by utilizing a full-connection layer network of the oral cancer identification model to obtain an oral cancer identification result.
In detail, in an embodiment of the present invention, the performing, by using the full-connection-layer network of the oral cancer recognition model, an oral cancer classification and recognition operation on the lesion type set, the texture feature set, and the text symptom information set to obtain an oral cancer recognition result includes:
carrying out symptom feature recognition on the lesion type set, the texture feature set and the text symptom information set by utilizing a full-connection layer network of the oral cancer recognition model to obtain an oral symptom set;
and calculating the incidence probability of the oral cancer according to the oral symptom set by using a naive Bayesian classification algorithm, and inquiring a preset alarm interval corresponding to the incidence probability to obtain an oral cancer identification result.
In the embodiment of the invention, the full-connection layer is utilized to carry out symptom feature recognition on the lesion type set, the texture feature set and the text symptom information set to obtain various oral symptoms, such as red inflammatory regions, white smooth scaly plaques, oral burning, dysphagia and the like.
A naive Bayes classification algorithm then calculates the probability of oral cancer when one or more of these oral symptoms occur. Naive Bayes is a fast classification algorithm, usually suited to data sets of very high dimensionality. On the assumption that the occurrences of ulcer, redness and swelling, oral burning, dysphagia and the like are mutually independent, sample statistics yield the probability of each oral symptom occurring and the probability of oral cancer given each oral symptom, from which the incidence probability of oral cancer is calculated. The oral cancer identification result is then obtained through preset alarm intervals, for example, 0-0.4 yields a non-oral-cancer identification result, 0.4-0.7 a suspected oral cancer identification result, and 0.7-1 a confirmed oral cancer identification result.
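A minimal sketch of this naive Bayes computation and the alarm-interval lookup follows. The per-symptom conditional probabilities and the 0.05 prior are illustrative placeholders only; the patent derives such statistics from samples and does not publish the numbers:

```python
import math

# Hypothetical per-symptom statistics: P(symptom | cancer), P(symptom | no cancer),
# and the prior P(cancer). All values are illustrative, not from the patent.
P_SYM_GIVEN_CANCER = {'ulcer': 0.8, 'erythema': 0.6, 'burning': 0.5, 'dysphagia': 0.4}
P_SYM_GIVEN_HEALTHY = {'ulcer': 0.2, 'erythema': 0.1, 'burning': 0.15, 'dysphagia': 0.05}
P_CANCER = 0.05

def cancer_probability(observed):
    """Naive Bayes posterior P(cancer | observed symptoms), assuming the
    symptoms are conditionally independent as the embodiment states."""
    log_c = math.log(P_CANCER)
    log_h = math.log(1 - P_CANCER)
    for s in observed:
        log_c += math.log(P_SYM_GIVEN_CANCER[s])
        log_h += math.log(P_SYM_GIVEN_HEALTHY[s])
    pc, ph = math.exp(log_c), math.exp(log_h)
    return pc / (pc + ph)

def alarm_level(p):
    """Map the posterior onto the preset alarm intervals from the text."""
    if p < 0.4:
        return 'non-oral-cancer'
    if p < 0.7:
        return 'suspected oral cancer'
    return 'confirmed oral cancer'
```

With no symptoms observed the posterior reduces to the prior; each observed symptom multiplies in its likelihood ratio, which is exactly the independence assumption the embodiment relies on.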
Further, referring to fig. 5, in an embodiment of the present invention, before performing lesion feature identification on the oral image set by using the lesion recognition network of the pre-trained oral cancer recognition model, the method further includes:
s601, obtaining an oral cancer identification model containing a lesion identification network, a texture identification network and a full-connection layer network, and a pre-constructed oral image sample set;
s602, dividing the oral cavity image sample set into a training set and a testing set according to a preset dividing strategy;
s603, performing multi-classification task loss configuration on the lesion recognition network and the texture recognition network of the oral cancer recognition model by using a cross entropy loss function according to a preset auxiliary training strategy;
s604, sequentially extracting a group of oral cavity image samples from the training set and introducing the oral cavity image samples into the oral cavity cancer recognition model to obtain an oral cavity disease prediction result corresponding to the oral cavity image samples;
s605, calculating a loss value between a real label of the oral cavity image sample and the oral cavity disease prediction result by using a two-classification cross entropy loss function;
s606, minimizing the loss value to obtain a model parameter when the loss value is minimum, and reversely updating the internal parameter of the oral cancer identification model by using the model parameter to obtain an updated oral cancer identification model;
s607, judging whether all the oral cavity image samples in the training set execute training;
when not all oral image samples in the training set have been used for training, return to step S604 and iteratively optimize the updated oral cancer identification model;
when all the oral cavity image samples in the training set are trained, S608, obtaining a finally optimized updated oral cavity cancer recognition model;
s609, calculating the accuracy of the updated oral cancer identification model by using the test set;
s6010, judging whether the accuracy is smaller than a preset qualified threshold value;
when the accuracy is smaller than the qualified threshold, return to step S602 to re-divide the training set and the test set and train the updated oral cancer identification model again;
and when the accuracy is greater than or equal to the qualified threshold value, S6011, using the finally optimized updated oral cancer recognition model as a trained oral cancer recognition model.
The dividing strategy is to randomly divide the oral image samples into a training set and a test set according to a preset ratio, wherein training set : test set = 7:3.
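The 7:3 random division could be sketched as follows; the fixed seed is an assumption added so the split is reproducible:

```python
import random

def split_samples(samples, train_ratio=0.7, seed=42):
    """Randomly partition samples into a train/test split at the given ratio
    (7:3 per the dividing strategy above)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```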
In the embodiment of the invention, a binary cross entropy loss function controls the overall recognition result of the model, and the model is trained by gradient descent. After the entire training set has participated in training, the accuracy of the updated oral cancer recognition model is verified with the test set; when the accuracy does not pass, the test set and training set are redistributed and training is repeated until an oral cancer recognition model with qualified accuracy is obtained.
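The binary cross entropy loss named here is, per sample, -(y log p + (1 - y) log(1 - p)), averaged over the batch; a minimal sketch:

```python
import math

def bce_loss(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy averaged over a batch:
    -mean(y * log(p) + (1 - y) * log(1 - p))."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)
```

A deep learning framework's built-in binary cross entropy would normally be used instead; the sketch just makes the loss that "controls the overall recognition result" explicit.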
In addition, during training, the embodiment of the invention also configures multi-classification task losses according to a preset auxiliary training strategy. Auxiliary training adds an auxiliary loss to the final layers of the lesion recognition network and the texture recognition network: the output of each such layer is taken directly and its loss value is calculated independently, the cross entropy loss functions in the two networks being multi-classification loss functions. The auxiliary losses are applied only during training and do not appear in the output; at inference, only the final full-connection layer outputs the processing result. Adding auxiliary losses during training accelerates convergence, strengthens supervision, and enhances gradient back-propagation, giving the oral cancer identification model better training efficiency.
By identifying the lesion type set in the oral picture set, the embodiment of the invention generates an inquiry report matching that set, thereby acquiring a text symptom information set from the user beyond the image. A texture feature set in the oral cavity is then identified through the texture recognition network of the model; since lesion features are few and inconspicuous in the early stage of oral cancer, analysis of oral mucosa texture features provides an important reference. Finally, the full-connection layer network of the oral cancer identification model performs an oral cancer classification and identification operation on the lesion type set, the texture feature set, and the text symptom information set to obtain an oral cancer identification result, realizing mixed image-text feature analysis and increasing oral cancer identification accuracy. The image-text-based early oral cancer identification method provided by the embodiment of the invention can therefore identify subtle oral lesions through image-text recognition, achieving the effect of detecting oral cancer as early as possible.
Fig. 6 is a functional block diagram of an image-text-based early oral cancer identification device according to an embodiment of the present invention.
The image-text based early oral cancer identification device 100 can be installed in an electronic device. According to the realized functions, the image-text based early oral cancer identification device 100 can comprise a lesion monitoring module 101, an inquiry symptom obtaining module 102, an oral texture feature identification module 103 and an oral cancer identification module 104. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the lesion monitoring module 101 is configured to obtain an oral cavity image set, perform lesion feature recognition on the oral cavity image set by using a lesion recognition network of a pre-trained oral cancer recognition model to obtain a lesion type set, determine whether the type of the lesion set is an empty set, and generate pre-constructed oral health prompt information when the type of the lesion set is an empty set;
the inquiry symptom obtaining module 102 is configured to, when the lesion type set is not an empty set, generate an inquiry report according to a pre-constructed diagnosis problem database and the lesion type set, obtain a symptom entry input by a user according to the inquiry report, and perform quantization coding on the symptom entry to obtain a text symptom information set;
the oral cavity texture feature recognition module 103 is configured to perform tissue image cutting operation on the oral cavity image set by using a texture recognition network of the oral cavity cancer recognition model to obtain each tissue area image, and recognize tissue texture features of each tissue area image to obtain a texture feature set;
the oral cancer identification module 104 is configured to perform oral cancer classification and identification operations on the lesion type set, the texture feature set, and the text symptom information set by using a full-link layer network of the oral cancer identification model, so as to obtain an oral cancer identification result.
In detail, when the modules in the image-text based early oral cancer identification device 100 according to the embodiment of the present application are used, the same technical means as the image-text based early oral cancer identification method described in fig. 1 to 5 are adopted, and the same technical effects can be produced, and the details are not repeated herein.
Fig. 7 is a schematic structural diagram of an electronic device 1 for implementing an image-text-based early oral cancer identification method according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program, such as an image-text-based early oral cancer identification program, stored in the memory 11 and executable on the processor 10.
In some embodiments, the processor 10 may be composed of an integrated circuit, for example a single packaged integrated circuit, or of a plurality of integrated circuits packaged with the same or different functions, and may include one or more Central Processing Units (CPUs), a microprocessor, a digital processing chip, a graphics processor, combinations of various control chips, and the like. The processor 10 is the control unit of the electronic device 1; it connects the various components of the whole electronic device using various interfaces and lines, and executes various functions of the electronic device and processes its data by running or executing programs or modules stored in the memory 11 (for example, executing the image-text-based early oral cancer identification program) and calling data stored in the memory 11.
The memory 11 includes at least one type of readable storage medium, including flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory), magnetic memory, magnetic disks, optical disks, and the like. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a removable hard disk of the electronic device. The memory 11 may also be an external storage device of the electronic device in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash Card provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 can be used not only for storing application software installed in the electronic device and various data, such as the code of the image-text-based early oral cancer identification program, but also for temporarily storing data that has been or will be output.
The communication bus 12 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
The communication interface 13 is used for communication between the electronic device 1 and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., WI-FI interface, bluetooth interface, etc.), which are typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a Display (Display), an input unit, such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable, among other things, for displaying information processed in the electronic device and for displaying a visualized user interface.
Fig. 7 only shows an electronic device with components, and it will be understood by a person skilled in the art that the structure shown in fig. 7 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The memory 11 of the electronic device 1 stores an image-text-based early oral cancer identification program, which is a combination of instructions that, when executed in the processor 10, can implement:
acquiring an oral cavity picture set, and carrying out lesion feature recognition on the oral cavity picture set by using a lesion recognition network of a pre-trained oral cancer recognition model to obtain a lesion type set;
judging whether the lesion type set is an empty set;
when the lesion type set is an empty set, generating pre-constructed oral health prompt information;
when the lesion type set is not an empty set, generating an inquiry report according to a pre-constructed diagnosis problem database and the lesion type set, acquiring a symptom entry input by a user according to the inquiry report, and carrying out quantitative coding on the symptom entry to obtain a text symptom information set;
performing tissue image cutting operation on the oral cavity picture set by using a texture recognition network of the oral cavity cancer recognition model to obtain each tissue area image, and recognizing tissue texture characteristics of each tissue area image to obtain a texture characteristic set;
and carrying out oral cancer classification and identification operation on the lesion type set, the texture feature set and the text symptom information set by utilizing a full-connection layer network of the oral cancer identification model to obtain an oral cancer identification result.
Specifically, the specific implementation method of the instruction by the processor 10 may refer to the description of the relevant steps in the embodiment corresponding to the drawings, which is not described herein again.
Further, the integrated modules/units of the electronic device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. The computer-readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying said computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
acquiring an oral cavity picture set, and carrying out lesion feature recognition on the oral cavity picture set by using a lesion recognition network of a pre-trained oral cancer recognition model to obtain a lesion type set;
judging whether the lesion type set is an empty set;
when the lesion type set is an empty set, generating pre-constructed oral health prompt information;
when the lesion type set is not an empty set, generating an inquiry report according to a pre-constructed diagnosis problem database and the lesion type set, acquiring a symptom entry input by a user according to the inquiry report, and carrying out quantitative coding on the symptom entry to obtain a text symptom information set;
performing tissue image cutting operation on the oral cavity picture set by utilizing a texture recognition network of the oral cavity cancer recognition model to obtain each tissue area image, and recognizing tissue texture characteristics of each tissue area image to obtain a texture characteristic set;
and carrying out oral cancer classification and identification operation on the lesion type set, the texture feature set and the text symptom information set by utilizing a full-connection layer network of the oral cancer identification model to obtain an oral cancer identification result.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks associated by cryptographic methods, where each data block contains information on a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The embodiment of the application can acquire and process related data based on artificial intelligence technology. Artificial Intelligence (AI) is a theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain the best results.
Furthermore, it will be obvious that the term "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the same, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made to the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. An image-text-based early oral cancer identification method, the method comprising:
acquiring an oral cavity picture set, and carrying out lesion feature recognition on the oral cavity picture set by using a lesion recognition network of a pre-trained oral cancer recognition model to obtain a lesion type set;
judging whether the lesion type set is an empty set;
when the lesion type set is an empty set, generating pre-constructed oral health prompt information;
when the lesion type set is not an empty set, generating an inquiry report according to a pre-constructed diagnosis problem database and the lesion type set, acquiring a symptom entry input by a user according to the inquiry report, and carrying out quantitative coding on the symptom entry to obtain a text symptom information set;
performing tissue image cutting operation on the oral cavity picture set by utilizing a texture recognition network of the oral cavity cancer recognition model to obtain each tissue area image, and recognizing tissue texture characteristics of each tissue area image to obtain a texture characteristic set;
and carrying out oral cancer classification and identification operation on the lesion type set, the texture feature set and the text symptom information set by utilizing a full-connection layer network of the oral cancer identification model to obtain an oral cancer identification result.
2. The image-text-based early oral cancer identification method according to claim 1, wherein identifying the tissue texture features of the respective tissue region images to obtain a texture feature set comprises:
formatting the size of each tissue region image according to a preset cleaning strategy to obtain a formatted image set, performing Gaussian filtering processing on the formatted image set to obtain a noise reduction image set, performing a contrast enhancement operation on the noise reduction image set, and performing a binarization operation on the contrast enhancement result to obtain a texture image set;
performing texture shape recognition on the texture image set by using the texture recognition network to obtain texture shape features, performing texture cutting operation on each tissue region image according to the texture shape features to obtain a tissue texture image set, and recognizing color features corresponding to each texture in the tissue texture image set;
and splicing the color features and the texture shape features according to the corresponding relation of each texture to obtain a texture feature set.
3. The image-text-based early oral cancer identification method according to claim 1, wherein generating an inquiry report according to the pre-constructed diagnosis question database and the lesion type set comprises:
inquiring a pre-constructed diagnosis problem database by using the lesion type set to obtain a diagnosis problem set;
utilizing a preset template to automatically generate a report of the diagnosis problem set to obtain a report document;
and connecting the report document with the full connection layer of the oral cancer identification model by using a pre-constructed data coding quantization channel, and visually outputting the connected report document to obtain an inquiry report.
4. The image-text-based early oral cancer identification method according to claim 1, wherein performing the oral cancer classification and identification operation on the lesion type set, the texture feature set and the text symptom information set by using a full-connection layer network of the oral cancer identification model to obtain the oral cancer identification result comprises:
carrying out symptom feature recognition on the lesion type set, the texture feature set and the text symptom information set by utilizing a full-connection layer network of the oral cancer recognition model to obtain an oral symptom set;
and calculating the incidence probability of the oral cancer according to the oral symptom set by using a naive Bayesian classification algorithm, and inquiring a preset alarm interval corresponding to the incidence probability to obtain an oral cancer identification result.
5. The image-text-based early oral cancer identification method according to claim 1, wherein performing a tissue image cutting operation on the oral picture set by using the texture recognition network of the oral cancer recognition model to obtain each tissue region image comprises:
carrying out target tissue identification operation on the oral cavity picture set by utilizing a texture identification network of the oral cavity cancer identification model, and intercepting each target tissue by utilizing an intercepting function to obtain a target tissue image;
and intercepting and marking images in the preset range interval of the surface and the periphery of the target tissue according to the physiological structure relationship to obtain images of all tissue areas.
6. The image-text-based early oral cancer identification method according to claim 1, wherein before the lesion feature identification of the oral picture set by using the lesion identification network of the pre-trained oral cancer identification model, the method further comprises:
acquiring an oral cancer identification model comprising a lesion identification network, a texture identification network and a full connection layer, and a pre-constructed oral image sample set;
dividing the oral cavity image sample set into a training set and a testing set according to a preset dividing strategy;
performing multi-classification task loss configuration on a lesion recognition network and a texture recognition network of the oral cancer recognition model by using a cross entropy loss function according to a preset auxiliary training strategy;
sequentially extracting a group of oral cavity image samples from the training set and introducing the oral cavity image samples into the oral cavity cancer recognition model to obtain an oral cavity disease prediction result corresponding to the oral cavity image samples;
calculating a loss value between a real label of the oral cavity image sample and the oral cavity disease prediction result by utilizing a two-classification cross entropy loss function;
minimizing the loss value to obtain a model parameter when the loss value is minimum, and reversely updating the internal parameter of the oral cancer identification model by using the model parameter to obtain an updated oral cancer identification model;
judging whether all the oral cavity image samples in the training set execute training;
when the oral cavity image samples in the training set are not completely trained, the steps of sequentially extracting a group of oral cavity image samples from the training set and introducing the oral cavity image samples into the oral cavity cancer identification model are executed, and iterative optimization is carried out on the updated oral cavity cancer identification model;
when all oral cavity image samples in the training set are trained, obtaining a final optimized updated oral cavity cancer identification model;
calculating an accuracy rate of the updated oral cancer identification model using the test set;
judging whether the accuracy is smaller than a preset qualified threshold value;
when the accuracy is smaller than the qualified threshold, returning to the step of dividing the oral cavity image sample set into a training set and a testing set according to a preset dividing strategy, and acquiring the training set and the testing set again to train the updated oral cavity cancer recognition model;
and when the accuracy is greater than or equal to the qualified threshold, taking the finally optimized updated oral cancer recognition model as a trained oral cancer recognition model.
7. The image-text-based early oral cancer identification method according to claim 1, wherein performing lesion feature identification on the oral picture set by using the lesion identification network of the pre-trained oral cancer identification model to obtain a lesion type set comprises:
carrying out multilayer convolution operation on the oral cavity picture set by utilizing a lesion recognition network of a pre-trained oral cancer recognition model to obtain a convolution matrix set, and carrying out average pooling operation on the convolution matrix set to obtain a pooling matrix set;
flattening each pooling matrix in the pooling matrix set to obtain an image characteristic sequence set;
and carrying out lesion feature identification operation on the image feature sequence set by utilizing a sub full connection layer in the lesion identification network to obtain a lesion type set.
8. An image-text-based early oral cancer identification device, the device comprising:
the lesion monitoring module, configured to acquire an oral picture set, perform lesion feature recognition on the oral picture set by using a lesion recognition network of a pre-trained oral cancer recognition model to obtain a lesion type set, determine whether the lesion type set is an empty set, and generate pre-constructed oral health prompt information when the lesion type set is an empty set;
the inquiry symptom obtaining module is used for generating an inquiry report according to a pre-constructed diagnosis problem database and the lesion type set when the lesion type set is not an empty set, obtaining a symptom entry input by a user according to the inquiry report, and carrying out quantitative coding on the symptom entry to obtain a text symptom information set;
the oral cavity texture feature recognition module is used for performing tissue image cutting operation on the oral cavity picture set by utilizing a texture recognition network of the oral cavity cancer recognition model to obtain each tissue area image, and recognizing the tissue texture features of each tissue area image to obtain a texture feature set;
and the oral cancer identification module is used for performing an oral cancer classification and identification operation on the lesion type set, the texture feature set and the text symptom information set by using a fully connected network of the oral cancer identification model to obtain an oral cancer identification result.
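The final fusion step performed by the oral cancer identification module can be sketched as concatenating the three feature sets and passing them through one fully connected layer with softmax. This is an assumed minimal reading of the claim: the feature dimensions, the one-hot lesion encoding, the binary benign/suspected split, and the random weights are illustrative, not taken from the patent.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_oral_cancer(lesion_types, texture_features, symptom_codes, W, b):
    """Concatenate the lesion type set, texture feature set, and encoded
    text symptom set, then apply a fully connected layer with softmax to
    produce class probabilities as the identification result."""
    fused = np.concatenate([lesion_types, texture_features, symptom_codes])
    return softmax(fused @ W + b)

rng = np.random.default_rng(1)
lesion = np.array([1.0, 0.0, 0.0])         # hypothetical one-hot lesion type set
texture = rng.standard_normal(8)           # hypothetical texture feature set
symptoms = np.array([1.0, 0.0, 1.0, 0.0])  # hypothetical encoded symptom entries
W = rng.standard_normal((3 + 8 + 4, 2))    # two classes: benign / suspected early cancer
b = np.zeros(2)
probs = classify_oral_cancer(lesion, texture, symptoms, W, b)
print(probs.sum())  # probabilities sum to 1
```

A trained model would replace the single layer with the patent's full network and learned weights; the sketch only shows the fusion of the three modality-specific feature sets into one classification input.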
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the image-text-based early oral cancer identification method according to any one of claims 1-7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the image-text-based early oral cancer identification method according to any one of claims 1 to 7.
CN202211342510.8A 2022-10-31 2022-10-31 Image-text-based early oral cancer identification method, device, equipment and storage medium Withdrawn CN115661531A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211342510.8A CN115661531A (en) 2022-10-31 2022-10-31 Image-text-based early oral cancer identification method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211342510.8A CN115661531A (en) 2022-10-31 2022-10-31 Image-text-based early oral cancer identification method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115661531A true CN115661531A (en) 2023-01-31

Family

ID=84992851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211342510.8A Withdrawn CN115661531A (en) 2022-10-31 2022-10-31 Image-text-based early oral cancer identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115661531A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116311539A (en) * 2023-05-19 2023-06-23 亿慧云智能科技(深圳)股份有限公司 Sleep motion capturing method, device, equipment and storage medium based on millimeter waves
CN116311539B (en) * 2023-05-19 2023-07-28 亿慧云智能科技(深圳)股份有限公司 Sleep motion capturing method, device, equipment and storage medium based on millimeter waves
CN116913519A (en) * 2023-07-24 2023-10-20 东莞莱姆森科技建材有限公司 Health monitoring method, device, equipment and storage medium based on intelligent mirror

Similar Documents

Publication Publication Date Title
US11935644B2 (en) Deep learning automated dermatopathology
CN115661531A (en) Image-text-based early oral cancer identification method, device, equipment and storage medium
CN113283446B (en) Method and device for identifying object in image, electronic equipment and storage medium
CN112446025A (en) Federal learning defense method and device, electronic equipment and storage medium
CN112465060A (en) Method and device for detecting target object in image, electronic equipment and readable storage medium
CN111932547B (en) Method and device for segmenting target object in image, electronic device and storage medium
CN111651440A (en) User information distinguishing method and device and computer readable storage medium
CN113159147A (en) Image identification method and device based on neural network and electronic equipment
CN111932562B (en) Image identification method and device based on CT sequence, electronic equipment and medium
CN111862096B (en) Image segmentation method and device, electronic equipment and storage medium
CN112465819A (en) Image abnormal area detection method and device, electronic equipment and storage medium
CN111695609A (en) Target damage degree determination method, target damage degree determination device, electronic device, and storage medium
CN111696663A (en) Disease risk analysis method and device, electronic equipment and computer storage medium
CN112137591A (en) Target object position detection method, device, equipment and medium based on video stream
CN114677650B (en) Intelligent analysis method and device for pedestrian illegal behaviors of subway passengers
CN112435755A (en) Disease analysis method, disease analysis device, electronic device, and storage medium
CN113707337A (en) Disease early warning method, device, equipment and storage medium based on multi-source data
CN112686232B (en) Teaching evaluation method and device based on micro expression recognition, electronic equipment and medium
CN112906671B (en) Method and device for identifying false face-examination picture, electronic equipment and storage medium
CN112580505B (en) Method and device for identifying network point switch door state, electronic equipment and storage medium
CN113920590A (en) Living body detection method, living body detection device, living body detection equipment and readable storage medium
CN113255456B (en) Inactive living body detection method, inactive living body detection device, electronic equipment and storage medium
CN114463685A (en) Behavior recognition method and device, electronic equipment and storage medium
CN112541436B (en) Concentration analysis method and device, electronic equipment and computer storage medium
CN113095284A (en) Face selection method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20230131