CN109002846B - Image recognition method, device and storage medium - Google Patents


Info

Publication number
CN109002846B
CN109002846B (application number CN201810724219.4A)
Authority
CN
China
Prior art keywords
region
image
body tissue
type
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810724219.4A
Other languages
Chinese (zh)
Other versions
CN109002846A (en)
Inventor
孙星
贾琼
伍健荣
彭湃
郭晓威
周旋
常佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Healthcare Shenzhen Co Ltd
Original Assignee
Tencent Healthcare Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Healthcare Shenzhen Co Ltd filed Critical Tencent Healthcare Shenzhen Co Ltd
Priority to CN201810724219.4A priority Critical patent/CN109002846B/en
Publication of CN109002846A publication Critical patent/CN109002846A/en
Application granted granted Critical
Publication of CN109002846B publication Critical patent/CN109002846B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses an image recognition method, device, and storage medium. The embodiment can collect a living body tissue image to be detected; a preset region detection model is then used to detect key features in the living body tissue image, a preset region classification model is used to identify the type of each of the at least one identification region obtained by the detection, and the position and type of each identification region are then marked on the living body tissue image according to the identification result, for reference by medical personnel. This scheme can improve the precision of the model and the accuracy of recognition, improving the recognition effect.

Description

Image recognition method, device and storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an image recognition method, an image recognition apparatus, and a storage medium.
Background
With the development of Artificial Intelligence (AI), AI is becoming ever more widely used in the medical field. Take the identification of a discrimination region (also called a diagnosis region), such as the type of a cervical transformation zone, as an example. Conventionally, after a large number of colposcopic cervical opening sample images are acquired, each sample image is labeled with one of three types: transformation zone I, transformation zone II, or transformation zone III (these are the types of cervical transformation zone; different types have different reference value for diagnosing precancerous lesions and cancer). Deep learning is then performed on the labeled sample images to build a cervical transformation zone type identification model, after which the trained model can be used to identify the transformation zone type in a colposcopic cervical opening image to be recognized.
In the course of researching and practicing the prior art, the inventors of the present invention found that, because the division of the cervical transformation zone is complex, it is difficult to label the transformation zone type comprehensively, which results in low accuracy of the trained model; moreover, during identification, the colposcopic cervical opening image contains too many interfering regions, so the identification accuracy is also low.
Disclosure of Invention
The embodiment of the invention provides an image identification method, an image identification device and a storage medium, which can improve the identification accuracy and the identification effect.
The embodiment of the invention provides an image identification method, which comprises the following steps:
collecting a living body tissue image to be detected;
performing key feature detection on the living body tissue image by using a preset region detection model to obtain at least one identification region, where the region detection model is trained from a plurality of living body tissue sample images labeled with key features;
identifying the type of the identification region by using a preset region classification model, where the region classification model is trained from a plurality of region sample images labeled with region type features;
and marking the position and type of the identification region on the living body tissue image according to the identification result.
Correspondingly, an embodiment of the invention further provides an image recognition apparatus, which includes an acquisition unit, a detection unit, a recognition unit, and a labeling unit, as follows:
the acquisition unit is configured to collect a living body tissue image to be detected;
the detection unit is configured to perform key feature detection on the living body tissue image by using a preset region detection model to obtain at least one identification region, the region detection model being trained from a plurality of living body tissue sample images labeled with key features;
the recognition unit is configured to identify the type of the identification region by using a preset region classification model, the region classification model being trained from a plurality of region sample images labeled with region type features;
and the labeling unit is configured to mark the position and type of the identification region on the living body tissue image according to the identification result.
Optionally, in some embodiments, the labeling unit includes an obtaining subunit and a labeling subunit, as follows:
the obtaining subunit is configured to determine the type of the identification region from the recognition result and to acquire the coordinates of the identification region;
and the labeling subunit is configured to mark the position of the identification region on the living body tissue image according to the coordinates, and to mark the type of the identification region at that position.
Optionally, in some embodiments, the obtaining subunit is specifically configured to: determine, from the recognition result, the type and type confidence of each recognition frame within a preset range of the identification region; aggregate the confidences of the recognition frames within the preset range by using a non-maximum suppression algorithm to obtain a confidence for the preset range; select the type of the preset range with the highest confidence as the type of the identification region; and acquire the coordinates of the identification region.
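As an illustration, the non-maximum-suppression-based type selection above might be sketched as follows. This is a hedged sketch only: the (x1, y1, x2, y2) box format, the 0.5 IoU threshold, and the function names are assumptions, not details given in the patent.

```python
# Sketch of the obtaining subunit's type selection: standard
# non-maximum suppression (NMS) over candidate recognition frames,
# then the surviving frame with the highest confidence supplies the
# region's type and coordinates. Box format and IoU threshold are
# illustrative assumptions.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def select_region_type(frames, iou_thresh=0.5):
    """frames: list of (box, type_label, confidence). Returns the type
    and box of the highest-confidence frame that survives NMS."""
    kept = []
    for f in sorted(frames, key=lambda f: f[2], reverse=True):
        # keep a frame only if it does not heavily overlap a kept one
        if all(iou(f[0], k[0]) < iou_thresh for k in kept):
            kept.append(f)
    best = kept[0]  # frames were visited in descending confidence
    return best[1], best[0]
```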
Optionally, in some embodiments, the image recognition apparatus may further include a preprocessing unit, as follows:
the preprocessing unit is configured to preprocess the living body tissue image according to a preset strategy, the preprocessing including image size scaling, color channel order adjustment, pixel adjustment, image normalization, and/or image data arrangement adjustment;
the detection unit is specifically used for detecting key features of the preprocessed living body tissue image by adopting a preset region detection model.
Optionally, in some embodiments, the image recognition apparatus may further include a first training unit, as follows:
the acquisition unit can also be used for acquiring a plurality of life body tissue sample images marked with key features;
the first training unit may be configured to train a preset target detection model according to the living body tissue sample image to obtain an area detection model.
Optionally, in some embodiments, the acquisition unit may be specifically configured to collect a plurality of living body tissue sample images and label them using a neighborhood local typical region labeling method, obtaining a plurality of living body tissue sample images labeled with key features.
Optionally, in some embodiments, the first training unit may be specifically configured to: determine, from the collected living body tissue sample images, the sample image that currently needs to be trained on, obtaining a current living body tissue sample image; import the current sample image into a preset target detection model for training to obtain the region prediction value corresponding to the current sample image; converge the region prediction value with the labeled key feature of the current sample image so as to adjust the parameters of the target detection model; and return to the step of determining the sample image that currently needs to be trained on, until all of the living body tissue sample images have been trained on.
Optionally, in some embodiments, the target detection model includes a deep residual network and a region proposal network, and the first training unit may be specifically configured to import the current living body tissue sample image into the preset deep residual network for computation to obtain the output feature corresponding to the current sample image, and to import the output feature into the region proposal network for detection to obtain the region prediction value corresponding to the current living body tissue sample image.
Optionally, in some embodiments, the image recognition apparatus may further include a second training unit, as follows:
the acquisition unit can be further used for acquiring a plurality of area sample images marked with area type characteristics;
the second training unit may be configured to train a preset classification model according to the region sample image to obtain a region classification model.
Optionally, in some embodiments, the acquisition unit may be specifically configured to collect a plurality of living body tissue sample images labeled with key features, crop the identification region from each sample image according to the labels to obtain identification region samples, and label the identification region samples with region type features to obtain region sample images.
Optionally, in some embodiments, the acquisition unit may be specifically configured to collect a plurality of living body tissue sample images, perform key feature detection on them with a preset region detection model to obtain at least one identification region sample, and label the identification region samples with region type features to obtain region sample images.
In addition, the embodiment of the present invention further provides a storage medium, where the storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to perform the steps in any one of the image recognition methods provided by the embodiments of the present invention.
The embodiment of the invention can collect a living body tissue image to be detected, use a preset region detection model to detect key features in the image, use a preset region classification model to identify the type of each of the at least one identification region obtained by the detection, and then mark the position and type of each identification region on the living body tissue image according to the identification result, for reference by medical personnel. Because the trained region detection model can mark out the identification region accurately and the region classification model then identifies the region's type in a targeted manner, interference from other regions (i.e., non-identification regions) with type identification can be avoided, improving identification accuracy. In addition, the region detection model is trained from living body tissue sample images labeled only with key features, with no need for labeling the whole image; compared with the existing scheme, this greatly reduces the difficulty of labeling, improves labeling accuracy, and thereby improves the precision of the trained model. In short, this scheme can greatly improve model precision and recognition accuracy, improving the recognition effect.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without creative efforts.
Fig. 1a is a schematic scene diagram of an image recognition method according to an embodiment of the present invention;
FIG. 1b is a flowchart of an image recognition method according to an embodiment of the present invention;
FIG. 2a is another flowchart of an image recognition method according to an embodiment of the present invention;
fig. 2b is an architectural diagram of image recognition of cervical transformation zone type provided by an embodiment of the present invention;
FIG. 3a is a schematic structural diagram of an image recognition apparatus according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of another structure of an image recognition apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a network device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides an image identification method, an image identification device and a storage medium.
The image recognition apparatus may be specifically integrated in a network device, and the network device may be a terminal or a server.
For example, as shown in fig. 1a, the network device may collect a living body tissue image to be detected; for instance, it may receive living body tissue images transmitted by an image acquisition device such as a colposcope or an endoscope (e.g., a colposcopic cervical image or an endoscopic image). It may then perform key feature detection on the living body tissue image by using a preset region detection model to obtain at least one identification region, identify the type of each identification region (such as cervical transformation zone type I, type II, or type III) by using a preset region classification model, and mark the position and type of each identification region on the living body tissue image according to the recognition result. Thereafter, the living body tissue image with the position and type of the identification region labeled can be provided to medical personnel as a reference for further examination and diagnosis.
The region detection model is trained from a plurality of living body tissue sample images labeled with key features, and the preset region classification model is trained from a plurality of region sample images labeled with region type features.
The following are detailed descriptions. The numbers in the following examples are not intended to limit the order of preference of the examples.
The first embodiment
This embodiment will be described from the perspective of an image recognition apparatus, which may be specifically integrated in a network device; the network device may be a terminal or a server, where the terminal may include a tablet computer, a notebook computer, a personal computer (PC), or the like.
An embodiment of the invention provides an image recognition method that includes: collecting a living body tissue image to be detected; performing key feature detection on the living body tissue image by using a preset region detection model to obtain at least one identification region; identifying the type of the identification region by using a preset region classification model; and marking the position and type of the identification region on the living body tissue image according to the identification result.
As shown in fig. 1b, the specific flow of the image recognition method may be as follows:
101. Collect a living body tissue image to be detected.
For example, a living body tissue image to be detected may be received from an image capture device, where the image capture device may be a medical examination device such as a colposcope or an endoscope, or a medical monitoring device, and so on.
Here, the living body tissue image to be detected simply means a living body tissue image that needs to be examined; a living body tissue image is an image of some part of a living body (a living body being an independent individual with a living form that can respond to external stimuli), such as an image of a human's gastrointestinal tract, heart, throat, or vagina, or of a dog's gastrointestinal tract, or even mouth or skin.
102. Perform key feature detection on the living body tissue image by using a preset region detection model to obtain at least one identification region.
For example, the living body tissue image may be imported into the region detection model for detection; when the key features of a certain region match the features of an identification region, the region detection model predicts that region as an identification region and outputs a corresponding prediction probability (i.e., the prediction probability of the identification region).
Here, a key feature is a feature that distinguishes the identification region (or diagnosis region) from other regions. For example, the region bounded by the physiological squamocolumnar junction and the original squamocolumnar junction is called the cervical transformation zone (the junction between the columnar epithelium inside the cervix and the squamous epithelium around the cervical os is the squamocolumnar junction; the physiological squamocolumnar junction is clearly visible under colposcopy, and the outer edge where it extends into the squamous epithelium is called the original squamocolumnar junction). Therefore, if the identification region to be detected is the cervical transformation zone, the area bounded by the physiological squamocolumnar junction and the original squamocolumnar junction can serve as the key feature. A key feature can be represented by a typical local rectangular frame, whose specific information includes the frame's x offset (horizontal coordinate offset), y offset (vertical coordinate offset), width, and height parameter values.
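As a small illustration, the typical local rectangular frame's parameterization by x offset, y offset, width, and height might be represented as follows. The class and field names are hypothetical; only the four parameters themselves come from the text.

```python
from dataclasses import dataclass

@dataclass
class KeyFeatureBox:
    """Hypothetical container for the typical local rectangular frame
    (x offset, y offset, width, height) described in the text."""
    x_offset: float  # horizontal offset of the frame's top-left corner
    y_offset: float  # vertical offset of the frame's top-left corner
    width: float
    height: float

    def corners(self):
        """Convert the offset/size parameterization to corner
        coordinates (x1, y1, x2, y2)."""
        return (self.x_offset, self.y_offset,
                self.x_offset + self.width, self.y_offset + self.height)
```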
It should be noted that different types of identification region have different key features; by setting different key features, identification regions suited to different application scenarios or requirements can be found. For example, in the scenario of diagnosing precancerous cervical lesions and cervical cancer, the cervical transformation zone can be used as the identification region, and so on.
Of course, since the specifications of collected living body tissue images, such as size, pixels, and/or color channels, may differ, the collected living body tissue image may be preprocessed to normalize it, which facilitates detection by the region detection model and improves the detection effect. That is, optionally, before the step of "performing key feature detection on the living body tissue image by using the preset region detection model", the image recognition method may further include:
preprocessing the living body tissue image according to a preset strategy, where the preprocessing may include image size scaling, color channel order adjustment, pixel adjustment, image normalization, and/or image data arrangement adjustment, specifically as follows:
Image size scaling: scale the living body tissue image to a preset size; for example, the width may be scaled to a preset size, such as 600 pixels, while keeping the image's aspect ratio;
Color channel order adjustment: adjust the color channel order of the living body tissue image to a preset order; for example, rearrange the image's three channels into red (R), green (G), blue (B) order (of course, if the image's original channel order is already R, G, B, this operation is unnecessary);
Pixel adjustment: process the pixels of the living body tissue image according to a preset strategy, for example subtracting the full-image pixel mean from each pixel;
Image normalization: divide each channel value of the living body tissue image by a preset coefficient, such as 255.0;
Image data arrangement: arrange the image data of the living body tissue image in a preset way, for example changing the arrangement to channel-first.
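The preprocessing steps above can be sketched as one pipeline. This is a Python/NumPy sketch: the 600-pixel target width and the 255.0 divisor follow the examples in the text, while the BGR input order and the nearest-neighbour resize are illustrative assumptions.

```python
import numpy as np

def preprocess(img, target_width=600):
    """img: H x W x 3 uint8 array, assumed to arrive in BGR channel
    order. Returns a channel-first float32 array."""
    h, w, _ = img.shape
    # 1. Size scaling: width -> target_width, aspect ratio preserved
    #    (nearest-neighbour resampling, an illustrative choice).
    new_h = int(round(h * target_width / w))
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(target_width) * w // target_width
    img = img[rows[:, None], cols[None, :]]
    # 2. Channel order adjustment: BGR -> RGB.
    img = img[:, :, ::-1].astype(np.float32)
    # 3. Pixel adjustment: subtract the full-image pixel mean.
    img = img - img.mean()
    # 4. Normalization: divide each channel value by 255.0.
    img = img / 255.0
    # 5. Data arrangement: HWC -> channel-first (CHW).
    return np.transpose(img, (2, 0, 1))
```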
After the living body tissue image is preprocessed, the preset region detection model can perform key feature detection on the preprocessed living body tissue image, that is, at this time, the step of "performing key feature detection on the living body tissue image by using the preset region detection model" may include: and detecting key features of the preprocessed living body tissue image by adopting a preset region detection model.
In addition, it should be noted that the region detection model may be trained from a plurality of living body tissue sample images labeled with key features (only local labeling is needed). The training may be performed by other equipment, which then provides the model to the image recognition apparatus, or by the image recognition apparatus itself, online or offline. That is, optionally, before the step of "performing key feature detection on the living body tissue image by using the preset region detection model", the image recognition method may further include:
(1) Acquire a plurality of living body tissue sample images labeled with key features.
For example, a plurality of images of the living body tissue samples may be acquired, and then the acquired images of the living body tissue samples are labeled by using a neighborhood local typical region labeling method, so as to obtain a plurality of images of the living body tissue samples with labeled key features.
The images may be acquired in various ways, for example from the internet, a specified database, and/or medical records, as determined by the needs of the actual application. Similarly, the labeling method may be chosen as needed: labeling may be done manually by annotators under the guidance of a professional physician, or automatically by a trained labeling model, and so on, which is not described further here.
(2) Train a preset target detection model on the living body tissue sample images to obtain the region detection model.
For example: determine, from the collected living body tissue sample images, the sample image that currently needs to be trained on, obtaining a current living body tissue sample image; import the current sample image into a preset target detection model for training to obtain the region prediction value corresponding to the current sample image; converge the region prediction value with the labeled key feature of the current sample image (i.e., bring the predicted rectangular frame parameters infinitely close to the labeled rectangular frame parameters) so as to adjust the parameters of the target detection model (each adjustment trains the target detection model once); then return to the step of determining the sample image that currently needs to be trained on, until all of the collected living body tissue sample images have been trained on, yielding the required region detection model.
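The iterative train-converge-adjust loop described above can be sketched as follows. This is a deliberately simplified stand-in: a linear predictor of the four frame parameters (x offset, y offset, width, height) adjusted with a smooth-L1-style update. The real model is a deep target detection network, and the loss choice is an assumption; the patent only says the predicted frame parameters are converged toward the labeled ones.

```python
import numpy as np

def train_region_detector(samples, labels, lr=0.05, epochs=200):
    """samples: N x D feature vectors (stand-ins for sample images);
    labels: N x 4 labeled frame parameters. Each pass predicts frame
    parameters for the current sample and adjusts the model parameters
    W to converge the prediction toward the labeled key-feature frame
    (smooth-L1-style gradient, an illustrative assumption)."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, samples.shape[1])) * 0.01
    for _ in range(epochs):
        for x, y in zip(samples, labels):   # one adjustment per sample
            pred = W @ x                    # region prediction value
            d = pred - y                    # gap to the labeled frame
            # gradient of smooth L1: linear near zero, clipped beyond 1
            grad = np.where(np.abs(d) < 1.0, d, np.sign(d))
            W -= lr * np.outer(grad, x)     # adjust model parameters
    return W
```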
The target detection model may be set according to the requirements of the actual application; for example, it may include a deep residual network (ResNet) and a region proposal network (RPN), among others.
When the target detection model includes a deep residual network and a region proposal network, the step of "importing the current living body tissue sample image into a preset target detection model for training to obtain the region prediction value corresponding to the current sample image" may include:
importing the current living body tissue sample image into the preset deep residual network for computation to obtain the output feature corresponding to the current sample image, and importing that output feature into the region proposal network for detection to obtain the region prediction value corresponding to the current living body tissue sample image.
It should be noted that, just as with the detection of identification regions in living body tissue images, the specifications of the collected living body tissue sample images, such as size, pixels, and/or color channels, may differ, so the sample images may be preprocessed to normalize them, which facilitates detection by the region detection model and improves the detection effect. That is, optionally, before the step of "training a preset target detection model according to the living body tissue sample images", the image recognition method may further include:
preprocessing the living body tissue sample images according to a preset strategy, the preprocessing including image size scaling, color channel order adjustment, pixel adjustment, image normalization, and/or image data arrangement adjustment, specifically as follows:
Image size scaling: scale the living body tissue sample image to a preset size; for example, the width may be scaled to a preset size, such as 600 pixels, while keeping the aspect ratio;
Color channel order adjustment: adjust the color channel order of the sample image to a preset order, for example R, G, B (of course, if the sample image's original channel order is already R, G, B, this operation is unnecessary);
Pixel adjustment: process the pixels of the sample image according to a preset strategy, for example subtracting the full-image pixel mean from each pixel;
Image normalization: divide each channel value of the sample image by a preset coefficient, such as 255.0;
Image data arrangement: arrange the sample image's data in a preset way, for example changing the arrangement to channel-first.
At this time, the step of "training the preset target detection model according to the living body tissue sample image" may include: and training a preset target detection model according to the preprocessed living body tissue sample image.
103. Identify the type of the identification region by using a preset region classification model.
For example, the image containing the identification region may be imported into the region classification model for identification, and the region classification model outputs the identification result for the identification region.
For example, taking cervical transformation zone type identification: after the image containing the cervical transformation zone is imported into the region classification model, the model identifies the region type features of the transformation zone and outputs a three-way probability, i.e., the probabilities of transformation zone types I, II, and III. For instance, if the model predicts that a given transformation zone has an 80% probability of being type I, a 15% probability of being type II, and a 5% probability of being type III, it may output the identification result: "transformation zone type I, 80%", "transformation zone type II, 15%", and "transformation zone type III, 5%".
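The three-way probability output described above can be sketched as a softmax head over the three transformation-zone types. The softmax choice and the function names are assumptions; the patent only specifies that the model outputs one probability per type.

```python
import numpy as np

TYPES = ("transformation zone I", "transformation zone II",
         "transformation zone III")

def classify_region(logits):
    """Turn the region classification model's raw scores into the
    three-way probability output described in the text, sorted from
    most to least likely type."""
    z = np.asarray(logits, dtype=float)
    p = np.exp(z - z.max())   # shift for numerical stability
    p = p / p.sum()           # probabilities over the three types
    return sorted(zip(TYPES, p), key=lambda t: t[1], reverse=True)
```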
The preset region classification model can be obtained by training on a plurality of region sample images labeled with region type features. Specifically, it may be trained by other devices and then provided to the image recognition apparatus, or it may be trained online or offline by the image recognition apparatus itself; that is, before the step of "identifying the type of the identification region using the preset region classification model", the image recognition method may further include:
(1) and acquiring a plurality of region sample images marked with region type characteristics.
The manner of obtaining the region sample image labeled with the region type feature may be various, for example, any one of the following manners may be adopted:
Mode one (the sample images are labeled with key features):
Collect a plurality of living body tissue sample images labeled with key features, crop the identification region from each living body tissue sample image according to the labels (i.e., the labels of the key features) to obtain identification region samples, and label the identification region samples with region type features to obtain region sample images.
Mode two (the sample images may or may not be labeled with key features):
Collect a plurality of living body tissue sample images (which may or may not be labeled with key features), perform key feature detection on the living body tissue sample images by using the preset region detection model to obtain at least one identification region sample, and label the identification region samples with region type features to obtain region sample images.
The labeling of region type features can be done manually by labeling auditors under the guidance of a professional doctor, or automatically by a trained labeling model, and the like. The labeling rule for region type features may be determined according to the requirements of the practical application; for example, a rectangular box may be used to label the region type feature of the identification region and to give the two-dimensional coordinates and size of the identification region, and so on.
For example, taking the cervical transformation zone as an example: transformation zone type I mainly refers to a transformation zone located in the cervicovaginal region whose complete extent can be seen, so its region type features are "located in the cervicovaginal region" and "completely visible"; transformation zone type II is located in the cervical canal, and the complete cervical transformation zone can be seen with auxiliary tools such as a cervical canal dilator, so its region type features are "located in the cervical canal" and "completely visible with auxiliary tools such as a cervical canal dilator"; transformation zone type III refers to a cervical transformation zone whose physiological squamous column boundary cannot be seen even with the aid of tools, so its region type feature is "the physiological squamous column boundary cannot be seen with the aid of tools".
(2) And training a preset classification model according to the area sample image to obtain an area classification model.
For example, the region sample images may be input into a preset classification model for classification to obtain a predicted classification result, such as transformation zone type I, type II, or type III; the region type features of the predicted classification result are then converged with the labeled region type features (i.e., the loss between them is minimized), which completes one training iteration. By repeating the training until all the region sample images are trained, the finally required region classification model can be obtained.
104. Marking the position and the type of the identification area on the living body tissue image according to the identification result; for example, the following may be specific:
(1) Determine the type of the identification region according to the recognition result, and acquire the coordinates of the identification region.
For example, the type and the type confidence of each recognition frame within each preset range of the identification region may be determined according to the recognition result; the type confidences of the recognition frames within each preset range are then processed by a non-maximum suppression (NMS) algorithm to obtain the confidence of that preset range, and the type of the preset range with the highest confidence is selected as the type of the identification region.
Since there may be a plurality of recognition frames in the recognition result, and each recognition frame corresponds to a plurality of types and prediction probabilities of the types, a type having the highest prediction probability may be selected from the plurality of types of each recognition frame as the type of the recognition frame, and the highest prediction probability may be used as the confidence of the recognition frame.
After the type and type confidence of each recognition frame are obtained, the confidences within each preset range may be processed by the non-maximum suppression algorithm: for example, the type confidences of the recognition frames within a preset range are compared, the original value of the maximum is retained, and the other (non-maximum) values are set to the minimum, e.g., 0.0, which yields the confidence of that preset range. The confidences of the preset ranges are then ranked, and the type of the preset range with the highest confidence is selected as the type of the identification region.
(2) And marking the position of the identification area on the living body tissue image according to the coordinates, and marking the type of the identification area on the position.
For example, also taking the type identification of the cervical transformation zone as an example: if a certain identification region is identified as "transformation zone type I", the position of the cervical transformation zone can be marked on the colposcopic cervical image and labeled "transformation zone type I"; if a certain identification region is identified as "transformation zone type II", the position of the cervical transformation zone can be marked on the colposcopic cervical image and labeled "transformation zone type II"; similarly, if a certain identification region is identified as "transformation zone type III", the position of the cervical transformation zone can be marked on the colposcopic cervical image and labeled "transformation zone type III", and so on.
Optionally, during labeling, the specific coordinates of the identification region may also be marked; further, the prediction probability of the identification result may be labeled, and of course the prediction probability of the identification region may be labeled as well.
As can be seen from the above, this embodiment can acquire a living body tissue image to be detected, perform key feature detection on the living body tissue image by using a preset region detection model, identify the type of at least one identification region obtained by the detection by using a preset region classification model, and then mark the position and type of the identification region on the living body tissue image according to the identification result, for reference by medical personnel. In this scheme, the trained region detection model can accurately mark out the identification region, and the region classification model identifies the type of the identification region in a targeted manner, so the interference of other regions (i.e., non-identification regions) on type identification can be avoided and the identification accuracy improved. In addition, the region detection model is trained on a plurality of living body tissue sample images labeled only with key features, without overall labeling; compared with the existing scheme, this greatly reduces the difficulty of labeling, improves labeling accuracy, and thereby improves the precision of the trained model. In short, the scheme can greatly improve both the accuracy of the model and the accuracy of recognition, and improve the recognition effect.
Embodiment II.
The method described in the foregoing embodiment will be described in further detail below by way of example in which the image recognition apparatus is specifically integrated in a network device.
First, the region detection model and the region classification model may be trained separately; then, region type identification may be performed on the living body tissue image to be detected by using the trained region detection model and region classification model. This is described in detail below.
And (I) training a region detection model.
The network device collects a plurality of living body tissue sample images labeled with key features, and then trains a preset target detection model on these images to obtain the region detection model. For example, the process may specifically be as follows:
(1) The network device collects a plurality of living body tissue sample images from the internet, a specified database, and/or medical records.
For example, if the region detection model is specifically a region detection model for the cervical transformation zone, the network device may collect a plurality of colposcopic cervical sample images from the internet, a designated database and/or medical records.
(2) The network device labels the collected plurality of living body tissue sample images by using a typical-local-region labeling method (marking typical regions near the target).
For example, manual labeling may be performed by a labeling auditor under the direction of a professional doctor, or automatic labeling may be implemented by training a labeling model, so as to obtain a plurality of life body tissue sample images labeled with key features.
For example, taking the region detection model as a region detection model for the cervical transformation zone: a typical region near the cervical os may be marked in the colposcopic cervical sample image, and the corresponding identification frame (e.g., a rectangular frame) and type are given.
For convenience of description, in the embodiment of the present invention the identification frame corresponding to the typical region is referred to as a typical local identification frame, and the identification frame is taken to be a rectangular frame; that is, the typical local identification frame may specifically be a typical local rectangular frame, which may include at least an x coordinate (horizontal axis), a y coordinate (vertical axis), a width and a height within the colposcopic cervical image.
(3) The network device preprocesses the living body tissue sample image, such as the colposcopic cervical sample image, according to a preset strategy, which may specifically be as follows:
image size scaling: scaling the size of the living body tissue sample image to a preset size; for example, the width may be specifically scaled to a preset size while maintaining the aspect ratio of the colposcopic cervical sample image, such as scaling the width to 600 pixels, and so on;
color channel order adjustment: adjusting the color channel order of the living body tissue sample image to a preset order; for example, the three channels of the colposcopic cervical sample image can be changed to the order R, G, B; of course, if the original channel order of the image is already R, G, B, this operation is not needed;
pixel adjustment: processing pixels in the living body tissue sample image according to a preset strategy, for example, subtracting a full-image pixel mean value from each pixel in the colposcopy cervical sample image, and the like;
normalizing the image: dividing each channel value of the image of the tissue sample of the living body by a preset coefficient, such as 255.0;
image data arrangement: the image data arrangement of the living body tissue sample image is set to a preset mode, for example, the image data arrangement of the colposcopic cervical sample image is changed to channel priority, and the like.
(4) And the network equipment trains a region detection model by adopting the preprocessed living body tissue sample image.
For example, also taking the region detection model as a region detection model of the cervical transformation area as an example, the training process may specifically be as follows:
The network device determines, from the plurality of preprocessed colposcopic cervical sample images, the sample image that currently needs to be trained, obtaining the current colposcopic cervical sample image. The current colposcopic cervical sample image is then imported into the preset target detection model for training, yielding the region predicted value corresponding to that image. Next, the region predicted value is converged with the labeled key features of the current sample image, the parameters of the target detection model are adjusted accordingly, and the process returns to the step of determining the current colposcopic cervical sample image to be trained, until all the colposcopic cervical sample images have been trained; the required region detection model for the cervical transformation zone is thereby obtained.
The target detection model may be set according to the requirements of the actual application; for example, it may include a deep residual network (ResNet) and a region proposal network (RPN), and the like.
When the target detection model includes the deep residual network and the region proposal network, the step of "importing the current colposcopic cervical sample image into a preset target detection model for training to obtain the region predicted value corresponding to the current colposcopic cervical sample image" may include:
First, a deep residual network is constructed, and the current colposcopic cervical sample image is imported into the deep residual network for computation to obtain the output features corresponding to the current colposcopic cervical sample image (the convolution features serve as the output of the deep residual network; that is, the output features are convolution features).
Second, a region proposal network is constructed, and the output features corresponding to the current colposcopic cervical sample image are imported into the region proposal network for detection to obtain the region predicted value (also called a proposal region) corresponding to the current colposcopic cervical sample image. The region predicted value may specifically be expressed as a vector of dimension "number of preset rectangular-frame sizes x number of width-to-height ratios x number of rectangular-frame parameters".
The rectangular-frame parameters may include the x offset (i.e., abscissa offset), y offset (i.e., ordinate offset), width and height.
And (II) training a region classification model.
The network device obtains a plurality of region sample images labeled with region type features. For example, a plurality of living body tissue sample images labeled with key features (such as colposcopic cervical sample images) may be collected, and the identification region may be cropped from each image according to the labels to obtain identification region samples; alternatively, a plurality of living body tissue sample images may be collected, and key feature detection may be performed on them by using the region detection model trained in step (I) to obtain at least one identification region sample; or the output features of the deep residual network and the region predicted values output by the region proposal network during the region detection model training may be used directly as identification region samples. The identification region samples obtained in any of the above ways are then labeled with region type features, yielding the region sample images.
After the region samples are obtained, the preset classification model may be trained on the region sample images to obtain the region classification model. For example, the region sample images may be input into the preset classification model for classification to obtain a predicted classification result; the region type features of the predicted classification result are then converged with the labeled region type features, completing one training iteration. Training is repeated in this way until all the region sample images are trained, and the finally required region classification model, for example a region classification model for the cervical transformation zone, is obtained.
The labeling of region type features can be done manually by labeling auditors under the guidance of a professional doctor, or automatically by a trained labeling model, and the like. The labeling rule for region type features may be determined according to the requirements of the practical application; for example, a rectangular box may be used to label the region type feature of the identification region and to give the two-dimensional coordinates and size of the identification region, and so on.
For example, taking the cervical transformation zone as an example: transformation zone type I mainly refers to a transformation zone located in the cervicovaginal region whose complete extent can be seen, so its region type features are "located in the cervicovaginal region" and "completely visible"; transformation zone type II is located in the cervical canal, and the complete cervical transformation zone can be seen with auxiliary tools such as a cervical canal dilator, so its region type features are "located in the cervical canal" and "completely visible with auxiliary tools such as a cervical canal dilator"; transformation zone type III refers to a cervical transformation zone whose physiological squamous column boundary cannot be seen even with the aid of tools, so its region type feature is "the physiological squamous column boundary cannot be seen with the aid of tools".
It should be noted that, before the preset classification model is trained on the region sample images, the region sample images may be normalized according to a preset strategy in order to further improve the processing effect (of course, if a region sample image already meets the preset specification, this step may be omitted); the preset classification model is then trained on the normalized region sample images. The specific normalization strategy may be determined according to the requirements of the actual application and is not described again here.
And (III) identifying the type of the identification region.
After the training of the region detection model and the region classification model is completed, the region detection model and the region classification model may be used to identify the region type, as shown in fig. 2a, the specific identification process may be as follows:
201. the network equipment collects the tissue image of the living body to be detected.
For example, the network device may specifically receive an image of a living tissue to be detected, which is sent by an image capturing device, where the image capturing device may include a medical monitoring device or a medical detection device, such as a colposcope or an endoscope.
The living body tissue image refers to an image of some component of a living body, such as an image of the intestines, stomach, heart, throat or vagina of a human body, or an image of the intestines, stomach, oral cavity or skin of a dog. For convenience of description, this embodiment takes the living body tissue image to be, specifically, a colposcopic cervical image.
202. The network device preprocesses the living body tissue image according to a preset strategy.
For example, as shown in fig. 2b, taking the living body tissue image as a colposcopic cervical image specifically as an example, the preprocessing specifically may be as follows:
(1) image size scaling: scaling the size of a colposcopic cervical image to a preset size; for example, the width may be scaled to a preset size, such as 600 pixels, while maintaining the image aspect ratio;
(2) color channel sequence adjustment: adjusting the color channel sequence of the colposcopic cervical image to a preset sequence, such as R, G and B in sequence, of course, if the original channel sequence of the living body tissue image is R, G and B, the operation is not needed;
(3) pixel adjustment: processing pixels in the colposcopic cervical image according to a preset strategy, for example, subtracting the full-image pixel mean from each pixel in the image, and the like;
(4) image normalization: dividing each channel value by a predetermined factor, such as 255.0;
(5) image data arrangement: the arrangement of the image data is set to a preset mode, for example, modified to channel priority, and the like.
203. And the network equipment adopts the trained area detection model to detect the key features of the preprocessed living body tissue image.
For example, the network device may specifically import the preprocessed living body tissue image into the region detection model for detection, and if the key feature of a certain region in the living body tissue image matches the key feature of the identified region, the region detection model predicts the region as the identified region, and outputs a corresponding prediction probability.
For example, since the region enclosed by the physiological squamous column boundary and the original squamous column boundary is generally referred to as the cervical transformation zone, if a certain region to be detected is a "cervical transformation zone", the region enclosed by these two boundaries may be used as the key feature, and the key feature may be represented by a typical local rectangular frame whose specific information includes, for example, the x offset (i.e., abscissa offset), y offset (i.e., ordinate offset), width and height parameter values.
For example, taking the living body tissue image to be a colposcopic cervical image, and the region detection model to include a deep residual network (ResNet) and a region proposal network (RPN), as shown in fig. 2b, the network device may import the preprocessed colposcopic cervical image into the region detection model of the cervical transformation zone for region detection. For instance, the preprocessed colposcopic cervical image can be used as the input of the deep residual network, with the convolution features as its output, to obtain the output features corresponding to the preprocessed image; these output features are then used as the input of the region proposal network, with a vector of dimension "number of preset rectangular-frame sizes x number of width-to-height ratios x number of rectangular-frame parameters" as the output, yielding the predicted cervical transformation zone. Optionally, a corresponding prediction probability can also be output.
204. The network device identifies the type of the identification region by adopting the trained region classification model.
For example, also taking the type identification of the cervical transformation zone as an example, as shown in fig. 2b: since the predicted cervical transformation zone and the corresponding features (the output features of the deep residual network) were already obtained in step 203, the transformation zone and the features can be used as the input of the region classification model to obtain the probabilities of the three types of the cervical transformation zone, i.e., the probability of transformation zone type I, the probability of transformation zone type II, and the probability of transformation zone type III.
For example, if a certain cervical transformation zone is predicted to be "transformation zone type I" with probability 80%, "transformation zone type II" with probability 15%, and "transformation zone type III" with probability 5%, the region classification model may output the recognition result "transformation zone type I, 80%", "transformation zone type II, 15%" and "transformation zone type III, 5%", and may also output the recognition frame, such as a regression rectangular frame, corresponding to each type.
205. The network device determines the type of the identification region according to the recognition result and acquires the coordinates of the identification region.
For example, the network device may determine, according to the recognition result, the type and the type confidence of each recognition frame within each preset range of the identification region; the type confidences of the recognition frames within each preset range are processed by the non-maximum suppression algorithm to obtain the confidence of that preset range, and the type of the preset range with the highest confidence is then selected as the type of the identification region.
Since there may be multiple recognition frames (such as regression rectangular frames) in the recognition result, and each recognition frame corresponds to multiple types and prediction probabilities of the types, a type with the highest prediction probability may be selected from the multiple types of each recognition frame as the type of the recognition frame, and the highest prediction probability may be used as the confidence of the recognition frame. For example, also taking the cervical transformation zone as an example, if a certain recognition box a belongs to 70% of the "transformation zone type i", 30% of the "transformation zone type ii" and 0% of the "transformation zone type iii", the "transformation zone type i" may be taken as the type of the recognition box a, and 70% may be taken as the confidence of the recognition box a.
After the type and type confidence of each recognition frame are obtained, the confidences within each preset range may be processed by the non-maximum suppression algorithm: for example, the type confidences of the recognition frames within a preset range are compared, the original value of the maximum is retained, and the other (non-maximum) values are set to the minimum, e.g., 0.0, which yields the confidence of that preset range. The confidences of the preset ranges are then ranked, and the type of the preset range with the highest confidence is selected as the type of the identification region.
For example, taking the cervical transformation zone as an example: if a certain preset range K1 of a cervical transformation zone includes recognition box A and recognition box B, where box A is of type "transformation zone type I" with confidence 70% and box B is of type "transformation zone type II" with confidence 80%, then the type of preset range K1 is determined to be "transformation zone type II" with confidence 80%; similarly, if a preset range K2 of the cervical transformation zone includes recognition box C and recognition box D, where box C is of type "transformation zone type I" with confidence 60% and box D is of type "transformation zone type II" with confidence 40%, then the type of preset range K2 is determined to be "transformation zone type I" with confidence 60%. The confidences of preset ranges K1 and K2 are then ranked; since the confidence of K1 is greater than that of K2, the type of preset range K1, "transformation zone type II", is selected as the type of the cervical transformation zone.
206. And the network equipment marks the position of the identification area on the living body tissue image according to the coordinate and marks the type of the identification area on the position.
For example, also taking the type identification of the cervical transformation zone as an example: if a certain identification region is identified as "transformation zone type I", the position of the cervical transformation zone can be marked on the colposcopic cervical image and labeled "transformation zone type I"; if a certain identification region is identified as "transformation zone type II", the position of the cervical transformation zone can be marked on the colposcopic cervical image and labeled "transformation zone type II"; similarly, if a certain identification region is identified as "transformation zone type III", the position of the cervical transformation zone can be marked on the colposcopic cervical image and labeled "transformation zone type III", and so on.
Optionally, during labeling, the specific coordinates of the identification region may also be marked; further, the prediction probability of the identification result may be labeled, and of course the prediction probability of the identification region may be labeled as well.
As can be seen from the above, this embodiment can acquire a living body tissue image to be detected, such as a colposcopic cervical image, perform key feature detection on the living body tissue image by using a preset region detection model, identify the type of at least one identification region obtained by the detection, such as a cervical transformation zone, by using a preset region classification model, and then mark the position and type of the identification region on the living body tissue image according to the identification result, for reference by medical staff. In this scheme, the trained region detection model can accurately mark out the identification region, and the region classification model identifies the type of the identification region in a targeted manner, so the interference of other regions (i.e., non-identification regions) on type identification can be avoided and the identification accuracy improved. In addition, the region detection model is trained on a plurality of living body tissue sample images labeled only with key features, without overall labeling; compared with the existing scheme, this greatly reduces the difficulty of labeling, improves labeling accuracy, and thereby improves the precision of the trained model. In short, the scheme can greatly improve both the accuracy of the model and the accuracy of recognition, and improve the recognition effect.
Example III
To better implement the above method, an embodiment of the present invention further provides an image recognition apparatus, which may be integrated in a network device such as a terminal or a server.
For example, as shown in fig. 3a, the image recognition apparatus may include an acquisition unit 301, a detection unit 302, an identification unit 303, and a labeling unit 304, as follows:
(1) an acquisition unit 301;
The acquisition unit 301 is configured to acquire a living body tissue image to be detected.
For example, the acquisition unit 301 may be specifically configured to receive a living body tissue image to be detected sent by an image acquisition device, where the image acquisition device may include a medical detection device, such as a colposcope or an endoscope, or may include a medical monitoring device, and the like.
The living body tissue image to be detected refers to a living body tissue image on which detection needs to be performed, and a living body tissue image is an image of some part of a living body, such as an image of the intestines, stomach, heart, throat, or vagina of a human body.
(2) A detection unit 302;
The detection unit 302 is configured to perform key feature detection on the living body tissue image by using a preset region detection model, to obtain at least one identification region.
For example, the detection unit 302 may be specifically configured to import the living body tissue image into the region detection model for detection; if the key feature of a certain region matches the feature of an identification region, the region detection model predicts that region as an identification region and outputs a corresponding prediction probability.
Different types of identification regions have different key features, so identification regions suited to different application scenarios or requirements can be found by setting different key features; for example, the cervical transformation zone can serve as the identification region in scenarios such as cervical precancer diagnosis and cancer diagnosis.
Optionally, because the specifications, such as size, pixels, and/or color channels, of the acquired living body tissue images may differ, the acquired living body tissue image may be preprocessed to normalize it, which facilitates detection by the region detection model and improves the detection effect. That is, as shown in fig. 3b, the image recognition apparatus may further include a preprocessing unit 305, as follows:
The preprocessing unit is configured to preprocess the living body tissue image according to a prediction strategy, where the preprocessing includes image size scaling, color channel order adjustment, pixel adjustment, image normalization, and/or image data arrangement adjustment; for details, refer to the foregoing embodiments, which are not described herein again.
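As an illustration of the preprocessing steps listed above (image size scaling, color channel order adjustment, pixel normalization, and image data arrangement adjustment), a minimal NumPy sketch is given below. The target size, channel order, and normalization constants are illustrative assumptions, not values prescribed by the embodiment:

```python
import numpy as np

def preprocess(image, target_size=(224, 224)):
    """Normalize a raw tissue image to a model-ready input.

    Mirrors the preprocessing listed above: size scaling, color-channel
    order adjustment (assumed BGR -> RGB), pixel normalization, and
    data-arrangement adjustment (HWC -> CHW). All constants are assumptions.
    """
    h, w = target_size
    # Image size scaling via nearest-neighbour sampling
    # (a real pipeline would typically use cv2.resize).
    ys = np.linspace(0, image.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, image.shape[1] - 1, w).astype(int)
    image = image[ys][:, xs]
    # Color channel order adjustment: BGR -> RGB.
    image = image[:, :, ::-1]
    # Pixel adjustment and normalization (ImageNet-style constants, assumed).
    image = image.astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    image = (image - mean) / std
    # Image data arrangement adjustment: HWC -> CHW, as most CNNs expect.
    return image.transpose(2, 0, 1)
```

In use, every acquired image would pass through this function before being imported into the region detection model, so that images of differing specifications reach the model in one uniform format.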
At this time, the detection unit 302 may be specifically configured to perform key feature detection on the preprocessed living tissue image by using a preset region detection model.
(3) An identification unit 303;
The identification unit 303 is configured to identify the type of the identification region by using a preset region classification model, where the preset region classification model is trained from a plurality of region sample images labeled with region type features.
For example, the identification unit 303 may be specifically configured to import the image containing the identification region into the region classification model for identification, and the region classification model outputs the identification result of the identification region.
For example, taking type identification of the cervical transformation zone as an example, after the image containing the cervical transformation zone is imported into the region classification model, the region classification model identifies the region type features of the cervical transformation zone and outputs a three-class probability distribution for it, i.e., the probability of transformation zone type I, the probability of transformation zone type II, and the probability of transformation zone type III.
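The three-class probability output described above can be sketched as a softmax over the classifier's raw scores. The label strings and the example scores below are illustrative assumptions, not values from the embodiment:

```python
import numpy as np

LABELS = ("transformation zone I", "transformation zone II", "transformation zone III")

def classify_region(logits, labels=LABELS):
    """Turn raw classifier scores into the three-way probability
    distribution described above and pick the most likely type."""
    z = np.asarray(logits, dtype=np.float64)
    z = z - z.max()                       # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()   # softmax: probabilities sum to 1
    return labels[int(probs.argmax())], probs
```

For example, `classify_region([2.1, 0.3, -1.2])` would report "transformation zone I" as the most probable type, together with the full probability vector for all three types.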
(4) A labeling unit 304;
The labeling unit 304 is configured to mark the position and type of the identification region on the living body tissue image according to the identification result.
For example, the labeling unit can include an acquisition subunit and a labeling subunit, as follows:
The acquisition subunit is configured to determine the type of the identification region from the identification result and acquire the coordinates of the identification region.
For example, the acquisition subunit may be specifically configured to: determine, according to the identification result, the type, and the confidence of that type, of each recognition frame within a preset range in the identification region; compute the confidence of the preset range from the per-type confidences of the recognition frames by means of a non-maximum suppression algorithm; select the type of the preset range with the highest confidence as the type of the identification region; and acquire the coordinates of the identification region.
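A minimal sketch of the selection logic just described, assuming the per-frame confidences are already grouped by predicted type within the preset range. The data layout is an assumption, and the suppression step is reduced to its essence here (keep each type's maximum, discard non-maxima); a full non-maximum suppression algorithm would also compare frame overlaps:

```python
def select_region_type(frame_scores):
    """frame_scores: dict mapping type name -> list of confidences of the
    recognition frames of that type within the preset range.

    Each type keeps only the maximum of its frames' confidences
    (non-maxima are suppressed); the surviving confidences form the
    confidence of the preset range, and the highest one names the
    identification region's type."""
    range_conf = {t: max(scores) for t, scores in frame_scores.items()}
    best_type = max(range_conf, key=range_conf.get)
    return best_type, range_conf[best_type]
```

For instance, if type "I" frames scored {0.2, 0.9} and type "II" frames scored {0.6, 0.5}, the preset range would be assigned type "I" with confidence 0.9.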
The labeling subunit is configured to mark the position of the identification region on the living body tissue image according to the coordinates, and label the type of the identification region at that position.
For example, again taking type identification of the cervical transformation zone as an example: if an identification region is recognized as "transformation zone type I", the labeling subunit may mark the position of the cervical transformation zone on the colposcopic cervical image and label it "transformation zone type I"; if an identification region is recognized as "transformation zone type II", the labeling subunit may mark the position of the cervical transformation zone on the colposcopic cervical image and label it "transformation zone type II"; similarly, if an identification region is recognized as "transformation zone type III", the labeling subunit may mark the position of the cervical transformation zone on the colposcopic cervical image and label it "transformation zone type III", and so on.
Optionally, during labeling, the labeling subunit may further mark the specific coordinates of the identification region; further, it may mark the prediction probability of the identification result, and of course it may also mark the prediction probability of the identification region.
It should be noted that the region detection model may be trained from a plurality of living body tissue sample images labeled with key features; specifically, it may be provided to the image recognition apparatus after being trained by other devices, or the image recognition apparatus may train it online or offline by itself. That is, as shown in fig. 3b, the image recognition apparatus may further include a first training unit 306, as follows:
The acquisition unit 301 may be further configured to acquire a plurality of living body tissue sample images labeled with key features.
For example, the acquisition unit 301 may be specifically configured to collect a plurality of living body tissue sample images, and label the collected living body tissue sample images by using a neighborhood local typical region labeling method, to obtain a plurality of living body tissue sample images labeled with key features.
The images can be collected in various ways, for example, from the Internet, a specified database, and/or medical records, as determined by the requirements of the practical application; similarly, the labeling manner may be selected according to the requirements of the practical application, for example, manual labeling by labeling reviewers under the guidance of professional doctors, or automatic labeling by a trained labeling model, and so on, which are not described herein again.
The first training unit 306 may be configured to train a preset target detection model according to the living body tissue sample images, to obtain the region detection model.
For example, the first training unit 306 may be specifically configured to: determine, from the plurality of collected living body tissue sample images, the sample image that currently needs to be trained, to obtain a current living body tissue sample image; import the current living body tissue sample image into a preset target detection model for training, to obtain a region prediction value corresponding to the current living body tissue sample image; converge the region prediction value corresponding to the current living body tissue sample image with the labeled key features of the current living body tissue sample image, so as to adjust the parameters of the target detection model; and return to the step of determining, from the plurality of collected living body tissue sample images, the sample image that currently needs to be trained, until all of the plurality of living body tissue sample images have been trained.
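The train / predict / converge / adjust loop just described can be sketched with a deliberately tiny linear model standing in for the target detection model. The model, the squared-error loss, and the learning rate are illustrative assumptions; the embodiment's actual model is a deep network:

```python
import numpy as np

def train_detector(samples, labels, lr=0.1, epochs=50):
    """Minimal stand-in for the training loop above: iterate over the
    labelled sample images, obtain a region prediction from the model,
    converge prediction and label (squared-error loss here), and adjust
    the model parameters, until every sample has been trained."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=samples.shape[1])       # model parameters
    for _ in range(epochs):
        for x, y in zip(samples, labels):       # current sample to train
            pred = x @ w                        # region prediction value
            grad = 2 * (pred - y) * x           # converge prediction & label
            w -= lr * grad                      # adjust model parameters
    return w
```

Repeating the loop drives the prediction values toward the labeled key features, which is exactly the convergence-and-adjustment cycle the first training unit performs at much larger scale.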
The target detection model may be set according to the requirements of the practical application; for example, the target detection model may include a deep residual network and a region proposal network, and the like.
When the target detection model includes a deep residual network and a region proposal network, the first training unit may be specifically configured to import the current living body tissue sample image into the preset deep residual network for computation, to obtain output features corresponding to the current living body tissue sample image, and then import the output features into the region proposal network for detection, to obtain the region prediction value corresponding to the current living body tissue sample image.
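A toy sketch of the two-stage flow just described: a backbone producing output features with a residual shortcut, followed by a proposal step that scores candidate regions. Both functions are drastic simplifications offered only to make the data flow concrete; a real deep residual network and region proposal network involve many layers, anchors, and learned weights:

```python
import numpy as np

def backbone_features(image, conv_kernel):
    """Toy stand-in for the deep residual network: one convolution step
    plus a residual shortcut producing an output feature map."""
    h, w = image.shape
    k = conv_kernel.shape[0]
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + k, j:j + k] * conv_kernel).sum()
    # Residual shortcut: add the (cropped) input back onto the conv output.
    return out + image[:out.shape[0], :out.shape[1]]

def propose_regions(features, threshold):
    """Toy stand-in for the region proposal step: every feature-map cell
    whose activation exceeds the threshold is proposed as a candidate
    region, with the activation serving as its prediction value."""
    ys, xs = np.where(features > threshold)
    return [((int(y), int(x)), float(features[y, x])) for y, x in zip(ys, xs)]
```

Chaining the two functions (image, then features, then proposals) mirrors how the sample image is first imported into the deep residual network and its output features are then imported into the region proposal network.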
It should be noted that, as with detection of the identification region in the living body tissue image, because the specifications of the collected living body tissue sample images, such as size, pixels, and/or color channels, may differ, the collected living body tissue sample images may be preprocessed to normalize them, which facilitates detection by the region detection model and improves the detection effect. Namely:
The preprocessing unit may be further configured to preprocess the living body tissue sample image according to a prediction strategy, where the preprocessing may include image size scaling, color channel order adjustment, pixel adjustment, image normalization, and/or image data arrangement adjustment; for details, refer to the foregoing embodiments, which are not described herein again.
Similarly, the preset region classification model may be trained from a plurality of region sample images labeled with region type features; specifically, it may be provided to the image recognition apparatus after being trained by other devices, or the image recognition apparatus may train it online or offline by itself. That is, optionally, as shown in fig. 3b, the image recognition apparatus may further include a second training unit 307, as follows:
The acquisition unit 301 may further be configured to acquire a plurality of region sample images labeled with region type features.
The second training unit 307 may be configured to train a preset classification model according to the region sample images, to obtain the region classification model.
The region sample images labeled with region type features may be obtained in various manners; for example, either of the following manners may be adopted:
The acquisition unit 301 may be specifically configured to collect a plurality of living body tissue sample images labeled with key features, crop identification regions out of the living body tissue sample images according to the labels to obtain identification region samples, and perform region type feature labeling on the identification region samples to obtain region sample images.
Alternatively, the acquisition unit 301 may be specifically configured to collect a plurality of living body tissue sample images, perform key feature detection on the living body tissue sample images by using a preset region detection model to obtain at least one identification region sample, and perform region type feature labeling on the identification region samples to obtain region sample images.
The region type features can be labeled manually by labeling reviewers under the guidance of professional doctors, or automatically by a trained labeling model, and so on; the labeling rule for the region type features may be determined according to the requirements of the practical application, for example, a rectangular box may be used to label the region type features of the identification region, giving the two-dimensional coordinates and size of the identification region, and so on.
In a specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and the specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
As can be seen from the above, the acquisition unit 301 of the image recognition apparatus in this embodiment can acquire a living body tissue image to be detected; the detection unit 302 then performs key feature detection on the living body tissue image by using a preset region detection model; the identification unit 303 identifies the type of at least one identification region obtained by the detection by using a preset region classification model; and the labeling unit 304 then marks the position and type of the identification region on the living body tissue image according to the identification result for reference by medical staff. In this scheme, the identification region can be accurately located by the trained region detection model, and its type is identified in a targeted manner by the region classification model, so that interference from other regions (i.e., non-identification regions) on type identification is avoided and identification accuracy is improved. In addition, because the region detection model is trained from a plurality of living body tissue sample images labeled only with key features, rather than labeled in their entirety, the difficulty of labeling is greatly reduced and labeling accuracy is improved compared with the existing scheme, which in turn improves the precision of the trained model. In short, this scheme can greatly improve the accuracy of the model and of the recognition, and improve the recognition effect.
Example IV
The embodiment of the invention also provides a network device, which can be specifically a terminal or a server, and the network device can integrate any image recognition device provided by the embodiment of the invention.
For example, as shown in fig. 4, it shows a schematic structural diagram of a network device according to an embodiment of the present invention, specifically:
the network device may include components such as a processor 401 of one or more processing cores, memory 402 of one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the network device architecture shown in fig. 4 does not constitute a limitation of network devices and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. Wherein:
the processor 401 is a control center of the network device, connects various parts of the entire network device by using various interfaces and lines, and performs various functions of the network device and processes data by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the network device. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and data processing by operating the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the network device, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 access to the memory 402.
The network device further includes a power supply 403 for supplying power to the components. Preferably, the power supply 403 is logically connected to the processor 401 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The power supply 403 may further include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
The network device may also include an input unit 404, where the input unit 404 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the network device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 401 in the network device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application program stored in the memory 402, thereby implementing various functions as follows:
the method comprises the steps of collecting a life body tissue image to be detected, detecting key features of the life body tissue image by adopting a preset region detection model to obtain at least one distinguishing region, identifying the type of the distinguishing region by adopting a preset region classification model, and marking the position and the type of the distinguishing region on the life body tissue image according to an identification result.
For example, the type of the identification region may be determined based on the identification result, coordinates of the identification region may be acquired, a position of the identification region may be marked on the living body tissue image based on the coordinates, and the type of the identification region may be marked at the position.
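A dependency-free sketch of this marking step follows. A real system would typically render the box and the type string with OpenCV's `cv2.rectangle` and `cv2.putText`; the array-based drawing here is purely illustrative:

```python
import numpy as np

def mark_region(image, box, label):
    """Draw the identification region's bounding box onto a (grayscale)
    image and return the type label attached to that position.

    box is (x1, y1, x2, y2) in pixel coordinates, as acquired from the
    identification result; the drawn value 255 is an assumption."""
    x1, y1, x2, y2 = box
    marked = image.copy()
    marked[y1:y2 + 1, [x1, x2]] = 255   # vertical edges of the box
    marked[[y1, y2], x1:x2 + 1] = 255   # horizontal edges of the box
    return marked, {"position": box, "type": label}
```

Calling `mark_region` once per identification region reproduces the behavior described above: the position is drawn on the living body tissue image and the type is recorded at that position.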
The region detection model may be trained from a plurality of living body tissue sample images labeled with key features; specifically, it may be provided to the network device after being trained by other devices, or the network device may train it online or offline by itself. That is, the processor 401 may further run the application program stored in the memory 402 to implement the following functions:
Acquiring a plurality of living body tissue sample images labeled with key features, and training a preset target detection model according to the living body tissue sample images, to obtain the region detection model.
The target detection model may be set according to the requirements of the practical application; for example, it may include a deep residual network and a region proposal network, and the like. When the target detection model includes a deep residual network and a region proposal network, during training, the current living body tissue sample image may be imported into the preset deep residual network for computation, to obtain output features corresponding to the current living body tissue sample image, and the output features may then be imported into the region proposal network for detection, to obtain a region prediction value corresponding to the current living body tissue sample image.
The preset region classification model may be trained from a plurality of region sample images labeled with region type features; specifically, it may be provided to the network device after being trained by other devices, or the network device may train it online or offline by itself. That is, the processor 401 may further run the application program stored in the memory 402 to implement the following functions:
Acquiring a plurality of region sample images labeled with region type features, and training a preset classification model according to the region sample images, to obtain the region classification model.
Optionally, because the specifications, such as size, pixels, and color channels, of the acquired living body tissue images or living body tissue sample images may differ, the acquired living body tissue images and living body tissue sample images may be preprocessed to normalize them, which facilitates detection by the region detection model and improves the detection effect. That is, the processor 401 may further run the application program stored in the memory 402 to implement the following functions:
Preprocessing the living body tissue image according to a prediction strategy;
and/or, preprocessing the living body tissue sample image according to a prediction strategy.
The preprocessing may include image size scaling, color channel order adjustment, pixel adjustment, image normalization, and/or image data arrangement adjustment.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
As can be seen from the above, the network device of this embodiment can acquire a living body tissue image to be detected, perform key feature detection on the living body tissue image by using a preset region detection model, identify the type of at least one identification region obtained by the detection by using a preset region classification model, and then mark the position and type of the identification region on the living body tissue image according to the identification result for reference by medical staff. In this scheme, the identification region can be accurately located by the trained region detection model, and its type is identified in a targeted manner by the region classification model, so that interference from other regions (i.e., non-identification regions) on type identification is avoided and identification accuracy is improved. In addition, because the region detection model is trained from a plurality of living body tissue sample images labeled only with key features, rather than labeled in their entirety, the difficulty of labeling is greatly reduced and labeling accuracy is improved compared with the existing scheme, which in turn improves the precision of the trained model. In short, this scheme can greatly improve the accuracy of the model and of the recognition, and improve the recognition effect.
Example V
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present invention provide a storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to execute steps of any one of the image recognition methods provided by the embodiments of the present invention. For example, the instructions may perform the steps of:
the method comprises the steps of collecting a life body tissue image to be detected, detecting key features of the life body tissue image by adopting a preset region detection model to obtain at least one distinguishing region, identifying the type of the distinguishing region by adopting a preset region classification model, and marking the position and the type of the distinguishing region on the life body tissue image according to an identification result.
For example, the type of the identification region may be determined based on the identification result, coordinates of the identification region may be acquired, a position of the identification region may be marked on the living body tissue image based on the coordinates, and the type of the identification region may be marked at the position.
The region detection model may be trained from a plurality of living body tissue sample images labeled with key features; specifically, it may be provided to the network device after being trained by other devices, or the network device may train it online or offline by itself. That is, the instructions may further perform the following steps:
Acquiring a plurality of living body tissue sample images labeled with key features, and training a preset target detection model according to the living body tissue sample images, to obtain the region detection model.
The target detection model may be set according to the requirements of the practical application; for example, it may include a deep residual network and a region proposal network, and the like. When the target detection model includes a deep residual network and a region proposal network, during training, the current living body tissue sample image may be imported into the preset deep residual network for computation, to obtain output features corresponding to the current living body tissue sample image, and the output features may then be imported into the region proposal network for detection, to obtain a region prediction value corresponding to the current living body tissue sample image.
The preset region classification model may be trained from a plurality of region sample images labeled with region type features; specifically, it may be provided to the network device after being trained by other devices, or the network device may train it online or offline by itself. That is, the instructions may further perform the following steps:
Acquiring a plurality of region sample images labeled with region type features, and training a preset classification model according to the region sample images, to obtain the region classification model.
Optionally, because the specifications, such as size, pixels, and color channels, of the acquired living body tissue images or living body tissue sample images may differ, the acquired living body tissue images and living body tissue sample images may be preprocessed to normalize them, which facilitates detection by the region detection model and improves the detection effect. That is, the instructions may further perform the following steps:
Preprocessing the living body tissue image according to a prediction strategy;
and/or, preprocessing the living body tissue sample image according to a prediction strategy.
The preprocessing may include image size scaling, color channel order adjustment, pixel adjustment, image normalization, and/or image data arrangement adjustment.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the instructions stored in the storage medium can execute the steps in any image recognition method provided by the embodiment of the present invention, the beneficial effects that can be achieved by any image recognition method provided by the embodiment of the present invention can be achieved, which are detailed in the foregoing embodiments and will not be described herein again.
The image recognition method, apparatus, and storage medium provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method and core idea of the present invention. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (14)

1. An image recognition method, comprising:
collecting a living body tissue image to be detected;
performing key feature detection on the living body tissue image by using a preset region detection model to obtain at least one distinguishing region, wherein the region detection model is trained on a plurality of living body tissue sample images labeled with key features, and a key feature refers to a salient feature that is unique to the distinguishing region compared with other regions;
identifying the type of the distinguishing region by using a preset region classification model to obtain a recognition result, wherein the recognition result comprises a plurality of recognition frames, each recognition frame corresponds to a plurality of prediction types and the prediction probabilities of those prediction types, and the preset region classification model is trained on a plurality of region sample images labeled with region type features;
determining, according to the plurality of prediction types corresponding to each recognition frame in the recognition result and the prediction probabilities of those prediction types, the type of each recognition frame within a preset range in the distinguishing region and the confidence of that type;
processing the confidences of the types of the recognition frames within the preset range by a non-maximum suppression algorithm to obtain the confidence of the preset range, which comprises: comparing the confidences of the types of the recognition frames within the preset range, keeping the maximum value unchanged, and setting the other, non-maximum values to a minimum value to obtain the confidence of the preset range;
selecting the type of the preset range with the maximum confidence as the type of the distinguishing region, and acquiring the coordinates of the distinguishing region;
and marking the position of the distinguishing region on the living body tissue image according to the coordinates, and marking the type of the distinguishing region at that position.
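The per-range confidence step of claim 1 can be sketched in plain Python. This is an illustrative toy, not the patent's implementation: the function names, the `0.0` floor used as the "minimum value", and the list-of-ranges input format are all assumptions. Within each preset range the maximum confidence keeps its original value, the non-maxima are suppressed to the floor, and the range whose surviving confidence is highest supplies the region's type.

```python
def suppress_non_maxima(confidences, floor=0.0):
    """Keep the maximum confidence unchanged; set every other value to
    `floor`, as in the claimed per-range non-maximum suppression step."""
    if not confidences:
        return []
    peak = max(confidences)
    out, kept = [], False
    for c in confidences:
        if c == peak and not kept:
            out.append(c)   # reserve the original value of the maximum
            kept = True
        else:
            out.append(floor)  # set non-maximum values to a minimum value
    return out

def pick_region_type(ranges):
    """ranges: list of (type_label, [confidences of frames in that range]).
    Returns the type whose suppressed range confidence is highest."""
    best_type, best_conf = None, float("-inf")
    for label, confs in ranges:
        range_conf = max(suppress_non_maxima(confs), default=float("-inf"))
        if range_conf > best_conf:
            best_type, best_conf = label, range_conf
    return best_type, best_conf

# Two candidate ranges with per-frame confidences (hypothetical labels)
result = pick_region_type([("lesion", [0.42, 0.91, 0.30]),
                           ("normal", [0.55, 0.60])])
```

Here `result` is `("lesion", 0.91)`: the "lesion" range survives suppression with the larger confidence, so it becomes the type of the distinguishing region.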
2. The method according to claim 1, wherein before performing key feature detection on the living body tissue image by using the preset region detection model, the method further comprises:
preprocessing the living body tissue image according to a prediction strategy, wherein the preprocessing comprises image size scaling, color channel order adjustment, pixel adjustment, image normalization, and/or image data arrangement adjustment;
and performing key feature detection on the living body tissue image by using the preset region detection model comprises: performing key feature detection on the preprocessed living body tissue image by using the preset region detection model.
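The preprocessing operations enumerated in claim 2 can be illustrated with a dependency-free sketch. The concrete choices below — nearest-neighbour scaling, a BGR-to-RGB channel swap, the mean/scale normalization constants, and the HWC-to-CHW rearrangement — are illustrative assumptions; the claim names the categories of preprocessing, not these specific parameters.

```python
def preprocess(image, size=(224, 224), mean=0.5, scale=0.5):
    """Toy sketch of the claimed preprocessing on a nested-list 'image'
    of [row][col][b, g, r] pixels with values 0-255."""
    h, w = len(image), len(image[0])
    th, tw = size
    # 1. Image size scaling (nearest-neighbour sampling)
    scaled = [[image[int(r * h / th)][int(c * w / tw)] for c in range(tw)]
              for r in range(th)]
    # 2. Color channel order adjustment (BGR -> RGB)
    rgb = [[px[::-1] for px in row] for row in scaled]
    # 3. Pixel adjustment + image normalization: map 0-255 to about [-1, 1]
    normed = [[[(v / 255.0 - mean) / scale for v in px] for px in row]
              for row in rgb]
    # 4. Image data arrangement adjustment (HWC -> CHW)
    return [[[normed[r][c][ch] for c in range(tw)] for r in range(th)]
            for ch in range(3)]
```

For a 2x2 input kept at size `(2, 2)`, a pure-blue BGR pixel `[255, 128, 0]` ends up with its red channel first and its values rescaled to the `[-1, 1]` range.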
3. The method according to claim 1, wherein before performing key feature detection on the living body tissue image by using the preset region detection model, the method further comprises:
collecting a plurality of living body tissue sample images labeled with key features;
and training a preset target detection model with the living body tissue sample images to obtain the region detection model.
4. The method according to claim 3, wherein collecting a plurality of living body tissue sample images labeled with key features comprises:
collecting a plurality of living body tissue sample images;
and labeling the collected plurality of living body tissue sample images by a neighborhood local typical region labeling method to obtain the plurality of living body tissue sample images labeled with key features.
5. The method according to claim 3, wherein training a preset target detection model with the living body tissue sample images comprises:
determining, from the collected plurality of living body tissue sample images, a living body tissue sample image that currently needs to be trained, to obtain a current living body tissue sample image;
importing the current living body tissue sample image into the preset target detection model for training to obtain a region predicted value corresponding to the current living body tissue sample image;
converging the region predicted value corresponding to the current living body tissue sample image with the labeled key feature of the current living body tissue sample image so as to adjust the parameters of the target detection model;
and returning to the step of determining, from the collected plurality of living body tissue sample images, the living body tissue sample image that currently needs to be trained, until all of the plurality of living body tissue sample images have been trained.
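The training procedure of claim 5 — predict on the current sample, converge the prediction toward the label to adjust the model parameters, then move to the next sample — has the shape of an ordinary gradient-descent loop. The sketch below uses a one-parameter linear "model" and a squared-error convergence step purely for illustration; the patent's actual detection network, loss, and learning rate are not specified here.

```python
def train(samples, lr=0.1, epochs=50):
    """Toy stand-in for the claimed loop: each pass takes the current
    sample, computes a predicted value, and adjusts the single model
    parameter to converge the prediction toward the labeled value."""
    w = 0.0  # the model's sole parameter (illustrative)
    for _ in range(epochs):
        for x, label in samples:            # current sample to be trained
            pred = w * x                    # region predicted value
            grad = 2 * (pred - label) * x   # squared-error convergence step
            w -= lr * grad                  # adjust model parameter
    return w

# Samples whose labeled value is exactly twice the input,
# so the parameter should converge to 2.0
weights = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```

After the loop the parameter sits at 2.0, mirroring how repeated converge-and-return passes over the sample set drive the model's predictions toward the labeled key features.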
6. The method according to claim 5, wherein the target detection model comprises a deep residual network and a region proposal network, and importing the current living body tissue sample image into the preset target detection model for training to obtain a region predicted value corresponding to the current living body tissue sample image comprises:
importing the current living body tissue sample image into a preset deep residual network for computation to obtain an output feature corresponding to the current living body tissue sample image;
and importing the output feature into a region proposal network for detection to obtain the region predicted value corresponding to the current living body tissue sample image.
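The two-stage flow of claim 6 can be sketched as a composition of two functions: a backbone produces a feature map from the image, and a proposal stage scores feature-map cells and keeps the high-scoring ones as region predictions. Both stages below are toy stand-ins (average pooling and thresholding), not a deep residual network or a real region proposal network; the cell size and threshold are assumptions.

```python
def backbone(image, cell=2):
    """Stand-in for the feature-extraction stage: average-pool the
    image (a nested list of floats) into a coarse feature map."""
    h, w = len(image), len(image[0])
    return [[sum(image[r + dr][c + dc] for dr in range(cell)
                 for dc in range(cell)) / (cell * cell)
             for c in range(0, w, cell)]
            for r in range(0, h, cell)]

def region_proposals(features, threshold=0.5):
    """Stand-in for the proposal stage: return (row, col, score) for
    every feature-map cell whose response exceeds the threshold."""
    return [(r, c, v) for r, row in enumerate(features)
            for c, v in enumerate(row) if v > threshold]

image = [[0.0, 0.0, 1.0, 1.0],
         [0.0, 0.0, 1.0, 1.0],
         [0.0, 0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0]]
proposals = region_proposals(backbone(image))
```

Only the top-right cell of the pooled feature map responds strongly, so `proposals` contains a single candidate at feature coordinate `(0, 1)` — the image is first reduced to output features, and the proposals are then read off those features, matching the order of the two importing steps in the claim.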
7. The method according to claim 1, wherein before identifying the type of the distinguishing region by using the preset region classification model, the method further comprises:
acquiring a plurality of region sample images labeled with region type features;
and training a preset classification model with the region sample images to obtain the region classification model.
8. The method according to claim 7, wherein acquiring a plurality of region sample images labeled with region type features comprises:
collecting a plurality of living body tissue sample images labeled with key features;
intercepting distinguishing regions from the living body tissue sample images according to the labels to obtain distinguishing region samples;
and performing region type feature labeling on the distinguishing region samples to obtain the region sample images.
9. The method according to claim 7, wherein acquiring a plurality of region sample images labeled with region type features comprises:
collecting a plurality of living body tissue sample images;
performing key feature detection on the living body tissue sample images by using the preset region detection model to obtain at least one distinguishing region sample;
and performing region type feature labeling on the distinguishing region samples to obtain the region sample images.
10. An image recognition apparatus, characterized by comprising:
a collection unit, configured to collect a living body tissue image to be detected;
a detection unit, configured to perform key feature detection on the living body tissue image by using a preset region detection model to obtain at least one distinguishing region, wherein the region detection model is trained on a plurality of living body tissue sample images labeled with key features, and a key feature refers to a salient feature that is unique to the distinguishing region compared with other regions;
a recognition unit, configured to identify the type of the distinguishing region by using a preset region classification model to obtain a recognition result, wherein the recognition result comprises a plurality of recognition frames, each recognition frame corresponds to a plurality of prediction types and the prediction probabilities of those prediction types, and the preset region classification model is trained on a plurality of region sample images labeled with region type features;
and a marking unit, configured to determine, according to the plurality of prediction types corresponding to each recognition frame in the recognition result and the prediction probabilities of those prediction types, the type of each recognition frame within a preset range in the distinguishing region and the confidence of that type; process the confidences of the types of the recognition frames within the preset range by a non-maximum suppression algorithm to obtain the confidence of the preset range, which comprises: comparing the confidences of the types of the recognition frames within the preset range, keeping the maximum value unchanged, and setting the other, non-maximum values to a minimum value to obtain the confidence of the preset range; select the type of the preset range with the maximum confidence as the type of the distinguishing region, and acquire the coordinates of the distinguishing region; and mark the position of the distinguishing region on the living body tissue image according to the coordinates, and mark the type of the distinguishing region at that position.
11. The apparatus according to claim 10, further comprising a preprocessing unit;
the preprocessing unit is configured to preprocess the living body tissue image according to a prediction strategy, wherein the preprocessing comprises image size scaling, color channel order adjustment, pixel adjustment, image normalization, and/or image data arrangement adjustment;
and the detection unit is specifically configured to perform key feature detection on the preprocessed living body tissue image by using the preset region detection model.
12. The apparatus according to claim 10 or 11, further comprising a first training unit;
the collection unit is further configured to collect a plurality of living body tissue sample images labeled with key features;
and the first training unit is configured to train a preset target detection model with the living body tissue sample images to obtain the region detection model.
13. The apparatus according to claim 10 or 11, further comprising a second training unit;
the collection unit is further configured to acquire a plurality of region sample images labeled with region type features;
and the second training unit is configured to train a preset classification model with the region sample images to obtain the region classification model.
14. A storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the image recognition method according to any one of claims 1 to 9.
CN201810724219.4A 2018-07-04 2018-07-04 Image recognition method, device and storage medium Active CN109002846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810724219.4A CN109002846B (en) 2018-07-04 2018-07-04 Image recognition method, device and storage medium


Publications (2)

Publication Number Publication Date
CN109002846A CN109002846A (en) 2018-12-14
CN109002846B true CN109002846B (en) 2022-09-27

Family

ID=64598747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810724219.4A Active CN109002846B (en) 2018-07-04 2018-07-04 Image recognition method, device and storage medium

Country Status (1)

Country Link
CN (1) CN109002846B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136809B (en) 2019-05-22 2022-12-27 腾讯科技(深圳)有限公司 Medical image processing method and device, electronic medical equipment and storage medium
CN112147348A (en) * 2019-06-28 2020-12-29 深圳迈瑞生物医疗电子股份有限公司 Sample analyzer and sample testing application method
CN110517256B (en) * 2019-08-30 2022-02-15 重庆大学附属肿瘤医院 Early cancer auxiliary diagnosis system based on artificial intelligence
CN110737785B (en) * 2019-09-10 2022-11-08 华为技术有限公司 Picture labeling method and device
CN110613417A (en) * 2019-09-24 2019-12-27 浙江同花顺智能科技有限公司 Method, equipment and storage medium for outputting upper digestion endoscope operation information
CN112287772B (en) * 2020-10-10 2023-02-10 深圳市中达瑞和科技有限公司 Fingerprint trace detection method, fingerprint detection device and computer readable storage medium
CN116912247A (en) * 2023-09-13 2023-10-20 威海市博华医疗设备有限公司 Medical image processing method and device, storage medium and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105528607B (en) * 2015-10-30 2019-02-15 小米科技有限责任公司 Method for extracting region, model training method and device
CN106778005B (en) * 2016-12-27 2019-06-07 中南民族大学 Prostate cancer computer-aided detection system based on multi-parameter MRI
CN107895367B (en) * 2017-11-14 2021-11-30 中国科学院深圳先进技术研究院 Bone age identification method and system and electronic equipment
CN107945173B (en) * 2017-12-11 2022-05-24 深圳市宜远智能科技有限公司 Skin disease detection method and system based on deep learning
CN109190540B (en) * 2018-06-06 2020-03-17 腾讯科技(深圳)有限公司 Biopsy region prediction method, image recognition device, and storage medium



Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210924

Address after: 518052 Room 201, building A, 1 front Bay Road, Shenzhen Qianhai cooperation zone, Shenzhen, Guangdong

Applicant after: Tencent Medical Health (Shenzhen) Co.,Ltd.

Address before: 518057 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 floors

Applicant before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant