CN113902827B - System and method for predicting effect after healing of skin disease and electronic equipment - Google Patents


Info

Publication number
CN113902827B
CN113902827B (application CN202111454162.9A)
Authority
CN
China
Prior art keywords
mask
segmentation
skin
network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111454162.9A
Other languages
Chinese (zh)
Other versions
CN113902827A (en)
Inventor
王璘
杨志文
贺婉佶
王欣
琚烈
戈宗元
王斌
赵昕
和超
陈羽中
张大磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eaglevision Medical Technology Co Ltd
Beijing Airdoc Technology Co Ltd
Original Assignee
Shanghai Eaglevision Medical Technology Co Ltd
Beijing Airdoc Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eaglevision Medical Technology Co Ltd and Beijing Airdoc Technology Co Ltd
Priority to CN202111454162.9A
Publication of CN113902827A
Application granted
Publication of CN113902827B
Legal status: Active

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL. Leaf classifications:
    • G06T 11/001 Texturing; colouring; generation of texture or colour (under G06T 11/00, 2D image generation)
    • G06T 5/30 Erosion or dilatation, e.g. thinning (under G06T 5/20, image enhancement or restoration using local operators)
    • G06T 7/0012 Biomedical image inspection (under G06T 7/0002, inspection of images)
    • G06T 7/11 Region-based segmentation
    • G06T 7/136 Segmentation; edge detection involving thresholding
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/10024 Color image (image acquisition modality)
    • G06T 2207/20081 Training; learning (special algorithmic details)
    • G06T 2207/30088 Skin; dermal (biomedical image processing)


Abstract

The invention provides a system, a method, and an electronic device for predicting the post-cure effect of a skin disease. The system comprises: a skin region segmentation module comprising a threshold segmentation unit and a first segmentation network, wherein the threshold segmentation unit performs threshold segmentation on an input image to obtain a skin region reference mask, and the first segmentation network performs segmentation using the skin region reference mask together with the input image to obtain a skin region mask; a lesion segmentation module comprising a second segmentation network, which performs segmentation using the skin region mask together with the input image to obtain a lesion region mask; a region determination module, which determines a region-to-fill mask according to a preset fill ratio and the lesion region mask; and a completion module comprising a generative adversarial network, which generates a post-cure effect map from the region-to-fill mask and the input image. The invention can generate post-cure effect maps for skin diseases more efficiently and more realistically.

Description

System and method for predicting effect after healing of skin disease and electronic equipment
Technical Field
The present invention relates to the field of deep learning, and in particular to deep-learning-based image processing; more particularly, it relates to a system, a method, and an electronic device for predicting the post-cure effect of a skin disease.
Background
Psoriasis is a chronic inflammatory skin disease characterized by abnormal patches of skin on the body. These patches are typically red, itchy, and scaly; mild cases present as localized patches of lesions, while severe cases can affect the skin of the whole body. Psoriasis can be divided into five major categories: plaque, guttate, inverse, pustular, and erythrodermic. Plaque psoriasis is the most common type, accounting for about 90% of patients; the affected area usually appears as a red patch covered with white scales. The course of psoriasis is long, it mainly affects young adults, it tends to relapse, and some cases are never fully cured.
Similarly, other skin diseases, such as vitiligo and lupus erythematosus, share the characteristics of a long course of treatment and a large impact on patients' physical and mental health. In treating these long-course skin diseases, it is desirable to provide an auxiliary means that can show the expected healing effect, so as to motivate patients and strengthen their confidence in treatment. Showing a post-cure effect picture is a particularly intuitive way to do this: presenting such a picture to the patient can improve treatment confidence and reinforce the will to continue treatment.
However, there is currently no technology that can provide a post-cure effect map for skin diseases both efficiently and realistically.
Disclosure of Invention
Therefore, the present invention aims to overcome the above-mentioned drawbacks of the prior art and to provide a system, a method, and an electronic device for predicting the post-cure effect of a skin disease.
The purpose of the invention is realized by the following technical scheme:
according to a first aspect of the present invention, there is provided a system for predicting a post-cure effect on a skin disease, comprising: the skin region segmentation module comprises a threshold segmentation unit and a first segmentation network, wherein the threshold segmentation unit is used for carrying out threshold segmentation on an input image to obtain a skin region reference mask, the input image is an image containing a skin disease focus, and the first segmentation network is used for carrying out segmentation processing by utilizing the skin region reference mask and the input image to obtain a skin region mask; the focus segmentation module comprises a second segmentation network, and the second segmentation network is used for carrying out segmentation processing by using the skin region mask and the input image to obtain a focus region mask; the area determining module is used for determining a mask of an area to be filled according to a preset filling proportion and the mask of the focus area; and the completion module comprises a generation countermeasure network, and the generation countermeasure network is used for generating a post-cure effect map according to the mask of the area to be filled and the input image.
In some embodiments of the present invention, the threshold segmentation unit generates a YCrCb color space image based on the input image, and performs threshold segmentation on chrominance channels in the YCrCb color space image according to a predetermined segmentation threshold range to obtain the skin region reference mask.
In some embodiments of the present invention, the first segmentation network is a U-shaped segmentation network and is iteratively trained as follows: acquiring a first training set and skin region reference masks, wherein the first training set comprises a plurality of image samples and, for each image sample, a first label indicating whether each pixel of that image sample belongs to the skin region; each image sample has a corresponding skin region reference mask, which is a reference obtained by threshold segmentation indicating whether each pixel of the image sample belongs to the skin region; training the first segmentation network to perform segmentation using each image sample together with its skin region reference mask and to output a skin region mask; and computing a first cross-entropy loss between the output skin region mask and the corresponding first label, and updating the parameters of the first segmentation network according to the first cross-entropy loss.
In some embodiments of the present invention, the second segmentation network is a segmentation model combining a ResNet network with a U-shaped segmentation network, and is iteratively trained as follows: acquiring the first training set and the corresponding skin region masks, wherein the first training set further comprises, for each image sample, a second label indicating whether each pixel of that image sample belongs to a lesion region; training the second segmentation network to perform segmentation using each image sample together with its skin region mask and to output a lesion region mask; and computing a second cross-entropy loss between the output lesion region mask and the corresponding second label, and updating the parameters of the second segmentation network according to the second cross-entropy loss.
In some embodiments of the invention, when computing the second cross-entropy loss between the output lesion region mask and the corresponding second label, the skin region mask is used to ignore the cross-entropy loss of non-skin regions.
In some embodiments of the invention, the region determination module is configured to: determine the lesion area according to the number of pixels belonging to the lesion region in the lesion region mask corresponding to the input image; and erode the lesion region with an erosion kernel until the ratio of the eroded lesion area to the total lesion area reaches the preset fill ratio, thereby obtaining the region-to-fill mask.
In some embodiments of the invention, the generative adversarial network comprises a generator network and a discriminator, and is trained adversarially as follows: acquiring a second training set comprising a plurality of lesion-free original images, versions of these originals with random regions painted out, and paint-region masks marking the randomly painted positions, wherein the paint-region mask simulates the region-to-fill mask; training the generator network to produce a completion map from the paint-region mask and the randomly painted image; training the discriminator to judge a realness score for each pixel of the completion map against the corresponding original image; and optimizing the parameters of the generator network based on the L1 loss between the completion map and the corresponding original image together with the realness score of the completion map.
In some embodiments of the invention, the skin disorder is psoriasis, vitiligo or lupus erythematosus.
According to a second aspect of the present invention, there is provided a method of generating a post-cure effect map for a skin disease, comprising: acquiring an image of the affected part of a skin disease patient as an input image, and generating the corresponding post-cure effect map of the input image using the system of the first aspect.
According to a third aspect of the invention, there is provided an electronic device comprising: one or more processors; and a memory storing executable instructions; wherein the one or more processors are configured to implement the method of the second aspect by executing the executable instructions.
Drawings
Embodiments of the invention are further described below with reference to the accompanying drawings, in which:
FIG. 1 is a block diagram illustrating the connection of modules of a system for predicting the effect of skin diseases after healing according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a system for predicting a post-cure effect of a skin disorder according to an embodiment of the present invention to obtain a skin area mask;
FIG. 3 is a diagram illustrating a training process of a second segmentation network in the system for predicting the effect of skin diseases after healing according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating the results of segmenting lesions with a second segmentation network in a post-cure outcome prediction system for skin diseases according to an embodiment of the present invention;
fig. 5 shows post-cure effect maps predicted for psoriasis patients by the system for predicting the post-cure effect of a skin disease according to an embodiment of the invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below through embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative and are not intended to limit the invention.
As mentioned in the Background section, for patients who lack confidence, it is desirable to provide an auxiliary means of stimulating or strengthening their belief in treatment, but no existing technology can efficiently and realistically produce a post-cure image of a skin disease; such technology is especially needed for patients with long treatment cycles. In view of this, the inventors studied the problem and made several improvements using deep learning techniques. In summary, the invention comprises two main parts: segmenting the lesion region, and filling the lesion region with normal skin. Specifically, the inventors made improvements in four aspects, namely skin region segmentation, lesion segmentation, fill-region determination, and lesion region completion, so as to better predict the post-cure effect and strengthen patients' confidence. On this basis, the inventors designed a system for predicting the post-cure effect of a skin disease: when treating a patient with a suitable skin disease, a doctor can take an image of the affected part, input it into the system of the present invention as the input image, and obtain the predicted post-cure effect map. The doctor can show this map to the patient on the spot to strengthen the patient's confidence in treatment.
The following description is made with reference to fig. 1 for the modules corresponding to these four parts:
(I) Skin region segmentation module
The skin has characteristic color features and can therefore be pre-extracted by conventional image processing. However, conventional image processing is weaker than deep neural networks at understanding semantics and the relations between pixels. On the other hand, ready-made datasets of skin disease images and their healed counterparts are hard to obtain, and only a small self-made dataset is available; with such limited training data, a deep neural network is more prone to overfitting than conventional methods.
Thus, referring to fig. 1, the present invention combines threshold segmentation with a neural network to improve skin region segmentation performance with a limited number of training samples.
According to an embodiment of the present invention, the skin region segmentation module includes a threshold segmentation unit and a first segmentation network. The threshold segmentation unit is configured to perform threshold segmentation on an input image (an image containing a skin disease lesion) to obtain a skin region reference mask, and the first segmentation network is configured to perform segmentation using the skin region reference mask and the input image to obtain the skin region mask. Preferably, the first segmentation network segments the image formed by superimposing the skin region reference mask and the input image in the channel dimension to obtain the skin region mask.
According to an embodiment of the invention, referring to fig. 2, the skin region segmentation module operates according to the following steps:
k1, converting the input image from an original RGB color space into a YCrCb color space to obtain a YCrCb color space image, wherein Y represents a brightness channel, and CrCb represents a chrominance channel and is used for describing color and saturation;
k2, according to a preset segmentation threshold range (also called a preset skin color range), carrying out threshold segmentation on a chrominance channel (CrCb) to obtain a skin area reference mask (mask); judging whether the value of the corresponding pixel point in the chrominance channel belongs to a preset segmentation threshold range, if so, setting the value of the pixel point in the skin region reference mask as 1, otherwise, setting the value of the pixel point as 0;
k3, the skin region reference mask and the input image are input into the trained first segmentation network (also referred to as a skin segmentation model), and a final skin region mask (also referred to as a skin segmentation region) is obtained.
According to one embodiment of the invention, the first segmentation network in the skin region segmentation module uses, for example, a U-Net network. Other segmentation networks, such as U-Net++, may also be suitable.
According to one embodiment of the invention, a first training set is used to train the first segmentation network. The first training set may be generated as follows: acquire affected-part images of a plurality of patients as image samples, and label whether each pixel in each image sample belongs to the skin region or a non-skin region (such as background or clothing) to obtain the first label. For example, a value of 0 at a pixel position in the first label indicates that the corresponding pixel belongs to a non-skin region, and a value of 1 indicates that it belongs to the skin region.
According to an embodiment of the present invention, the first segmentation network is trained with the labeled skin segmentation label (i.e., the first label) as the target, the skin region reference mask and the image sample as the model input, and cross entropy as the loss function. The skin region reference mask and the image sample may be superimposed before segmentation; preferably, the first segmentation network is iteratively trained as follows: acquire an image sample and its skin region reference mask, and superimpose them in the channel dimension to obtain a superimposed image (for example, the RGB input image has 3 channels and the skin region reference mask has 1 channel, giving a 4-channel superimposed image); train the first segmentation network to segment the superimposed image and output a skin region mask; compute the first cross-entropy loss between the output skin region mask and the corresponding first label, compute gradients from this loss, and back-propagate to update the parameters of the first segmentation network until it converges. The trained first segmentation network is then deployed in the skin region segmentation module of the system of the present invention.
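The channel-dimension superposition described above can be sketched as follows; the network itself is omitted, and the channel-last (H, W, C) layout is an assumption for illustration, since the patent does not fix a tensor layout.

```python
import numpy as np

def stack_network_input(image_rgb, reference_mask):
    """Superimpose the RGB image (H, W, 3) and the skin region reference
    mask (H, W) in the channel dimension, yielding the 4-channel input
    consumed by the first segmentation network."""
    assert image_rgb.ndim == 3 and image_rgb.shape[-1] == 3
    assert reference_mask.shape == image_rgb.shape[:2]
    mask = reference_mask[..., None].astype(image_rgb.dtype)  # (H, W, 1)
    return np.concatenate([image_rgb, mask], axis=-1)         # (H, W, 4)
```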
By segmenting the skin in advance, the invention reduces irrelevant interference in the subsequent lesion segmentation, improves segmentation accuracy, and is well suited to small datasets.
(II) Lesion segmentation module
In some embodiments of the present invention, the lesion segmentation module comprises a second segmentation network, which may be a segmentation model combining a ResNet network with a U-shaped segmentation network, in which the downsampling part of the U-shaped network is replaced by the ResNet network. Preferably, the downsampling part of a U-Net network is replaced by a ResNet50 network. The ResNet50 network has good feature extraction capability and is widely used as a backbone for feature extraction. The U-Net network is a classical, thoroughly validated U-shaped segmentation network widely applied in medical image segmentation. The invention therefore combines the two as the lesion segmentation model to obtain better lesion segmentation. Other segmentation models of similar function may, however, also be used to segment the lesion.
According to an embodiment of the present invention, training the second segmentation network that combines a ResNet network with a U-shaped segmentation network uses a second label indicating the lesion and non-lesion regions of the corresponding image sample. Therefore, when preparing the first training set, a second label must be added for each image sample: after taking the affected-part images as image samples, each pixel in each image sample is labeled as belonging to the lesion region or a non-lesion region. A value of 0 at a pixel position in the second label indicates that the corresponding pixel belongs to a non-lesion region; a value of 1 indicates that it belongs to the lesion region. Because different skin diseases have different external manifestations, to guarantee model accuracy a corresponding first training set may be prepared separately for psoriasis, vitiligo, and lupus erythematosus, and a corresponding second segmentation network trained for each.
According to an embodiment of the invention, when training the second segmentation network combining the ResNet network and the U-shaped segmentation network, the image sample and its skin region mask are acquired, and the second segmentation network is trained to perform segmentation using them and to output a lesion region mask. For example, the image sample is multiplied element-wise by its skin region mask, the product image is input into the second segmentation network for segmentation, and a lesion region mask is output; the second cross-entropy loss is then computed between the output lesion region mask and the corresponding second label, gradients are computed from this loss, and the parameters of the lesion segmentation model are updated by back-propagation until the second segmentation network converges. The trained second segmentation network is deployed in the lesion segmentation module of the system of the present invention. Preferably, when computing the second cross-entropy loss, the skin region mask is used to ignore the cross-entropy loss of non-skin regions. Referring to fig. 3, the skin region mask is also input when computing the loss, and the loss of each pixel is multiplied by the value at the corresponding position of the skin region mask; since non-skin positions have value 0, their cross-entropy loss is ignored. The trained second segmentation network thus concentrates on classification within the skin region, improving lesion recognition accuracy.
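The masked loss of fig. 3 can be sketched as follows, assuming a per-pixel sigmoid output (the patent does not specify the output activation). Multiplying each pixel's cross-entropy by the skin region mask zeroes the contribution of non-skin pixels, exactly as described above.

```python
import numpy as np

def masked_bce(pred, label, skin_mask, eps=1e-7):
    """Per-pixel binary cross-entropy in which non-skin pixels
    (skin_mask == 0) contribute zero loss; averaged over skin pixels."""
    pred = np.clip(pred, eps, 1.0 - eps)
    ce = -(label * np.log(pred) + (1 - label) * np.log(1 - pred))
    ce = ce * skin_mask                 # zero out non-skin positions
    denom = max(skin_mask.sum(), 1)
    return ce.sum() / denom
```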
The inventors carried out experiments on psoriasis lesion segmentation using a segmentation model in which the downsampling part of a U-Net network is replaced by a ResNet50 network, with 600 labeled images, of which 450 were used for training and 150 for testing. Using the mean Intersection over Union (mean IoU) as the segmentation metric, the model achieved 74.7% on the test set. Referring to fig. 4, the left image is an input image taken of a psoriasis patient, and the right image shows the segmentation result overlaid on the input image for visualization; the gray-white regions are the segmented psoriasis lesion regions.
(III) Region determination module
To show the predicted effect map as symptoms subside, the part of the lesion region to be filled with normal skin must be determined; the fraction of the lesion region that is filled serves as a control parameter.
According to one embodiment of the invention, the region determination module determines the region to fill according to the following steps:
t1, determining the area of the focus area according to the number of the pixel points;
t2, setting a filling proportion beta;
t3, eroding the lesion area from the outer edge of each lesion (e.g., one pixel at a time and one revolution around the lesion) using a size 1 erosion processing Kernel (Kernel);
t4, judgment: and (3) whether the area of the corroded focus is not less than or equal to beta is satisfied, if so, stopping the corrosion operation, and if the corroded area is the area to be filled (the area is the area of the corroded focus), obtaining a mask of the area to be filled, otherwise, turning to T3. For example, in the mask of the area to be filled, 0 indicates that the pixel at the pixel position is not to be filled, and 1 indicates that the pixel at the pixel position is to be filled. The process can simply and efficiently simulate the state that the focus is reduced from large to small when the skin disease is cured. Alternatively, it is also possible to judge: 1-whether the remaining area of the focus after corrosion is divided by the total area of the focus area to be more than or equal to beta is satisfied, and equivalently judging whether the ratio of the area of the corroded focus to the total area of the focus area reaches a preset filling proportion.
It should be understood that other ways of determining the region to fill are possible: for example, using a 2x2 erosion kernel; or dividing the lesion regions into two parts, eroding one part from the inside and the other from the outer edge.
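The T1-T4 loop can be sketched as follows. The 4-connected (cross-shaped) structuring element is an assumption standing in for the patent's "size 1 erosion kernel"; it peels one boundary pixel per iteration, which matches the described behaviour.

```python
import numpy as np

def erode_once(mask):
    """One binary erosion step with a 3x3 cross: a pixel stays 1 only if
    it and its four neighbours are all 1 (peels one boundary layer)."""
    p = np.pad(mask, 1)
    return (mask & p[:-2, 1:-1] & p[2:, 1:-1]
                 & p[1:-1, :-2] & p[1:-1, 2:]).astype(np.uint8)

def fill_region_mask(lesion_mask, beta):
    """Erode the lesion mask (uint8, 0/1) until the eroded fraction of the
    original lesion area reaches the fill ratio beta (steps T1-T4); the
    removed pixels form the region-to-fill mask."""
    area = int(lesion_mask.sum())
    eroded = lesion_mask.copy()
    while area and (area - eroded.sum()) / area < beta:
        nxt = erode_once(eroded)
        if nxt.sum() == eroded.sum():        # guard: no further progress
            eroded = np.zeros_like(eroded)
            break
        eroded = nxt
    return (lesion_mask * (1 - eroded)).astype(np.uint8)  # 1 = fill here
```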
(IV) Completion module
Many methods can fill the lesion region. For example, machine learning or image processing methods can manually design features that adjust the image so the lesion fades, but the results are often unnatural, so the final post-cure effect map is not realistic enough. The invention therefore fills the lesion region using a deep-learning-based image completion (inpainting) method.
In some embodiments of the invention, the completion module uses a generative adversarial network as the lesion completion model, comprising a generator and a discriminator (also called a discrimination network). The generator's input comprises the original image (i.e., the input image) and the mask to fill (i.e., the region-to-fill mask), and its output is the filled picture (i.e., the post-cure result image). Before deployment, the generator and discriminator are trained adversarially: as the realism of the generator's images improves, so does the discriminator's ability to judge whether each pixel of a generated picture is real.
According to one embodiment of the invention, the generative adversarial network is trained with a second training set. Because collecting a large number of recovery images at different stages and making them into a standard dataset is difficult, the applicant adopted an alternative: collect a number of lesion-free sample images as originals, use each original as the label, and use a version of the original with part of a skin region randomly painted out as the training sample, thereby quickly obtaining the required training set. Preferably, the second training set comprises a plurality of lesion-free original images, the randomly painted images, and paint-region masks marking the painted positions, where the painted region lies within the skin region and the paint-region mask simulates the region-to-fill mask. For example, a lesion-free sample image is used as an original, a circle is drawn in its skin region, and the circled region is painted out (e.g., in white); a paint-region mask is formed from the circle, where 0 denotes an unpainted region and 1 a painted region.
According to one embodiment of the invention, the generative adversarial network is trained as follows: acquire the second training set, which comprises a plurality of original images, the randomly smeared images, and the smear-region masks marking the random smear positions; train the generation network to produce a completed image from a smear-region mask and the corresponding smeared image; train the discriminator to score the authenticity of each pixel of the completed image against the corresponding original image; and optimize the parameters of the generation network based on the L1 loss between the completed image and the corresponding original image together with the authenticity score of the completed image, thereby raising the average authenticity score over all pixels of subsequently generated completions. A GAN obtained through such adversarial training produces a more realistic filling effect in the filled region. Training iterates in this adversarial fashion until the discriminator can no longer tell whether the pixels of a generated picture are real. Generating a completed image, i.e., restoring the randomly smeared positions and predicting the original appearance of the image, is equivalent to simulating the prediction of the post-cure effect. Preferably, training may follow the method of Free-Form Image Inpainting with Gated Convolution, in which the discriminator is a spectral-normalized Markovian discriminator (SN-PatchGAN) combined with gated convolution, so that the convolutions attend to valid pixels without being disturbed by the pixels to be filled.
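The losses in the training procedure above can be sketched as follows. This is a minimal numpy illustration assuming SN-PatchGAN-style per-pixel hinge losses, as used in the Free-Form Image Inpainting formulation; the function names and the unit L1 weight are assumptions, not the patent's exact objective.

```python
import numpy as np

def generator_loss(completed: np.ndarray, original: np.ndarray,
                   fake_scores: np.ndarray, l1_weight: float = 1.0) -> float:
    """Generator objective: L1 reconstruction against the lesion-free
    original plus an adversarial term that raises the mean per-pixel
    authenticity score assigned by the discriminator."""
    l1 = np.abs(completed - original).mean()
    adv = -fake_scores.mean()        # higher score = judged more real
    return l1_weight * l1 + adv

def discriminator_loss(real_scores: np.ndarray,
                       fake_scores: np.ndarray) -> float:
    """Per-pixel hinge loss of the Markovian (patch) discriminator:
    push real-image scores above +1 and completed-image scores below -1."""
    return (np.maximum(0.0, 1.0 - real_scores).mean()
            + np.maximum(0.0, 1.0 + fake_scores).mean())
```

Alternating gradient steps on these two objectives is the iterate-and-compete loop described in the text: the generator lowers its loss by making completions the discriminator scores as real, while the discriminator lowers its loss by telling them apart pixel by pixel.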
Based on the above structure, the invention realizes, through the system, a method for visualizing the expected improvement of the skin after treatment. It can intuitively display the expected degree of remission, which helps strengthen patients' confidence; the prediction is efficient and realistic, and a doctor can show it to the patient on the spot, thereby establishing or reinforcing the patient's belief in the treatment. Figure 5 shows predicted post-cure effect maps for two psoriasis patients at three fill-in proportions: 50%, 75%, and 100%.
There is also provided, in accordance with an embodiment of the present invention, a method for generating a post-cure effect map of a skin disease, comprising: acquiring an affected-area image of a skin disease patient as an input image, and generating the corresponding post-cure effect map using the post-cure effect prediction system of the foregoing embodiments. For example, a doctor photographs the affected area of a psoriasis patient with a mobile phone and transmits the image to a prediction system prepared for psoriasis; the system generates a post-cure effect map for the affected-area image and returns it to the doctor's phone, and the doctor shows it to the patient to strengthen the patient's confidence. Alternatively, a dedicated electronic device may be built that integrates a camera with the post-cure effect prediction system of the present invention, so that the affected-area image captured by the camera is processed by the system and the post-cure effect map is output directly.
Furthermore, although the above systems and methods were originally developed for skin diseases with a longer course of treatment, for skin diseases of short duration a corresponding system can likewise be prepared in advance according to the invention, and a post-cure effect map can still be generated for them.
It should be noted that, although the steps are described in a specific order, the steps are not necessarily performed in the specific order, and in fact, some of the steps may be performed concurrently or even in a changed order as long as the required functions are achieved.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that retains and stores instructions for use by an instruction execution device. The computer readable storage medium may include, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing.
Having described embodiments of the present invention, the foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies found in the marketplace, and to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (9)

1. A system for predicting a post-cure effect on a skin disorder, comprising:
the skin region segmentation module comprises a threshold segmentation unit and a first segmentation network, wherein the threshold segmentation unit is used for carrying out threshold segmentation on an input image to obtain a skin region reference mask, the input image is an image containing a skin disease focus, and the first segmentation network is used for carrying out segmentation processing by utilizing the skin region reference mask and the input image to obtain a skin region mask;
the lesion segmentation module comprises a second segmentation network, and the second segmentation network is used for carrying out segmentation processing by using the skin region mask and the input image to obtain a lesion region mask;
the region determination module is used for determining a mask of a region to be filled according to a preset filling proportion and the lesion region mask;
a completion module, which comprises a generative adversarial network, wherein the generative adversarial network is used for generating a post-cure effect map according to the mask of the region to be filled and the input image, the generative adversarial network comprises a generation network and a discriminator, the input of the generation network comprises the input image and the mask of the region to be filled and its output is the post-cure effect map, and the generative adversarial network is adversarially trained in the following manner:
acquiring a second training set, wherein the second training set comprises a plurality of lesion-free original images, images obtained by randomly smearing the original images, and smear-region masks marking the random smear positions, the smear-region masks being used to simulate the mask of the region to be filled;
training the generation network to generate a completed image according to a smear-region mask and the corresponding randomly smeared image;
training the discriminator to judge the authenticity score of each pixel in the completed image according to the corresponding original image; and
optimizing the parameters of the generation network based on the L1 loss between the completed image and the corresponding original image and the authenticity score of the completed image.
2. The system of claim 1, wherein the threshold segmentation unit generates a YCrCb color-space image from the input image and performs threshold segmentation on the chroma channels of the YCrCb color-space image according to a predetermined segmentation threshold range, to obtain the skin region reference mask.
3. The system of claim 2, wherein the first segmentation network is a U-shaped segmentation network, and wherein the first segmentation network is iteratively trained by:
acquiring a first training set and skin region reference masks, wherein the first training set comprises a plurality of image samples and first labels corresponding to the image samples, each first label indicating whether each pixel of its corresponding image sample belongs to a skin region; each image sample corresponds to one skin region reference mask, the skin region reference mask being a reference value, obtained by threshold segmentation, indicating whether each pixel of the image sample belongs to a skin region;
training a first segmentation network to perform segmentation processing by using the image sample and a skin area reference mask corresponding to the image sample, and outputting a skin area mask;
and calculating a first cross entropy loss value based on the output skin area mask and the corresponding first label, and updating the parameters of the first segmentation network according to the first cross entropy loss value.
4. The system of claim 3, wherein the second segmentation network employs a segmentation model combining a ResNet network and a U-shaped segmentation network, and wherein the second segmentation network is iteratively trained by:
acquiring a first training set and a skin region mask of each image sample, wherein the first training set further comprises second labels corresponding to the plurality of image samples, and the second labels are used for indicating whether corresponding pixels in the image samples corresponding to the second labels belong to a lesion region;
training the second segmentation network to perform segmentation processing by using the image sample and the skin region mask corresponding to the image sample, and outputting a lesion region mask;
and calculating a second cross-entropy loss value based on the output lesion region mask and the corresponding second label, and updating the parameters of the second segmentation network according to the second cross-entropy loss value.
5. The system of claim 4, wherein the calculation of the second cross-entropy loss value based on the output lesion region mask and the corresponding second label uses the skin region mask to ignore the cross-entropy loss of non-skin regions.
6. The system of claim 1, wherein the region determination module is configured to:
determining the total area of the lesion region according to the number of pixels belonging to the lesion region in the lesion region mask corresponding to the input image;
and shrinking the lesion region by morphological erosion until the ratio of the eroded lesion area to the total area of the lesion region reaches the preset filling proportion, thereby obtaining the mask of the region to be filled.
7. The system of any one of claims 1 to 6, wherein the skin condition is psoriasis, vitiligo or lupus erythematosus.
8. A method of generating a post-healing efficacy map of a skin disorder, comprising:
an image of an affected area of a dermatologic patient is acquired as an input image, and a post-healing effect map corresponding to the input image is generated using the system of any one of claims 1-7.
9. An electronic device, comprising:
one or more processors; and
a memory, wherein the memory is to store executable instructions;
the one or more processors are configured to implement the method of claim 8 via execution of the executable instructions.
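For illustration, the erosion-based region determination of claim 6 could be sketched as follows. This minimal numpy sketch assumes a 3x3 structuring element and reads the claim as taking the eroded-away outer band of the lesion as the region to be filled (so a 100% filling proportion fills the whole lesion); both points are interpretive assumptions.

```python
import numpy as np

def erode3x3(mask: np.ndarray) -> np.ndarray:
    """One step of binary erosion with a 3x3 structuring element."""
    p = np.pad(mask, 1, mode="constant")
    out = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : 1 + dy + mask.shape[0],
                     1 + dx : 1 + dx + mask.shape[1]].astype(bool)
    return out

def to_fill_mask(lesion: np.ndarray, fill_ratio: float) -> np.ndarray:
    """Erode the lesion mask until the removed outer band covers
    fill_ratio of the lesion's total pixel count; that band is the
    region to be filled by the completion module."""
    lesion = lesion.astype(bool)
    total = lesion.sum()
    core = lesion.copy()
    while total and (total - core.sum()) / total < fill_ratio:
        core = erode3x3(core)
    return lesion & ~core
```

Because erosion removes pixels from the lesion boundary inward, the resulting to-fill mask simulates healing that progresses from the edges of the lesion, matching the graded 50%/75%/100% fill-in proportions of Figure 5.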
CN202111454162.9A 2021-12-02 2021-12-02 System and method for predicting effect after healing of skin disease and electronic equipment Active CN113902827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111454162.9A CN113902827B (en) 2021-12-02 2021-12-02 System and method for predicting effect after healing of skin disease and electronic equipment


Publications (2)

Publication Number Publication Date
CN113902827A CN113902827A (en) 2022-01-07
CN113902827B true CN113902827B (en) 2022-03-22


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916334A (en) * 2010-08-16 2010-12-15 Tsinghua University Skin prediction method and prediction system therefor
CN112132833A (en) * 2020-08-25 2020-12-25 Shenyang University of Technology Skin disease image lesion segmentation method based on a deep convolutional neural network
WO2021179205A1 (en) * 2020-03-11 2021-09-16 Shenzhen Institutes of Advanced Technology Medical image segmentation method, medical image segmentation apparatus and terminal device
CN113553909A (en) * 2021-06-23 2021-10-26 Beijing Baidu Netcom Science and Technology Co., Ltd. Model training method for skin detection and skin detection method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110909665B (en) * 2019-11-20 2022-09-02 Beijing QIYI Century Science and Technology Co., Ltd. Multitask image processing method and device, electronic equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Deep Learning in Skin Disease Image Recognition: A Review; Ling-Fang Li et al.; IEEE Access; 2020-11-11; vol. 8 *
Skin lesion image segmentation based on an improved fully convolutional network; Yang Guoliang et al.; Computer Engineering and Design; 2018-11-16; no. 11 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant