CN113379716B - Method, device, equipment and storage medium for predicting color spots - Google Patents

Method, device, equipment and storage medium for predicting color spots

Info

Publication number
CN113379716B
CN113379716B (application number CN202110707100.8A)
Authority
CN
China
Prior art keywords
image
target
color spot
stain
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110707100.8A
Other languages
Chinese (zh)
Other versions
CN113379716A (en)
Inventor
齐子铭
刘兴云
罗家祯
陈福兴
李志阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Meitu Yifu Technology Co ltd
Original Assignee
Xiamen Meitu Yifu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Meitu Yifu Technology Co ltd filed Critical Xiamen Meitu Yifu Technology Co ltd
Priority to CN202110707100.8A priority Critical patent/CN113379716B/en
Publication of CN113379716A publication Critical patent/CN113379716A/en
Priority to JP2022540760A priority patent/JP7385046B2/en
Priority to PCT/CN2021/132553 priority patent/WO2022267327A1/en
Priority to KR1020227022201A priority patent/KR20230001005A/en
Application granted granted Critical
Publication of CN113379716B publication Critical patent/CN113379716B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0475 Generative networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/094 Adversarial learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30088 Skin; Dermal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The application provides a method, a device, equipment and a storage medium for predicting color spots, and belongs to the technical field of image recognition processing. The method comprises the following steps: acquiring an image to be predicted; inputting the image to be predicted into a pre-trained color spot prediction model for prediction processing, wherein the color spot prediction model is a fully convolutional generative adversarial network model; and obtaining a color spot prediction result map through the color spot prediction model. The method and the device can be used for predicting how the color spots on the facial skin of a user will change.

Description

Method, device, equipment and storage medium for predicting color spots
Technical Field
The present invention relates to the field of image recognition and generation technologies, and in particular, to a method, an apparatus, a device, and a storage medium for predicting color spots.
Background
As a person ages, facial skin is subject to aging and disease. To guard against these problems, future changes of the face are usually predicted so that early preventive intervention can be carried out.
In the prior art, skin-change prediction usually covers the degree of skin sagging, wrinkling and the like, and future changes of color spots cannot be predicted. It is therefore highly desirable to provide a method for predicting future changes of color spots, so that a user can learn of the future changes of skin color spots in time.
Disclosure of Invention
The purpose of the present application is to provide a method, a device, an apparatus, and a storage medium for predicting the change of color spots on the facial skin of a user.
Embodiments of the present application are implemented as follows:
In one aspect of the embodiments of the present application, a color spot prediction method is provided, the method comprising:
acquiring an image to be predicted;
inputting the image to be predicted into a pre-trained color spot prediction model for prediction processing, wherein the color spot prediction model is a fully convolutional generative adversarial network model;
and obtaining a color spot prediction result map through the color spot prediction model.
Optionally, before inputting the image to be predicted into the pre-trained color spot prediction model for prediction processing, the method further includes:
determining color spot information in the image to be predicted, the color spot information comprising: the location and type of the color spots;
preprocessing the image to be predicted according to the color spot information in the image to be predicted to obtain preprocessed multi-frame images, wherein the multi-frame images respectively comprise: an image with the color spots removed and an image identifying the color spot categories;
and the inputting the image to be predicted into a pre-trained color spot prediction model for prediction processing includes:
inputting the preprocessed multi-frame images into the pre-trained color spot prediction model for prediction processing.
Optionally, before inputting the image to be predicted into the pre-trained color spot prediction model for prediction processing, the method further includes:
acquiring a target to-be-processed image, wherein the target to-be-processed image contains color spot information;
respectively determining a plurality of target channel images according to the target to-be-processed image, wherein the target channel images comprise: a multi-channel image with the color spot information removed and a color spot category channel image;
and combining the plurality of target channel images with a target noise image and inputting them together into a neural network structure to be trained to obtain a trained color spot prediction model, the target noise image being a randomly generated noise image.
Optionally, the respectively determining a plurality of target channel images according to the target to-be-processed image includes:
determining the color spot information in the target to-be-processed image;
and performing color spot removal processing on the target to-be-processed image according to the color spot information in the target to-be-processed image to obtain a target to-be-processed image with the color spot information removed.
Optionally, the respectively determining a plurality of target channel images according to the target to-be-processed image includes:
performing color spot detection processing on the target to-be-processed image to obtain the color spot category channel image.
Optionally, the performing color spot detection processing on the target to-be-processed image to obtain the color spot category channel image includes:
respectively determining the position of each type of color spot information in the target to-be-processed image;
and setting, at the position of each piece of color spot information, gray information corresponding to its type to obtain the color spot category channel image.
Optionally, before the plurality of target channel images and the target noise image are combined and input together into the neural network structure to be trained to obtain a trained color spot prediction model, the method includes:
respectively normalizing the target channel images and the target noise image to obtain a target input image;
and the combining the plurality of target channel images with the target noise image and inputting them together into the neural network structure to be trained to obtain a trained color spot prediction model includes:
inputting the target input image into the neural network structure to be trained to obtain a trained color spot prediction model.
In another aspect of the embodiments of the present application, there is provided a color spot prediction apparatus, the apparatus comprising: an acquisition module, a prediction module and an output module;
the acquisition module is used for acquiring the image to be predicted;
the prediction module is used for inputting the image to be predicted into a pre-trained color spot prediction model for prediction processing, wherein the color spot prediction model is a fully convolutional generative adversarial network model;
and the output module is used for obtaining a color spot prediction result map through the color spot prediction model.
Optionally, the apparatus further includes: a preprocessing module; the preprocessing module is used for determining color spot information in the image to be predicted, the color spot information including: the location and type of the color spots; and preprocessing the image to be predicted according to the color spot information in the image to be predicted to obtain preprocessed multi-frame images, wherein the multi-frame images respectively comprise: an image with the color spots removed and an image identifying the color spot categories; the prediction module is specifically used for inputting the preprocessed multi-frame images into the pre-trained color spot prediction model for prediction processing.
Optionally, the preprocessing module is further used for acquiring a target to-be-processed image, where the target to-be-processed image contains color spot information; respectively determining a plurality of target channel images according to the target to-be-processed image, the target channel images including: a multi-channel image with the color spot information removed and a color spot category channel image; and combining the plurality of target channel images with a target noise image and inputting them together into a neural network structure to be trained to obtain a trained color spot prediction model, the target noise image being a randomly generated noise image.
Optionally, the preprocessing module is further used for determining the color spot information in the target to-be-processed image, and performing color spot removal processing on the target to-be-processed image according to the color spot information therein to obtain a target to-be-processed image with the color spot information removed.
Optionally, the preprocessing module is further used for performing color spot detection processing on the target to-be-processed image to obtain the color spot category channel image.
Optionally, the preprocessing module is specifically used for respectively determining the position of each type of color spot information in the target to-be-processed image, and setting, at the position of each piece of color spot information, gray information corresponding to its type to obtain the color spot category channel image.
Optionally, the preprocessing module is further used for respectively normalizing the target channel images and the target noise image to obtain a target input image, and inputting the target input image into the neural network structure to be trained to obtain a trained color spot prediction model.
In another aspect of the embodiments of the present application, there is provided a computer device, comprising: a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor implements the steps of the color spot prediction method described above when executing the computer program.
In another aspect of the embodiments of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the color spot prediction method described above.
The beneficial effects of the embodiment of the application include:
In the method, device, equipment and storage medium for predicting color spots provided by the embodiments of the application, an image to be predicted can be acquired; the image to be predicted is input into a pre-trained color spot prediction model for prediction processing, the color spot prediction model being a fully convolutional generative adversarial network model; and a color spot prediction result map is obtained through the color spot prediction model. Using a fully convolutional generative adversarial network as the color spot prediction model makes it possible to predict the future change of skin color spots, so that a user can learn the change trend of skin color spots in time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; a person of ordinary skill in the art may obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a first flowchart of a color spot prediction method according to an embodiment of the present application;
Fig. 2 is a second flowchart of a color spot prediction method according to an embodiment of the present application;
Fig. 3 is a third flowchart of a color spot prediction method according to an embodiment of the present application;
Fig. 4 is a fourth flowchart of a color spot prediction method according to an embodiment of the present application;
Fig. 5 is a fifth flowchart of a color spot prediction method according to an embodiment of the present application;
Fig. 6 is a sixth flowchart of a color spot prediction method according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a color spot prediction apparatus according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present application, it should be noted that the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
It should be noted that, in the prior art, there is no method for predicting the change of color spots on human facial skin; the embodiments of the present application make it possible to predict the change of color spots on human facial skin over a period of time in the future, so that a user can learn of the color spot changes on his or her own face in time.
The implementation procedure of the color spot prediction method provided in the embodiments of the present application is specifically explained below.
Fig. 1 is a first flowchart of the color spot prediction method according to an embodiment of the present application; referring to fig. 1, the method includes:
S110: acquiring an image to be predicted.
Optionally, the image to be predicted may be any image of skin on which color spot prediction is required, such as a photograph of a user or an image of a face, and the image may be one cropped to a preset size.
Optionally, the execution subject of the method may be a related program on a computer device, for example a preset program of a skin predictor or a function of an electronic facial cleansing device; this is not specifically limited here and may be set according to actual needs.
Optionally, the image to be predicted may be transmitted to the computer device by another device, or may be captured by the computer device through a shooting apparatus or the like; this is not specifically limited.
S120: and inputting the image to be predicted into a color spot prediction model obtained by pre-training to perform prediction processing.
Wherein the stain prediction model generates an countermeasure network model for the full convolution.
Optionally, after determining the image to be predicted, the image to be predicted may be input into a pre-trained stain prediction model for prediction processing, where the stain prediction model may be a full convolution generated countermeasure network model obtained after the pre-training.
The stain prediction model may be pre-trained by the computer device, or may be sent to the computer device by other electronic devices, which is not limited herein.
Alternatively, the full convolution generated countermeasure network Model may be a generated countermeasure network composed of a plurality of convolution neural networks, wherein the generated countermeasure network may be a network Model that produces a fairly good output through a mutual game learning of a generated Model (generated Model) and a discriminant Model (Discriminative Model).
S130: and obtaining a color spot prediction result graph through the color spot prediction model.
Optionally, after the predicting process is performed by the stain predicting model, a stain predicting result diagram may be obtained, where the stain predicting result diagram may display a change condition of the stain after a period of time in the face skin in the image to be predicted is future, and the specific time may be set according to the actual requirement, which is not limited herein.
Optionally, the stain prediction result map may include a plurality of stain prediction result maps respectively used for representing the changes of stains after a period of time in the future of the facial skin in the image to be predicted after different periods of time.
In the method for predicting color spots, provided by the embodiment of the application, an image to be predicted can be obtained; inputting an image to be predicted into a color spot prediction model obtained through pre-training for prediction treatment, wherein the color spot prediction model generates an countermeasure network model for full convolution; and obtaining a color spot prediction result graph through the color spot prediction model. The full convolution generation countermeasure network structure is used as a color spot prediction module, so that the future change condition of the skin color spots can be predicted, and further, a user can know the change trend of the skin color spots in time.
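As an illustrative aid only (the patent publishes no source code), the inference flow of steps S110-S130 might look like the following Python sketch; PyTorch, the function names, and the output de-normalization are assumptions, not part of the patent.

```python
# Hypothetical sketch of S110-S130; PyTorch and all names are assumptions.
import torch

def predict_color_spots(generator, model_input):
    """Run a pre-trained fully convolutional GAN generator on one input.

    `generator` is assumed to be a trained torch.nn.Module; `model_input`
    is assumed to be a CxHxW float tensor prepared as described below.
    """
    generator.eval()
    with torch.no_grad():
        out = generator(model_input.unsqueeze(0))       # add batch dimension
    # The output layer is Tanh, so values lie in (-1, 1); invert the
    # (Img * 2 / 255) - 1 normalization to recover 8-bit pixel values.
    result = ((out.squeeze(0) + 1.0) * 127.5).clamp(0, 255).byte()
    return result.permute(1, 2, 0).cpu().numpy()        # HxWxC result map
```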
Another embodiment of the color spot prediction method provided in the embodiments of the present application is specifically explained below.
Fig. 2 is a second flowchart of the color spot prediction method provided in an embodiment of the present application; referring to fig. 2, before the image to be predicted is input into the pre-trained color spot prediction model for prediction processing, the method further includes:
s210: and determining the color spot information in the image to be predicted.
Wherein, the mottle information includes: the location and type of the stain.
Alternatively, the stain information may be stain information on the skin in the image to be predicted, for example: may include: the position, category, etc. of each stain on the skin in the image to be predicted, the position of each stain may be recorded in the form of a range of coordinates, and the category of the stain may be recorded in a labeled manner.
Alternatively, a preset stain identification algorithm may be used to obtain and determine stain information in the image to be predicted.
S220: and preprocessing the image to be predicted according to the color spot information in the image to be predicted, so as to obtain a preprocessed multi-frame image.
Wherein, the multiframe image includes respectively: the non-colored image and the colored image identified.
Optionally, the preprocessing of the image to be predicted may include a stain removal process and a stain determination process, where the image without stain may be obtained through the stain removal process, that is, the image to be predicted without stain information; the stain determination process can result in an image in which the stain categories are identified, and different stain categories can be represented by different gray values in the image.
The inputting the image to be predicted into the pre-trained color spot prediction model for prediction processing includes:
S230: inputting the preprocessed multi-frame images into the pre-trained color spot prediction model for prediction processing.
Optionally, after the preprocessed multi-frame images are respectively determined, they may be combined and input together into the pre-trained color spot prediction model for prediction, so as to obtain the corresponding prediction result.
A further embodiment of the color spot prediction method provided in the embodiments of the present application is explained in detail below.
Fig. 3 is a third flowchart of the color spot prediction method provided in an embodiment of the present application; referring to fig. 3, before the image to be predicted is input into the pre-trained color spot prediction model for prediction processing, the method further includes:
s310: and acquiring a target image to be processed.
The target to-be-processed image comprises color spot information.
Alternatively, the target image to be processed may be a sample image for training the stain prediction model, the sample image having skin thereon including stain information.
Alternatively, the target image to be processed may be a large number of sample images collected in advance, for example, an image of a facial stain or the like downloaded through a network, and is not particularly limited herein.
S320: and respectively determining a plurality of target channel images according to the target to-be-processed image.
Wherein the target channel image comprises: the multi-channel image of the stain information and the stain class channel image are removed.
Optionally, the target to-be-processed image may be processed respectively to obtain multiple target channel images, where the multi-channel image for removing the stain information may be obtained by performing the stain removing process on the target to-be-processed image, which is the same as the method for obtaining the image without the stain. The stain type channel image may be obtained by identifying stains in the target image to be processed, which is the same as the method for obtaining the image marked with the stain type.
S330: and combining the target channel images and the target noise images, and inputting the combined target channel images and the combined target noise images into a neural network structure to be trained to obtain a trained color spot prediction model.
Wherein the target noise image is a randomly generated noise image.
Optionally, after the multiple target channel images are acquired, the target channel images and the target noise images generated in advance may be combined and input into the neural network structure to be trained together for training, and the stain prediction model may be obtained after training is completed.
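A minimal sketch of this combination step, assuming NumPy arrays in channel-last layout and a uniform-noise channel (the patent does not specify the noise distribution); shapes and names are hypothetical.

```python
# Hypothetical sketch: stack the target channel images with a random
# noise image into one multi-channel training input (S330).
import numpy as np

def build_training_input(despotted_rgb, cls_mask):
    """despotted_rgb: HxWx3 spot-removed image; cls_mask: HxW class channel."""
    h, w = cls_mask.shape
    noise = np.random.uniform(0, 255, size=(h, w)).astype(np.float32)  # distribution assumed
    channels = [despotted_rgb[..., c].astype(np.float32) for c in range(3)]
    return np.stack(channels + [cls_mask.astype(np.float32), noise], axis=0)  # 5xHxW
```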
A further embodiment of the color spot prediction method provided in the embodiments of the present application is explained in detail below.
Fig. 4 is a fourth flowchart of the color spot prediction method provided in an embodiment of the present application; referring to fig. 4, the respectively determining a plurality of target channel images according to the target to-be-processed image includes:
s410: and determining the color spot information in the target to-be-processed image.
Optionally, after determining the target image to be processed, the stain information may be determined, and specifically, the stain information may be determined by using the foregoing stain recognition method.
S420: and carrying out stain removal processing on the target to-be-processed image according to the stain information in the target to-be-processed image to obtain the target to-be-processed image with the stain information removed.
Alternatively, after the above-described stain information is obtained, a stain removal process may be performed based on the stain information. And removing all the color spot information in the target to-be-processed image to obtain the target to-be-processed image, wherein the target to-be-processed image does not have the color spot information.
Optionally, after the target to-be-processed image is obtained, channel processing can be performed, color channels of three colors of red, green and blue are respectively obtained, and a red channel image for removing the color spot information, a green channel image for removing the color spot information and a blue channel image for removing the color spot information can be obtained.
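The patent does not name the removal algorithm; as a hedged sketch, classical mask-based inpainting with OpenCV is one plausible stand-in for erasing detected spots, followed by the channel split just described. The box format and all names are assumptions.

```python
# Hypothetical sketch: erase detected color spots by inpainting, then
# split the result into single-color channels. The patent does not
# specify the removal algorithm; cv2.inpaint is an assumed stand-in.
import cv2
import numpy as np

def remove_spots_and_split(image_bgr, spot_boxes):
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    for x0, y0, x1, y1 in spot_boxes:        # spot positions as coordinate ranges
        mask[y0:y1, x0:x1] = 255
    despotted = cv2.inpaint(image_bgr, mask, inpaintRadius=3,
                            flags=cv2.INPAINT_TELEA)
    b, g, r = cv2.split(despotted)           # OpenCV loads images as BGR
    return r, g, b                           # spot-free R, G, B channel images
```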
Optionally, the respectively determining a plurality of target channel images according to the target to-be-processed image includes:
performing color spot detection processing on the target to-be-processed image to obtain the color spot category channel image.
Optionally, after the target to-be-processed image is obtained, color spot detection processing may be performed on it to obtain the color spot category channel image; the specific steps are as follows.
Fig. 5 is a fifth flowchart of the color spot prediction method provided in an embodiment of the present application; referring to fig. 5, the performing color spot detection processing on the target to-be-processed image to obtain the color spot category channel image includes:
s510: and respectively determining the position of each type of color spot information in the target to-be-processed image.
Alternatively, the location of each type of stain in the target image to be processed may be determined by means of stain identification.
S520: and setting gray information of the type corresponding to the color spot information at the position of the color spot information to obtain a color spot type channel image.
Optionally, after determining the position of each type of color spot, a corresponding gray value may be set at the corresponding position, different gray values may be used to represent different types of color spots, the specific position and range where the gray value is located may represent the position of the color spot and the size of the color spot, and after determining the gray information corresponding to each type of color spot information in the image and completing the setting, the color spot type channel image may be obtained, and specifically, the channel image with different color spot types may be represented by different gray levels.
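A hedged sketch of this step: each spot category gets a fixed gray value written over the spot's coordinate range. The category names, gray values and box format below are invented for illustration only.

```python
# Hypothetical sketch of S510/S520: paint one gray level per spot category.
import numpy as np

CATEGORY_GRAY = {"freckle": 64, "chloasma": 128, "sun_spot": 192}  # values assumed

def build_cls_mask(height, width, spots):
    """spots: iterable of (category, (x0, y0, x1, y1)) detections."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for category, (x0, y0, x1, y1) in spots:
        mask[y0:y1, x0:x1] = CATEGORY_GRAY[category]   # gray encodes the type
    return mask                                        # color spot category channel
```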
A further embodiment of the color spot prediction method provided in the embodiments of the present application is explained in detail below.
Fig. 6 is a sixth flowchart of the color spot prediction method provided in an embodiment of the present application; referring to fig. 6, before the plurality of target channel images and the target noise image are combined and input together into the neural network structure to be trained to obtain a trained color spot prediction model, the method includes:
s610: and respectively carrying out normalization processing on the target channel image and the target noise image to obtain a target input image.
Optionally, the above-mentioned multiple target channel images and the target noise images are combined and input together into the trained stain prediction model obtained in the neural network structure to be trained, where the target channel images include the red channel image with stain information removed, the green channel image with stain information removed, the blue channel image with stain information removed, and the stain class channel image, and these four types of images can be combined with the target noise images to obtain five channel images, and input together into the trained neural network structure to obtain the trained stain prediction model.
Optionally, in the normalization processing, the red channel image with the color spot information removed, the green channel image with the color spot information removed, the blue channel image with the color spot information removed, and the target noise image are normalized to the (-1, 1) interval, and the color spot category channel image is normalized to the (0, 1) interval, using the following formulas:
Img_(-1,1) = (Img * 2 / 255) - 1;
where Img is a single-channel image with values in the 0-255 range and Img_(-1,1) is the image normalized to the (-1, 1) interval.
ClsMask_(0,1) = ClsMask / 255;
where ClsMask is a single-channel image with values in the 0-255 range and ClsMask_(0,1) is the image normalized to the (0, 1) interval.
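The two formulas translate directly into code; a minimal sketch:

```python
# Direct transcription of the normalization formulas above (sketch).
import numpy as np

def normalize_channel(img):        # 0..255 -> (-1, 1), for R/G/B and noise
    return (img.astype(np.float32) * 2.0 / 255.0) - 1.0

def normalize_cls_mask(cls_mask):  # 0..255 -> (0, 1), for the category channel
    return cls_mask.astype(np.float32) / 255.0
```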
The combining the plurality of target channel images with the target noise image and inputting them together into the neural network structure to be trained to obtain a trained color spot prediction model includes:
S620: inputting the target input image into the neural network structure to be trained to obtain a trained color spot prediction model.
Optionally, after the target input image is obtained, it may be input into the neural network structure to obtain the trained color spot prediction model.
The specific structure of the color spot prediction model employed in the embodiments of the present application is explained below.
The model adopts an encoding-decoding structure; up-sampling in the decoding part combines nearest-neighbour up-sampling with a convolution layer, and the activation function of the output layer is Tanh. The specific structure is shown in Table 1.
TABLE 1
Wherein LeakyReLU is a common activation function in deep learning and negative_slope is a configuration parameter of that activation function; kh is the height of a convolution kernel and kw is its width; padding is the number of pixels by which the feature map is expanded during the convolution operation; stride is the convolution step; group is the number of convolution kernel groups; and scale_factor and mode are parameters of the up-sampling layer, where scale_factor=2 means up-sampling by a factor of 2 and mode=nearest means nearest-neighbour up-sampling.
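Since Table 1 itself is not reproduced here, the following PyTorch sketch only illustrates the described pattern: a decoder step built from nearest-neighbour up-sampling plus a convolution with LeakyReLU, and a Tanh output layer. Channel counts and kernel sizes are assumptions, not the Table 1 values.

```python
# Hypothetical sketch of the decoder pattern described above; the real
# layer parameters are in Table 1 (not reproduced in this text).
import torch.nn as nn

def upsample_block(in_ch, out_ch, negative_slope=0.2):
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),   # nearest-neighbour x2
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.LeakyReLU(negative_slope=negative_slope),
    )

output_layer = nn.Sequential(                          # Tanh output, per the text
    nn.Conv2d(64, 3, kernel_size=3, padding=1),        # 64 input channels assumed
    nn.Tanh(),
)
```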
Optionally, the model further includes a discriminator part for discriminating real and fake images at different resolutions. In the embodiment of the present application, three discriminators at different scales may be used to discriminate images at 512x512, 256x256 and 128x128 resolution respectively; the images at the different resolutions can be obtained by down-sampling.
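A hedged sketch of the three-scale discrimination: each discriminator scores a progressively down-sampled copy of the same image. The discriminator networks themselves are assumed to exist; only the down-sampling scheme follows the text.

```python
# Hypothetical sketch: score an image at 512x512, 256x256 and 128x128.
import torch.nn.functional as F

def multi_scale_scores(discriminators, image_512):
    """discriminators: three assumed torch.nn.Module instances, full scale first."""
    scores, img = [], image_512
    for disc in discriminators:
        scores.append(disc(img))                       # real/fake score at this scale
        img = F.interpolate(img, scale_factor=0.5,     # halve resolution for the next one
                            mode="bilinear", align_corners=False)
    return scores
```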
Optionally, about 20000 samples may be used during training, and multiple augmentations may be applied to each sample image, such as flipping, rotation, translation, affine transformation, exposure and contrast adjustment, and blurring, to improve robustness.
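A sketch of such an augmentation pipeline using torchvision; the concrete parameter values are assumptions, only the list of operations comes from the text.

```python
# Hypothetical augmentation pipeline matching the operations listed above.
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(),                            # flipping
    T.RandomAffine(degrees=15, translate=(0.05, 0.05)),  # rotation/translation/affine
    T.ColorJitter(brightness=0.2, contrast=0.2),         # exposure and contrast
    T.GaussianBlur(kernel_size=3),                       # blurring
])
```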
Optionally, the networks are trained with the Adam optimization algorithm; the learning rate of the generator network is 0.0002 and the learning rate of the discriminator network is 0.0001.
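In PyTorch terms this corresponds to something like the following; the beta values are assumptions, only the learning rates come from the text.

```python
# Sketch: Adam optimizers with the stated learning rates (betas assumed).
import torch

def make_optimizers(generator, discriminators):
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    d_params = [p for d in discriminators for p in d.parameters()]
    d_opt = torch.optim.Adam(d_params, lr=1e-4, betas=(0.5, 0.999))
    return g_opt, d_opt
```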
Optionally, the loss function of the model is calculated as follows:
L = L_1 + L_2 + L_vgg + L_adv;
L_1 = |generate - GT|;
L_2 = ||generate - GT||;
L_vgg = Σ_i ||L_perceptual(generate) - L_perceptual(GT)||;
where generate denotes the output of the network and GT is the target (ground-truth) image; L_1 and L_2 are loss functions, L_vgg is the perceptual loss function, and L_adv is the generative adversarial loss function. L_perceptual denotes the perceptual loss: the network output (the generated map) and GT are each fed into another network, the feature tensors of the corresponding layers are extracted, and the difference between these feature tensors is computed; i denotes the i-th sample.
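A hedged transcription of the generator loss into PyTorch; the adversarial term is written in LSGAN style and the VGG feature extractor is assumed, since the patent fixes neither.

```python
# Hypothetical sketch of L = L_1 + L_2 + L_vgg + L_adv as defined above.
import torch
import torch.nn.functional as F

def generator_loss(generate, gt, disc_scores, vgg_features):
    """vgg_features: assumed callable returning a list of feature tensors."""
    l1 = F.l1_loss(generate, gt)                       # L_1 = |generate - GT|
    l2 = F.mse_loss(generate, gt)                      # L_2 = ||generate - GT||
    lvgg = sum(F.l1_loss(fg, fr)                       # perceptual loss over layers
               for fg, fr in zip(vgg_features(generate), vgg_features(gt)))
    ladv = sum(F.mse_loss(s, torch.ones_like(s))       # LSGAN-style term (assumed)
               for s in disc_scores)
    return l1 + l2 + lvgg + ladv
```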
The device, equipment and storage medium corresponding to the color spot prediction method provided by the present application are described below; their specific implementation processes and technical effects are the same as those described above and will not be repeated.
Fig. 7 is a schematic structural diagram of a color spot prediction apparatus according to an embodiment of the present application; referring to fig. 7, the apparatus includes: an acquisition module 100, a prediction module 200 and an output module 300;
an acquisition module 100, configured to acquire an image to be predicted;
the prediction module 200 is used for inputting the image to be predicted into a pre-trained color spot prediction model for prediction processing, wherein the color spot prediction model is a fully convolutional generative adversarial network model;
and the output module 300 is used for obtaining a color spot prediction result map through the color spot prediction model.
Optionally, the apparatus further includes: a preprocessing module 400; the preprocessing module 400 is used for determining color spot information in the image to be predicted, the color spot information including: the location and type of the color spots; and preprocessing the image to be predicted according to the color spot information in the image to be predicted to obtain preprocessed multi-frame images, wherein the multi-frame images respectively comprise: an image with the color spots removed and an image identifying the color spot categories; the prediction module 200 is specifically used for inputting the preprocessed multi-frame images into the pre-trained color spot prediction model for prediction processing.
Optionally, the preprocessing module 400 is further used for acquiring a target to-be-processed image, where the target to-be-processed image contains color spot information; respectively determining a plurality of target channel images according to the target to-be-processed image, the target channel images including: a multi-channel image with the color spot information removed and a color spot category channel image; and combining the plurality of target channel images with a target noise image and inputting them together into a neural network structure to be trained to obtain a trained color spot prediction model, the target noise image being a randomly generated noise image.
Optionally, the preprocessing module 400 is further used for determining the color spot information in the target to-be-processed image; performing color spot removal processing on the target to-be-processed image according to the color spot information therein to obtain a target to-be-processed image with the color spot information removed; and performing channel processing on the color-spot-free target to-be-processed image to obtain the multi-channel image with the color spot information removed.
Optionally, the preprocessing module 400 is further used for performing color spot detection processing on the target to-be-processed image to obtain the color spot category channel image.
Optionally, the preprocessing module 400 is specifically used for respectively determining the position of each type of color spot information in the target to-be-processed image, and setting, at the position of each piece of color spot information, gray information corresponding to its type to obtain the color spot category channel image.
Optionally, the preprocessing module 400 is further used for respectively normalizing the target channel images and the target noise image to obtain a target input image, and inputting the target input image into the neural network structure to be trained to obtain a trained color spot prediction model.
The foregoing apparatus is used for executing the method provided in the foregoing embodiment, and its implementation principle and technical effects are similar, and are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application-specific integrated circuits (ASICs), one or more microprocessors, or one or more field-programmable gate arrays (FPGAs), etc. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can invoke the program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 8 is a schematic structural diagram of a computer device provided in an embodiment of the present application; referring to fig. 8, the computer device includes: a memory 500 and a processor 600, the memory 500 storing a computer program executable on the processor 600; the processor 600 implements the steps of the color spot prediction method described above when executing the computer program.
In another aspect of the embodiments of the present application, there is also provided a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the color spot prediction method described above.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units may be stored in a computer-readable storage medium. The software functional units are stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform part of the steps of the methods of the embodiments of the invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes or substitutions are covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the same, but rather, various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (9)

1. A color spot prediction method, the method comprising:
acquiring an image to be predicted;
inputting the image to be predicted into a pre-trained color spot prediction model for prediction processing, wherein the color spot prediction model is a fully convolutional generative adversarial network model;
obtaining a color spot prediction result map through the color spot prediction model;
before the image to be predicted is input into the pre-trained color spot prediction model for prediction processing, the method further comprises:
acquiring a target to-be-processed image, wherein the target to-be-processed image contains color spot information;
respectively determining a plurality of target channel images according to the target to-be-processed image, wherein the target channel images comprise: a multi-channel image with the color spot information removed and a color spot category channel image;
combining the plurality of target channel images with a target noise image and inputting them together into a neural network structure to be trained to obtain a trained color spot prediction model, the target noise image being a randomly generated noise image;
wherein the target to-be-processed image is a sample image for training the color spot prediction model, the sample image showing skin that contains color spot information;
the color spot information comprises: the location and type of the color spots; and
the multi-channel image with the color spot information removed comprises: a red channel image with the color spot information removed, a green channel image with the color spot information removed, and a blue channel image with the color spot information removed.
2. The method according to claim 1, wherein before the inputting the image to be predicted into a pre-trained color spot prediction model for prediction processing, the method further comprises:
determining the color spot information in the image to be predicted;
preprocessing the image to be predicted according to the color spot information in the image to be predicted to obtain preprocessed images, wherein the preprocessed images respectively comprise: an image with the color spots removed and an image identifying the color spot categories;
and the inputting the image to be predicted into a pre-trained color spot prediction model for prediction processing comprises:
inputting the preprocessed images into the pre-trained color spot prediction model for prediction processing.
3. The method of claim 1, wherein the respectively determining a plurality of target channel images according to the target to-be-processed image comprises:
determining the color spot information in the target to-be-processed image;
and performing color spot removal processing on the target to-be-processed image according to the color spot information in the target to-be-processed image to obtain the target to-be-processed image with the color spot information removed.
4. The method of claim 1, wherein the respectively determining a plurality of target channel images according to the target to-be-processed image comprises:
performing color spot detection processing on the target to-be-processed image to obtain the color spot category channel image.
5. The method of claim 4, wherein the performing color spot detection processing on the target to-be-processed image to obtain the color spot category channel image comprises:
respectively determining the position of each type of color spot information in the target to-be-processed image;
and setting, at the position of each piece of color spot information, gray information corresponding to its type to obtain the color spot category channel image.
6. The method of claim 1, wherein before the combining the plurality of target channel images with the target noise image and inputting them together into a neural network structure to be trained to obtain a trained color spot prediction model, the method comprises:
respectively normalizing the target channel images and the target noise image to obtain a target input image;
and the combining the plurality of target channel images with the target noise image and inputting them together into the neural network structure to be trained to obtain a trained color spot prediction model comprises:
inputting the target input image into the neural network structure to be trained to obtain a trained color spot prediction model.
7. A color spot prediction apparatus, the apparatus comprising: an acquisition module, a prediction module and an output module;
the acquisition module is used for acquiring an image to be predicted;
the prediction module is used for inputting the image to be predicted into a pre-trained color spot prediction model for prediction processing, wherein the color spot prediction model is a fully convolutional generative adversarial network model;
the output module is used for obtaining a color spot prediction result map through the color spot prediction model;
the apparatus further comprises a preprocessing module, the preprocessing module being used for acquiring a target to-be-processed image, wherein the target to-be-processed image contains color spot information; respectively determining a plurality of target channel images according to the target to-be-processed image, wherein the target channel images comprise: a multi-channel image with the color spot information removed and a color spot category channel image; and combining the plurality of target channel images with a target noise image and inputting them together into a neural network structure to be trained to obtain a trained color spot prediction model, the target noise image being a randomly generated noise image;
wherein the target to-be-processed image is a sample image for training the color spot prediction model, the sample image showing skin that contains color spot information;
the color spot information comprises: the location and type of the color spots; and
the multi-channel image with the color spot information removed comprises: a red channel image with the color spot information removed, a green channel image with the color spot information removed, and a blue channel image with the color spot information removed.
8. A computer device, comprising: a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the method according to any of claims 1 to 6.
CN202110707100.8A 2021-06-24 2021-06-24 Method, device, equipment and storage medium for predicting color spots Active CN113379716B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202110707100.8A CN113379716B (en) 2021-06-24 2021-06-24 Method, device, equipment and storage medium for predicting color spots
JP2022540760A JP7385046B2 (en) 2021-06-24 2021-11-23 Color spot prediction method, device, equipment and storage medium
PCT/CN2021/132553 WO2022267327A1 (en) 2021-06-24 2021-11-23 Pigmentation prediction method and apparatus, and device and storage medium
KR1020227022201A KR20230001005A (en) 2021-06-24 2021-11-23 Spot prediction methods, devices, equipment and storage media

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110707100.8A CN113379716B (en) 2021-06-24 2021-06-24 Method, device, equipment and storage medium for predicting color spots

Publications (2)

Publication Number Publication Date
CN113379716A CN113379716A (en) 2021-09-10
CN113379716B true CN113379716B (en) 2023-12-29

Family

ID=77578969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110707100.8A Active CN113379716B (en) 2021-06-24 2021-06-24 Method, device, equipment and storage medium for predicting color spots

Country Status (4)

Country Link
JP (1) JP7385046B2 (en)
KR (1) KR20230001005A (en)
CN (1) CN113379716B (en)
WO (1) WO2022267327A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379716B (en) * 2021-06-24 2023-12-29 Xiamen Meitu Yifu Technology Co., Ltd. Method, device, equipment and storage medium for predicting color spots

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2001245751A1 * 2000-02-18 2001-08-27 Robert Kenet Method and device for skin cancer screening
JP4379603B2 * 2004-09-02 2009-12-09 Fuji Xerox Co., Ltd. Color processing apparatus, color processing program, and storage medium
JP2012053813A 2010-09-03 2012-03-15 Dainippon Printing Co Ltd Person attribute estimation device, person attribute estimation method and program
JP5950486B1 2015-04-01 2016-07-13 Mizuho Information & Research Institute, Inc. Aging prediction system, aging prediction method, and aging prediction program
US10621771B2 2017-03-21 2020-04-14 The Procter & Gamble Company Methods for age appearance simulation
CN110163813B (en) * 2019-04-16 2022-02-01 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences Image rain removing method and device, readable storage medium and terminal equipment
CN112883756B (en) 2019-11-29 2023-09-15 Harbin Institute of Technology (Shenzhen) Age-converted face image generation method and adversarial network model generation method
CN111429416B (en) * 2020-03-19 2023-10-13 Shenzhen Shuliantianxia Intelligent Technology Co., Ltd. Facial pigment spot recognition method and device and electronic equipment
CN113379716B (en) * 2021-06-24 2023-12-29 Xiamen Meitu Yifu Technology Co., Ltd. Method, device, equipment and storage medium for predicting color spots

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101916334A (en) * 2010-08-16 2010-12-15 Tsinghua University Skin prediction method and prediction system thereof
CN110473177A (en) * 2019-07-30 2019-11-19 Shanghai Meice Information Technology Co., Ltd. Skin pigment distribution forecasting method, image processing system and storage medium
CN112508812A (en) * 2020-12-01 2021-03-16 Xiamen Home Meitu Technology Co., Ltd. Image color cast correction method, model training method, device and equipment
CN112464885A (en) * 2020-12-14 2021-03-09 Shanghai Jiao Tong University Image processing system for future change of facial color spots based on machine learning
CN112614140A (en) * 2020-12-17 2021-04-06 Shenzhen Shuliantianxia Intelligent Technology Co., Ltd. Method and related device for training color spot detection model
CN112950569A (en) * 2021-02-25 2021-06-11 Ping An Technology (Shenzhen) Co., Ltd. Melanoma image recognition method and device, computer equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Skin Cancer Prediction and Diagnosis Using Convolutional Neural Network (CNN) Deep Learning Algorithm; Mousannif, H et al.; Innovations in Smart Cities Applications. Proceedings of the 5th International Conference on Smart; Vol. 4; pp. 558-567 *
Research on prediction of skin cancer by transfer learning based on convolutional neural networks; Dong Qingqing; China Master's Theses Full-text Database, Medicine and Health Sciences (No. 04); full text *

Also Published As

Publication number Publication date
KR20230001005A (en) 2023-01-03
JP7385046B2 (en) 2023-11-21
CN113379716A (en) 2021-09-10
JP2023534328A (en) 2023-08-09
WO2022267327A1 (en) 2022-12-29

Similar Documents

Publication Publication Date Title
Fernandes et al. Predicting heart rate variations of deepfake videos using neural ode
Yang et al. MTD-Net: Learning to detect deepfakes images by multi-scale texture difference
JP4755202B2 (en) Face feature detection method
US20070122005A1 (en) Image authentication apparatus
CN110033040B (en) Flame identification method, system, medium and equipment
Wallis et al. Image correlates of crowding in natural scenes
CN113393446B (en) Convolutional neural network medical image key point detection method based on attention mechanism
CN111986202B (en) Glaucoma auxiliary diagnosis device, method and storage medium
CN112464690A (en) Living body identification method, living body identification device, electronic equipment and readable storage medium
CN111612756B (en) Coronary artery specificity calcification detection method and device
US20230377097A1 (en) Laparoscopic image smoke removal method based on generative adversarial network
CN114287878A (en) Diabetic retinopathy focus image identification method based on attention model
CN110969613A (en) Intelligent pulmonary tuberculosis identification method and system with image sign interpretation
CN112990016B (en) Expression feature extraction method and device, computer equipment and storage medium
Lewis et al. How we detect a face: A survey of psychological evidence
CN111401134A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN113379716B (en) Method, device, equipment and storage medium for predicting color spots
GB2613767A (en) Computer-implemented method of enhancing object detection in a digital image of known underlying structure, and corresponding module, data processing appara
CN113240655A (en) Method, storage medium and device for automatically detecting type of fundus image
CN112634246A (en) Oral cavity image identification method and related equipment
Zhao et al. Automated detection of vessel abnormalities on fluorescein angiogram in malarial retinopathy
CN116912604B (en) Model training method, image recognition device and computer storage medium
Queiroz et al. Endoscopy image restoration: A study of the kernel estimation from specular highlights
CN116152079A (en) Image processing method and image processing model training method
CN111626972B (en) CT image reconstruction method, model training method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210930

Address after: 361100 568, No. 942, Tonglong Second Road, Torch High-tech Zone (Xiang'an) Industrial Zone, Xiang'an District, Xiamen City, Fujian Province

Applicant after: Xiamen Meitu Yifu Technology Co.,Ltd.

Address before: B1f-089, Zone C, Huaxun Building, Software Park, Torch High-tech Zone, Xiamen City, Fujian Province

Applicant before: XIAMEN HOME MEITU TECHNOLOGY Co.,Ltd.

GR01 Patent grant