CN112541966A - Face replacement method based on reconstruction and network generation - Google Patents


Info

Publication number
CN112541966A
Authority
CN
China
Prior art keywords
face
network
reconstruction
image
loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011425921.4A
Other languages
Chinese (zh)
Inventor
谭晓阳
蒋珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202011425921.4A
Publication of CN112541966A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face replacement method based on reconstruction and generation networks, which comprises the following steps: inputting the source face image and the target face image into a position map network for 3D face reconstruction, and adjusting the pose of the source face to the pose of the target face; generating mouth details with the mouth detail generation network; and removing global facial artifacts and noise with the global sharpening generation network, thereby obtaining a face replacement result that is hard for both human eyes and detection algorithms to identify as forged. The method can generate a realistic face replacement result from only a single pair of source and target face images, improves the efficiency of face replacement, is suitable for generating large numbers of face replacement results with different identities, and provides abundant adversarial samples for face forgery detection algorithms.

Description

Face replacement method based on reconstruction and generation networks
Technical Field
The invention belongs to the field of automatic control, and particularly relates to a face replacement method based on reconstruction and generation networks.
Background
Face replacement (face swap) is a popular research topic in face editing. The focus herein is primarily on high-quality face replacement for identity replacement. High-quality face replacement means that neither human eyes nor computer discriminator algorithms can easily recognize the replaced image as forged; face replacement for identity replacement means transferring the identity of the person in the source portrait to the person in the target portrait while keeping the other characteristics of the target portrait unchanged.
The current mainstream face replacement methods are DeepFakes methods, which follow two main lines of thought. Convolutional-network-based face replacement is a deep-learning approach that designs a series of convolutional networks to handle the problems faced when replacing a face, such as pose, skin color and illumination; however, the quality of the result depends on the design of these per-task networks, and poor design leaves obvious forgery traces. The other, dominant DeepFakes approach replaces faces with an auto-encoding model: two auto-encoder structures are first trained in a self-supervised manner on a source portrait set and a target portrait set respectively, and the two decoders are then swapped at test time. This improves performance considerably over earlier methods; in particular the DeepFaceLab method, which introduces a GAN, can generate more realistic forgeries. Still, some experimental samples show imperfect identity preservation, and by design DeepFakes methods require large numbers of source and target portraits for training and must be retrained separately for each source identity.
Another face replacement approach is based on 3D face reconstruction; however, because it lacks processing of the texture map after 3D reconstruction, its forgery traces remain rather obvious.
In short, DeepFakes-type methods are inefficient at generating face replacement results, while 3D-face-reconstruction-type methods lack effective processing of the texture map.
Disclosure of Invention
Purpose of the invention: aiming at the problems in the prior art, the invention provides a face replacement method based on reconstruction and generation networks that is high in quality and efficiency and suitable for generating large-scale adversarial sample sets.
Technical scheme: in order to solve the above technical problem, the present invention provides a face replacement method based on reconstruction and generation networks, comprising the following steps:
(1) Input the source face image and the target face image into a position map network for 3D face reconstruction.
(2) Adjust the pose of the source face model from step (1) to the pose of the target face.
(3) Generate mouth details on the result of step (2) using the mouth detail generation network.
(4) Remove global facial artifacts and noise from the result of step (3) using the global sharpening generation network, thereby obtaining a face replacement result that is hard for both human eyes and detection algorithms to identify as forged.
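Taken together, steps (1)-(4) amount to the following pipeline, shown here as a minimal Python sketch; every name and the identity stubs are illustrative assumptions, not the patent's API:

    import numpy as np

    # Identity stubs keep the sketch runnable; in practice these would be the
    # position map network, a renderer, the Mouth-GAN and the Sharpen-GAN.
    def prnet(img):
        return img                    # (1) 3D face reconstruction

    def repose_and_render(src_mesh, target_img):
        return target_img             # (2) pose alignment + rendering into the target frame

    def mouth_gan(img):
        return img                    # (3) mouth detail generation

    def sharpen_gan(img):
        return img                    # (4) artifact and noise removal

    def face_swap(source_img: np.ndarray, target_img: np.ndarray) -> np.ndarray:
        src_mesh = prnet(source_img)                      # reconstruct the source face
        coarse = repose_and_render(src_mesh, target_img)  # match the target pose
        return sharpen_gan(mouth_gan(coarse))             # steps (3) and (4)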
Further, the mouth detail generation network of step (3) is trained as follows:
(3.1) Establish a least-squares generative adversarial network to extract the discrimination error.
(3.2) Establish a VGG network to extract the reconstruction error.
(3.3) Extract the mouth pose features of the face image output by step (2).
(3.4) During training, the generator takes the mouth pose features of (3.3) as input; the discriminator takes the generator output and the mouth image of the target face image; and the VGG structure takes the generator output and the mouth image of the target face image. The network is trained with the discrimination error of (3.1) and the reconstruction error of (3.2); writing x for the mouth pose features and y for the mouth image of the target face, the loss functions are as follows:
    L_D = \tfrac{1}{2}\,\mathbb{E}_{y}\big[(D(y)-1)^2\big] + \tfrac{1}{2}\,\mathbb{E}_{x}\big[D(G(x))^2\big]

    L_G = \gamma\cdot\tfrac{1}{2}\,\mathbb{E}_{x}\big[(D(G(x))-1)^2\big] + \mathbb{E}_{x,y}\big[\lVert V(G(x)) - V(y)\rVert^2\big]
where L_D is the discriminator loss, L_G the generator loss, and V(x) denotes the output of x through the VGG structure, the resulting term being called the content loss (Content Loss); the parameter γ is the coefficient of the adversarial loss (Adversarial Loss).
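As a concrete illustration, a minimal PyTorch sketch of these two losses follows, assuming a standard least-squares GAN objective plus a VGG content term; the VGG layer cut-off, the default γ, and all names are illustrative assumptions rather than the patent's implementation:

    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg19, VGG19_Weights

    # V(x): features of x through a fixed, pretrained VGG; the layer cut-off
    # (first 16 layers of VGG-19) is an assumption for illustration.
    _vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:16].eval()
    for p in _vgg.parameters():
        p.requires_grad_(False)

    def vgg_feat(x: torch.Tensor) -> torch.Tensor:
        return _vgg(x)

    def mouth_d_loss(D, real_mouth, fake_mouth):
        # Least-squares discriminator loss: push D(real) to 1 and D(fake) to 0.
        return 0.5 * ((D(real_mouth) - 1) ** 2).mean() \
             + 0.5 * (D(fake_mouth.detach()) ** 2).mean()

    def mouth_g_loss(D, real_mouth, fake_mouth, gamma=0.01):
        # gamma weighs the adversarial term against the VGG content term.
        adv = 0.5 * ((D(fake_mouth) - 1) ** 2).mean()
        content = F.mse_loss(vgg_feat(fake_mouth), vgg_feat(real_mouth))
        return gamma * adv + content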
Further, the global sharpening generation network of step (4) is trained as follows:
(4.1) Establish a least-squares generative adversarial network to extract the discrimination error.
(4.2) Establish a VGG network to extract the reconstruction error.
(4.3) Establish a translator network that translates a face image into the corresponding semantic image, for extracting the semantic error.
(4.4) Carry out 3D face reconstruction on an original image, i.e., apply the operations of steps (1), (2) and (3) to it, to obtain a reconstructed image.
During training, the generator takes the reconstructed image of (4.4) as input, while the discriminator, the translator network, and the VGG network each take the generator output together with the original image of (4.4). The network is trained with the discrimination error of (4.1), the reconstruction error of (4.2), and the semantic error of (4.3); writing y for the original image and P(y) for its reconstruction, the loss functions are as follows:
    L_D = \tfrac{1}{2}\,\mathbb{E}_{y}\big[(D(y)-1)^2\big] + \tfrac{1}{2}\,\mathbb{E}_{y}\big[D(G(P(y)))^2\big]

    L_G = \alpha\cdot\tfrac{1}{2}\,\mathbb{E}_{y}\big[(D(G(P(y)))-1)^2\big] + \beta\,\mathbb{E}_{y}\big[\lVert V(G(P(y))) - V(y)\rVert^2\big] + \mathbb{E}_{y}\big[\lVert T(G(P(y))) - T(y)\rVert^2\big]
where L_D is the discriminator loss, L_G the generator loss, V(x) denotes the VGG features of x (whose term is the content loss), P(x) the result of passing x through PR-Net, and T(x) the translator output, whose term is referred to herein as the semantic loss. The coefficients α and β weight the adversarial loss and the reconstruction loss, respectively.
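Continuing the sketch above (and reusing its vgg_feat helper and F import), the Sharpen-GAN objective could look as follows; treating the generator input as the reconstruction P(y), and the unit weight on the semantic term, are assumptions:

    def sharpen_g_loss(G, D, T, original, reconstructed, alpha=0.01, beta=1.0):
        # Generator input is the reconstructed image P(y); the original image y
        # is the target for the content (VGG) and semantic (translator) terms.
        fake = G(reconstructed)
        adv = 0.5 * ((D(fake) - 1) ** 2).mean()               # adversarial term (alpha)
        rec = F.mse_loss(vgg_feat(fake), vgg_feat(original))  # reconstruction term (beta)
        sem = F.mse_loss(T(fake), T(original))                # semantic term
        return alpha * adv + beta * rec + sem

    def sharpen_d_loss(G, D, original, reconstructed):
        fake = G(reconstructed).detach()
        return 0.5 * ((D(original) - 1) ** 2).mean() + 0.5 * (D(fake) ** 2).mean()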
The method is an efficient, high-quality method based on 3D face reconstruction: it organically combines the position map network of a 3D face reconstruction algorithm with generative adversarial networks that process the details of the texture map, and is called a Reconstruction-Generation Network (EGNet). The new structure not only generates face replacement results of quality comparable to DeepFakes methods, but also retains the efficiency of 3D-face-reconstruction-based methods: high-quality face replacement results can be generated from only a single pair of source and target face images, without retraining.
Compared with the prior art, the invention has the following advantages:
1. DeepFakes-type face replacement methods such as DeepFaceLab require retraining for each different source face image; this method does not, which greatly improves efficiency and makes it better suited to generating diverse, large-scale data.
2. 3D-face-reconstruction-type face replacement methods, such as the method of Nirkin et al., handle the texture map poorly; the reconstruction-generation network mode of this method inherits the efficiency of 3D face reconstruction while making up for the deficiencies of texture-map processing, in particular the loss of mouth detail.
3. Prior methods mainly aim to satisfy human vision; this method also considers the requirements of algorithmic detection, so that mainstream face replacement detection methods have difficulty identifying its results.
Drawings
FIG. 1 is a general flow diagram of the reconstruction-generation network mode in an exemplary embodiment;
FIG. 2 is a schematic structural diagram of the mouth detail generation network in an embodiment;
FIG. 3 is a schematic structural diagram of the global sharpening generation network in an embodiment.
Detailed Description
The invention is further elucidated with reference to the drawings and the detailed description. The described embodiments of the present invention are only some embodiments of the present invention, and not all embodiments. Based on the embodiments of the present invention, other embodiments obtained by a person of ordinary skill in the art without any creative effort belong to the protection scope of the present invention.
As shown in FIGS. 1-3, the face replacement method based on reconstruction and generation networks according to the present invention designs a simple and efficient reconstruction-generation network mode; the new method requires no repeated training, has a low sample-size requirement, and produces results of high quality.
The reconstruction-generation network mode comprises the following steps:
(1) Input the source face image and the target face image into the position map regression network PRNet for 3D face reconstruction, and adjust the pose of the source face model to the pose of the target face (a geometric sketch of this pose adjustment is given after the loss definitions below).
(2) Generate mouth details on the result of step (1) using the Mouth detail generation network (Mouth-GAN), thereby solving the loss of mouth detail in 3D face reconstruction methods. The network is trained with the discrimination error of (3.1) and the reconstruction error of (3.2) above; writing x for the mouth pose features and y for the target mouth image, the loss functions are as follows:
    L_D = \tfrac{1}{2}\,\mathbb{E}_{y}\big[(D(y)-1)^2\big] + \tfrac{1}{2}\,\mathbb{E}_{x}\big[D(G(x))^2\big]

    L_G = \gamma\cdot\tfrac{1}{2}\,\mathbb{E}_{x}\big[(D(G(x))-1)^2\big] + \mathbb{E}_{x,y}\big[\lVert V(G(x)) - V(y)\rVert^2\big]
where L_D is the discriminator loss, L_G the generator loss, and V(x) denotes the output of x through the VGG structure, the resulting term being called the content loss (Content Loss); the parameter γ is the coefficient of the adversarial loss (Adversarial Loss).
(3) Remove global facial artifacts and noise from the result of step (2) using the global sharpening generation network (Sharpen-GAN), thereby obtaining a face replacement result that is hard for both human eyes and detection algorithms to identify as forged. During training, the generator takes the reconstructed image as input, while the discriminator, the translator network, and the VGG network each take the generator output together with the original image. The network is trained with the discrimination error, the reconstruction error, and the semantic error; writing y for the original image and P(y) for its reconstruction, the loss functions are as follows:
    L_D = \tfrac{1}{2}\,\mathbb{E}_{y}\big[(D(y)-1)^2\big] + \tfrac{1}{2}\,\mathbb{E}_{y}\big[D(G(P(y)))^2\big]

    L_G = \alpha\cdot\tfrac{1}{2}\,\mathbb{E}_{y}\big[(D(G(P(y)))-1)^2\big] + \beta\,\mathbb{E}_{y}\big[\lVert V(G(P(y))) - V(y)\rVert^2\big] + \mathbb{E}_{y}\big[\lVert T(G(P(y))) - T(y)\rVert^2\big]
where L_D is the discriminator loss, L_G the generator loss, V(x) denotes the VGG features of x (whose term is the content loss), P(x) the result of passing x through PR-Net, and T(x) the translator output, whose term is referred to herein as the semantic loss. The coefficients α and β weight the adversarial loss and the reconstruction loss, respectively.
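As a geometric illustration of the pose adjustment in step (1) above: assuming the reconstruction yields a vertex array and Euler pose angles for each face, the source mesh can be rotated into the target pose. The angle convention and all function names here are assumptions:

    import numpy as np

    def rotation_matrix(yaw: float, pitch: float, roll: float) -> np.ndarray:
        """Z-Y-X Euler rotation; the angle convention is an assumption."""
        cy, sy = np.cos(yaw), np.sin(yaw)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cr, sr = np.cos(roll), np.sin(roll)
        Rz = np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
        Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
        Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
        return Rz @ Ry @ Rx

    def repose(src_vertices: np.ndarray, src_pose, tgt_pose) -> np.ndarray:
        """Rotate (N, 3) source mesh vertices from the source pose to the target pose."""
        R = rotation_matrix(*tgt_pose) @ rotation_matrix(*src_pose).T
        return src_vertices @ R.T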
The trained model of each module needs only its generator at test time. The test indexes are: Structural Similarity (SSIM), Pose Error (PE), Skin color Error (SE), and Identity Distance (ID). These four indexes together measure the quality of a face replacement image and comprehensively evaluate the forgery quality of the forged image. SSIM is a common index for measuring image quality; it evaluates whether the forged image shows distortion or noise compared with the target portrait. The other three are evaluation indexes proposed herein: the pose error evaluates whether the poses of the foreground and background in the forged image are consistent; the skin color error evaluates whether the skin colors of the foreground and background are consistent and whether an obvious forgery boundary exists; and the identity distance evaluates whether the forged image successfully transfers the identity of the source portrait, achieving the goal of identity replacement. These indexes are used in the subsequent experiments to measure the performance of face replacement algorithms.
Pose Error (PE): if the pose of the person in the face replacement result differs from the pose of the target portrait, an obvious visual flaw results. The pose error is therefore proposed to measure how well the algorithm adjusts the pose. It is defined as the distance D between the two portraits, combining the horizontal and vertical pose differences:
    D(I_1, I_2) = \Delta_h(I_1, I_2) + \gamma\,\Delta_v(I_1, I_2)
where I_1 and I_2 are the two face images to be measured, γ weights the horizontal direction relative to the vertical one, and Δ(·) denotes the difference between the two images on the given pose index.
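A direct reading of this definition, as a hedged Python sketch (the pose extraction itself, e.g. from the reconstructed 3D face, is assumed):

    def pose_error(pose1, pose2, gamma=1.0):
        """pose = (horizontal angle, vertical angle) of a face, e.g. (yaw, pitch)."""
        d_h = abs(pose1[0] - pose2[0])   # difference on the horizontal index
        d_v = abs(pose1[1] - pose2[1])   # difference on the vertical index
        return d_h + gamma * d_v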
Skin color Error (SE): if the skin colors of the source and target portraits differ and are left unprocessed, poor results are produced. A large number of samples indicates that the difference in skin color is concentrated mainly in the forehead region. The sampled face pixels F and forehead pixels T are compared to obtain the skin color error θ:

    \theta = \big\lVert \mathrm{norm}(\bar{F}) - \mathrm{norm}(\bar{T}) \big\rVert

where F̄ and T̄ denote the mean values of the face pixels F and the forehead pixels T, and norm(·) normalizes them.
Identity Distance (ID): since the primary purpose of the face replacement method described herein is identity replacement, the method is considered invalid if it cannot effectively replace the identity of a person. An index is therefore needed to measure identity preservation, namely the identity distance between the source portrait and the replacement result. Inspired by the FaceNet face recognition method, the measurement regresses each face image through a deep network into a Euclidean space satisfying the following properties:
• if two portraits have the same identity, their corresponding points are as close as possible;
• if two portraits have different identities, their corresponding points are as far apart as possible.
The objective function of the deep network, written with f_θ as the embedding, is as follows:

    \min_{\theta}\; \mathbb{E}_{d\sim\mathcal{D}}\,\mathbb{E}_{x,y\sim d}\big[\lVert f_{\theta}(x)-f_{\theta}(y)\rVert^2\big] \;-\; \gamma\,\mathbb{E}_{x\sim d,\; y\sim\mathcal{D}\setminus d}\big[\lVert f_{\theta}(x)-f_{\theta}(y)\rVert^2\big]
where θ is the target parameter of the deep network, \mathcal{D} is the distribution formed by all face images, d denotes the distribution of face images of a single identity, x and y are face image samples, and γ balances the two emphases. The trained deep network can then measure the identity distance between the face replacement result and the source face. In the experiments, the network is a model trained on VGGFace2, and 1.1 was selected as the threshold through extensive testing: if the identity distance exceeds 1.1, the face replacement is judged to have failed to transfer the identity effectively; if it is below 1.1, the replacement is judged effective.
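In use, the index reduces to a threshold test; a minimal sketch, with `embed` standing in for the VGGFace2-trained embedding network (an assumption, not a concrete library call):

    import numpy as np

    def identity_preserved(embed, source_img, swapped_img, threshold=1.1) -> bool:
        """`embed` maps a face image to a Euclidean embedding vector."""
        d = float(np.linalg.norm(embed(source_img) - embed(swapped_img)))
        return d < threshold   # below 1.1: identity judged effectively transferred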
A comparison of the present method with the mainstream methods is shown in Table 1.
Table 1: comparison of performance indexes of the present method against the DeepFakes method and the method of Nirkin et al.; indexes marked "↑" are better when higher, and indexes marked "↓" are better when lower.
[Table 1 is rendered as an image in the source and its values are not reproduced here.]
The above results show that the method holds a clear advantage over the mainstream methods on all four face replacement indexes. In particular, on SSIM, which measures image quality, the use of the global sharpening generation network brings a larger improvement. The method also inherits the pose accuracy, identity preservation, and replacement efficiency of the 3D face reconstruction approach while generating image quality comparable to DeepFakes methods. The method therefore has clear application value and prospects.

Claims (3)

1. A face replacement method based on reconstruction and generation networks, characterized by comprising the following steps:
(1) inputting the source face image and the target face image into a position map network for 3D face reconstruction;
(2) adjusting the pose of the source face model from step (1) to the pose of the target face;
(3) generating mouth details on the result of step (2) using the mouth detail generation network;
(4) removing global facial artifacts and noise from the result of step (3) using the global sharpening generation network, thereby obtaining a face replacement result that is hard for both human eyes and detection algorithms to identify as forged.
2. The face replacement method based on reconstruction and generation networks as claimed in claim 1, characterized in that the mouth details in step (3) are generated by the following specific steps:
(3.1) establishing a least-squares generative adversarial network for extracting the discrimination error;
(3.2) establishing a VGG network for extracting the reconstruction error;
(3.3) extracting the mouth pose features of the face image output by step (2);
(3.4) during training, the generator takes the mouth pose features of (3.3) as input, the discriminator takes the generator output and the mouth image of the target face image, and the VGG structure takes the generator output and the mouth image of the target face image; the network is trained with the discrimination error of (3.1) and the reconstruction error of (3.2), and, writing x for the mouth pose features and y for the target mouth image, the loss functions are as follows:
    L_D = \tfrac{1}{2}\,\mathbb{E}_{y}\big[(D(y)-1)^2\big] + \tfrac{1}{2}\,\mathbb{E}_{x}\big[D(G(x))^2\big]

    L_G = \gamma\cdot\tfrac{1}{2}\,\mathbb{E}_{x}\big[(D(G(x))-1)^2\big] + \mathbb{E}_{x,y}\big[\lVert V(G(x)) - V(y)\rVert^2\big]
where L_D is the discriminator loss, L_G the generator loss, and V(x) denotes the output of x through the VGG structure, the resulting term being called the content loss; the parameter γ is the coefficient of the adversarial loss.
3. The face replacement method based on reconstruction and generation networks as claimed in claim 1, characterized in that the global sharpening generation network in step (4) is used by the following specific steps:
(4.1) establishing a least-squares generative adversarial network to extract the discrimination error;
(4.2) establishing a VGG network to extract the reconstruction error;
(4.3) establishing a translator network that translates a face image into the corresponding semantic image, for extracting the semantic error;
(4.4) carrying out 3D face reconstruction on the original image to obtain a reconstructed image;
during training, the generator takes the reconstructed image of (4.4) as input, and the discriminator, the translator network, and the VGG network each take the generator output together with the original image of (4.4); the network is trained with the discrimination error of (4.1), the reconstruction error of (4.2), and the semantic error of (4.3), and, writing y for the original image and P(y) for its reconstruction, the loss functions are as follows:
    L_D = \tfrac{1}{2}\,\mathbb{E}_{y}\big[(D(y)-1)^2\big] + \tfrac{1}{2}\,\mathbb{E}_{y}\big[D(G(P(y)))^2\big]

    L_G = \alpha\cdot\tfrac{1}{2}\,\mathbb{E}_{y}\big[(D(G(P(y)))-1)^2\big] + \beta\,\mathbb{E}_{y}\big[\lVert V(G(P(y))) - V(y)\rVert^2\big] + \mathbb{E}_{y}\big[\lVert T(G(P(y))) - T(y)\rVert^2\big]
wherein L_D is the discriminator loss, L_G the generator loss, V(x) denotes the content loss of x, P(x) the result of passing x through PR-Net, and T(x) the translator output, whose term is referred to as the semantic loss; the coefficients α and β are the coefficients of the adversarial loss and the reconstruction loss, respectively.
CN202011425921.4A 2020-12-09 2020-12-09 Face replacement method based on reconstruction and network generation Pending CN112541966A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011425921.4A CN112541966A (en) 2020-12-09 2020-12-09 Face replacement method based on reconstruction and network generation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011425921.4A CN112541966A (en) 2020-12-09 2020-12-09 Face replacement method based on reconstruction and network generation

Publications (1)

Publication Number Publication Date
CN112541966A true CN112541966A (en) 2021-03-23

Family

ID=75019544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011425921.4A Pending CN112541966A (en) 2020-12-09 2020-12-09 Face replacement method based on reconstruction and network generation

Country Status (1)

Country Link
CN (1) CN112541966A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349232A (en) * 2019-06-17 2019-10-18 达闼科技(北京)有限公司 Generation method, device, storage medium and the electronic equipment of image
CN110706157A (en) * 2019-09-18 2020-01-17 中国科学技术大学 Face super-resolution reconstruction method for generating confrontation network based on identity prior
CN111861872A (en) * 2020-07-20 2020-10-30 广州市百果园信息技术有限公司 Image face changing method, video face changing method, device, equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734634A (en) * 2021-03-30 2021-04-30 中国科学院自动化研究所 Face changing method and device, electronic equipment and storage medium
CN113240575A (en) * 2021-05-12 2021-08-10 中国科学技术大学 Face counterfeit video effect enhancement method
CN113240575B (en) * 2021-05-12 2024-05-21 中国科学技术大学 Face fake video effect enhancement method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination