WO2021042774A1 - Image restoration method, image restoration network training method, device, and storage medium - Google Patents

Image restoration method, image restoration network training method, device, and storage medium

Info

Publication number
WO2021042774A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
quality
training
low
network
Prior art date
Application number
PCT/CN2020/093142
Other languages
English (en)
French (fr)
Inventor
贾旭
戴恩炎
王云鹤
许春景
刘健庄
田奇
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2021042774A1 publication Critical patent/WO2021042774A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/60: Image enhancement or restoration using machine learning, e.g. neural networks
    • G06T 5/70: Denoising; Smoothing
    • G06T 5/73: Deblurring; Sharpening
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/08: Learning methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]

Definitions

  • This application relates to the field of computer vision, and more specifically, to an image restoration method, image restoration network training method, device, and storage medium.
  • Computer vision is an integral part of various intelligent/autonomous systems in application fields such as manufacturing, inspection, document analysis, medical diagnosis, and the military. It studies how to use cameras/video cameras and computers to obtain the data and information of a photographed subject that we need. Vividly speaking, it means installing eyes (a camera/video camera) and a brain (algorithms) on a computer so that it can identify, track, and measure targets in place of the human eye, enabling the computer to perceive the environment. Because perception can be seen as extracting information from sensory signals, computer vision can also be seen as the science of how to make artificial systems "perceive" from images or multi-dimensional data.
  • In general, computer vision uses various imaging systems in place of the visual organs to obtain input information, and then the computer replaces the brain to process and interpret that input information.
  • The ultimate research goal of computer vision is to enable computers to observe and understand the world through vision as humans do, and to adapt to the environment autonomously.
  • The traditional method for image restoration mainly includes the following process: first, obtain some high-quality images, and add blur or noise to these high-quality images to obtain synthetic low-quality images paired with the original high-quality images; second, train a neural network model on these paired high-quality images and synthetic low-quality images to obtain a trained neural network model; finally, use the trained neural network model to perform image restoration on an input low-quality image to obtain the corresponding high-quality image.
  • This application provides an image restoration network training method, image restoration method, device, and storage medium, so as to train an image restoration network with better image restoration performance.
  • In a first aspect, a method for training an image restoration network is provided, which includes: obtaining a plurality of training image pairs; and training the image restoration network according to the plurality of training image pairs until the image restoration performance of the image restoration network meets a preset requirement.
  • Each training image pair in the above multiple training image pairs includes a real high-quality image and an approximate real low-quality image. The approximate real low-quality image in each training image pair is obtained by processing the real high-quality image in that training image pair, and the difference between the image sharpness of the approximate real low-quality image in each training image pair and the image sharpness of an existing real low-quality image is within a preset range.
  • The above-mentioned existing real low-quality image and the image to be restored are collected by the same type of device, where the image to be restored is the image processed by the image restoration network after training is completed. That is to say, the existing real low-quality image and the image to be restored, which is processed by the trained image restoration network during subsequent image restoration, are collected using the same type of device.
  • the above-mentioned image restoration network may also be referred to as an image restoration model, and the image restoration network may be a neural network.
  • the training image pairs can be used batch by batch to train the image restoration network. Each batch can use one or more training image pairs.
  • The same type of device mentioned above may refer to devices of exactly the same model, for example, cameras of the same model, terminal devices of the same model, video cameras of the same model, and so on.
  • In this application, the approximate real low-quality images contained in the training image pairs are obtained by processing real high-quality images so that they approximate real low-quality images; that is to say, the approximate real low-quality images contained in the training image pairs are relatively close to real low-quality images. Therefore, training the image restoration network on the multiple training image pairs as proposed in this application can yield an image restoration network with a better restoration effect on real low-quality images, so that subsequent image restoration based on the trained image restoration network also achieves a better restoration effect.
  • In some implementations, training the image restoration network based on the multiple training image pairs until the image restoration performance of the image restoration network meets the preset requirements includes the following steps:
  • Step 1: Initialize the network parameters of the image restoration network to obtain initial values of the network parameters of the image restoration network.
  • Step 2: Input the approximate real low-quality images of at least one of the multiple training image pairs into the image restoration network for processing, so as to obtain at least one restored high-quality image.
  • Step 3: Determine the function value of the loss function according to the difference between the at least one restored high-quality image and the real high-quality images in the at least one training image pair.
  • Step 4: Update the network parameters of the image restoration network according to the function value of the loss function.
  • In step 1, the network parameters of the image restoration network can be randomly initialized, so that the network parameters initially take some random values.
  • The loss function in step 3 above can also be referred to as an image loss function.
  • The greater the difference between the at least one restored high-quality image and the real high-quality images in the at least one training image pair, the greater the function value of the image loss function; the smaller that difference, the smaller the function value of the image loss function.
  • In step 4, the network parameters of the image restoration network can be changed in the direction that reduces the function value of the loss function, as in the sketch below.
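  • The following is a minimal PyTorch-style sketch of steps 1 to 4; the framework, the RestorationNet stand-in architecture, the Adam optimizer, and the learning rate are illustrative assumptions, not prescribed by this application.

      import torch
      import torch.nn as nn

      class RestorationNet(nn.Module):
          # A small stand-in for the image restoration network.
          def __init__(self):
              super().__init__()
              self.body = nn.Sequential(
                  nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 3, 3, padding=1))

          def forward(self, x):
              return self.body(x)

      net = RestorationNet()                   # step 1: parameters start from random initial values
      optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
      mse = nn.MSELoss()                       # image loss function (mean square error)

      def train_step(approx_real_low, real_high):
          restored = net(approx_real_low)      # step 2: restore approximate real low-quality images
          loss = mse(restored, real_high)      # step 3: loss from difference to real high-quality images
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()                     # step 4: update parameters to reduce the loss value
          return loss.item()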
  • The foregoing image restoration network meeting the preset requirements includes the image restoration network meeting at least one of conditions (1) to (3).
  • When the image restoration network meets at least one of conditions (1) to (3), the image restoration network meets the preset requirements and the training process ends; when the image restoration network meets none of conditions (1) to (3), the image restoration network has not yet met the preset requirements, and it is necessary to continue training it, that is, to repeat steps 2 to 4 above until the image restoration network meets the preset requirements.
  • In some implementations, the aforementioned loss function includes the mean square error between the at least one restored high-quality image and the real high-quality images in the at least one training image pair.
  • The mean square error between two images refers to the mean of the squared differences of the pixel values at corresponding positions of the two images, as in the sketch below.
  • Specifically, the aforementioned loss function may be the average of at least one first loss function value, where each of the at least one first loss function values is the mean square error between one restored high-quality image of the at least one restored high-quality image and the corresponding real high-quality image.
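  • As a concrete illustration, the mean square error between two images of identical shape can be computed as follows (a NumPy sketch; uint8 inputs are assumed and promoted to float first):

      import numpy as np

      def image_mse(a: np.ndarray, b: np.ndarray) -> float:
          # Mean of the squared pixel-value differences at corresponding positions.
          diff = a.astype(np.float64) - b.astype(np.float64)
          return float(np.mean(diff ** 2))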
  • In some implementations, the aforementioned loss function further includes the perceptual loss and the adversarial loss of the at least one restored high-quality image relative to the real high-quality images in the at least one training image pair.
  • The perceptual loss between two images may refer to the mean of the squared two-norm of the differences at corresponding positions between the feature maps of the two images.
  • The adversarial loss is used to determine whether one image distribution is similar to another image distribution, and the adversarial loss can be described by a discriminant neural network.
  • When the image distributions of two images are similar, inputting the two images into the discriminant neural network yields the output that the image distributions of the two images are the same; when the image distributions of the two images differ greatly, inputting the two images into the discriminant neural network yields the output that the image distributions of the two images are not the same.
  • Whether the image distributions of two images are the same can be judged using a discriminant neural network.
  • Specifically, the discriminant neural network can perform feature extraction and feature transformation on the two images respectively, map them to a feature space, and then use a binary classifier to give the probability that an image is synthetic and the probability that it is real.
  • When the classifier cannot distinguish whether an image is synthetic or real, that is, when the difference between the probability of an image being synthetic and the probability of it being real output by the classifier is less than a certain threshold, the two are considered indistinguishable, and the image distributions of the two images are regarded as the same.
  • Because the above loss function includes multiple losses such as the mean square error, the perceptual loss, and the adversarial loss, the information reflected by the loss function is more comprehensive; therefore, using this loss function during training can produce an image restoration network with better image restoration performance. A combined-loss sketch follows.
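  • In the sketch below, the feature extractor used for the perceptual loss, the discriminator used for the adversarial loss, and the weights of the three terms are placeholders chosen for illustration, since this application does not prescribe them.

      import torch
      import torch.nn.functional as F

      def combined_loss(restored, real_high, feature_net, discriminator,
                        w_mse=1.0, w_perc=0.1, w_adv=0.01):
          mse = F.mse_loss(restored, real_high)
          # perceptual loss: mean squared difference between feature maps
          perc = F.mse_loss(feature_net(restored), feature_net(real_high))
          # adversarial loss: the discriminator should judge restored images as real
          logits = discriminator(restored)
          adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
          return w_mse * mse + w_perc * perc + w_adv * adv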
  • In some implementations, the above-mentioned image sharpness includes at least one of the degree of image blur, the image noise distribution, and the image resolution.
  • In some implementations, obtaining the approximate real low-quality image in each training image pair by processing the real high-quality image in that pair includes: processing the real high-quality image in each training image pair with at least one of blurring processing, noise-adding processing, and down-sampling processing, as sketched below.
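  • For illustration, the three degradations can be applied with OpenCV/NumPy as below; the kernel size, noise level, and down-sampling factor are example values only.

      import cv2
      import numpy as np

      def degrade(high_quality: np.ndarray, ksize=5, noise_sigma=5.0, scale=2):
          img = cv2.GaussianBlur(high_quality, (ksize, ksize), 0)           # blurring
          noise = np.random.normal(0.0, noise_sigma, img.shape)             # noise adding
          img = np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)
          h, w = img.shape[:2]                                              # down-sampling
          return cv2.resize(img, (w // scale, h // scale), interpolation=cv2.INTER_AREA)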
  • In some implementations, the above-mentioned existing real low-quality image and the image to be restored being collected by the same type of device includes: the existing real low-quality image is acquired by a first device, and the image to be restored is acquired by a second device, where the device types of the first device and the second device are the same, the image acquisition parameters of the first device are the same as those of the second device, and the image acquisition parameters include at least one of focal length, exposure, and shutter time.
  • Because the existing real low-quality image and the image to be restored are acquired by devices of the same type using the same image acquisition parameters, the existing real low-quality image is closer to the image to be restored, so that the image restoration network trained in this application with reference to the existing real low-quality image has a better restoration effect when processing the image to be restored.
  • In some implementations, the approximate real low-quality image in each training image pair is obtained by performing adjustment processing on the real high-quality image in that training image pair.
  • The adjustment processing is used to adjust the image sharpness of the real high-quality image in each of the above-mentioned training image pairs, so that the image sharpness of the approximate real low-quality image in each training image pair is as close as possible to the image sharpness of the existing real low-quality image.
  • Here, the image sharpness of the approximate real low-quality image in each training image pair being as close as possible to that of the existing real low-quality image can specifically mean that the difference between the image sharpness of the approximate real low-quality image in each training image pair and the image sharpness of the existing real low-quality image is within a preset range, where the preset range can be set flexibly according to actual needs.
  • Specifically, the image sharpness of the real high-quality image in each training image pair can be adjusted so that the sharpness difference between the adjusted image and the existing real low-quality image is smallest; the adjusted image is then the approximate real low-quality image in that training image pair.
  • The above-mentioned adjustment processing of the real high-quality image in each training image pair can be direct blurring, noise-adding, or down-sampling of the real high-quality image in each training image pair, or the real high-quality image in each training image pair can be processed by an image generation network.
  • In some implementations, the approximate real low-quality image in each training image pair described above is obtained by processing the real high-quality image in that training image pair with a pre-trained image generation network.
  • The above-mentioned image generation network can be used to convert real high-quality images into approximate real low-quality images.
  • In some implementations, the image generation network can be obtained by training on multiple real high-quality images and multiple real low-quality images. Specifically, during training, the multiple real high-quality images can be input into the image generation network, so that the difference between the image sharpness of the output images and the image sharpness of any one of the multiple real low-quality images mentioned above is as small as possible.
  • Training ends when the trained image generation network meets the preset requirements.
  • The conditions for ending training can be set flexibly according to the actual situation, for example, training can end when the image processing performance of the trained image generation network meets the preset requirements.
  • During the training of the image generation network, it is also possible to first perform synthesis processing on multiple real high-quality images to obtain multiple synthetic low-quality images, and then train the image generation network using the multiple synthetic low-quality images and the multiple real low-quality images.
  • In some implementations, the sharpness difference between the image sharpness of the approximate real low-quality image in each training image pair and the image sharpness of the existing real low-quality image being within the preset range includes: the distance between the feature vector of the approximate real low-quality image in each training image pair and the feature vector of the existing real low-quality image is less than a preset distance.
  • The above-mentioned preset distance can be set flexibly according to the actual situation.
  • The feature vector of the approximate real low-quality image in each training image pair can be obtained by a discriminant neural network (a kind of neural network) through feature extraction on the approximate real low-quality image in each training image pair; likewise, the feature vector of the existing real low-quality image can be obtained by the discriminant neural network through feature extraction on the existing real low-quality image.
  • When the discriminant neural network extracts the feature vectors of the two images, the same convolution parameters may be used, and the specific values of the convolution parameters may be obtained by training the discriminant neural network. A distance computation is sketched below.
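  • A distance check of this kind can be sketched as follows; feature_extractor is assumed to be the trained convolutional part of the discriminant neural network, and the Euclidean norm is one possible choice of distance.

      import torch

      @torch.no_grad()
      def feature_distance(img_a, img_b, feature_extractor) -> float:
          fa = feature_extractor(img_a).flatten(start_dim=1)   # feature vector of the first image
          fb = feature_extractor(img_b).flatten(start_dim=1)   # feature vector of the second image
          return torch.norm(fa - fb, dim=1).mean().item()      # Euclidean distance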
  • In some implementations, the foregoing acquiring of multiple training image pairs includes: determining the multiple training image pairs from an initial training image set, where, for each of the multiple training image pairs, the difference between the image sharpness of the approximate real low-quality image and the image sharpness of the existing real low-quality image is less than a preset threshold.
  • The above preset threshold can be set flexibly according to the actual situation: if the requirement on the sharpness difference during training is strict, a smaller preset threshold can be set; if the requirement on the sharpness difference is less strict, a larger preset threshold can be set.
  • By selecting training image pairs whose sharpness difference is below the threshold, as sketched below, the training effect can be improved, so that the image restoration network obtained after training has better image restoration performance.
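  • Reusing feature_distance from the sketch above, such a selection step might look as follows; the threshold value is an arbitrary example.

      def select_training_pairs(initial_pairs, existing_real_low,
                                feature_extractor, threshold=0.5):
          # Keep only pairs whose approximate real low-quality image is close
          # enough (in feature space) to the existing real low-quality image.
          return [(high, low) for (high, low) in initial_pairs
                  if feature_distance(low, existing_real_low, feature_extractor) < threshold]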
  • In a second aspect, a method for training an image restoration network is provided, which includes the following steps (a condensed sketch of the forward passes follows the list):
  • Step A: Initialize the network parameters of the image generation network and the network parameters of the image restoration network to obtain the initial values of the network parameters of the image generation network and the initial values of the network parameters of the image restoration network.
  • Step B: Input at least one synthetic low-quality image of the multiple synthetic low-quality images into the first generation network in the image generation network for processing, so as to obtain at least one approximate real low-quality image.
  • Step C: Input at least one real low-quality image of the multiple real low-quality images into the second generation network in the image generation network for processing, to obtain at least one approximate synthetic low-quality image.
  • Step D: Input the at least one approximate real low-quality image into the second generation network for processing to obtain at least one reconstructed synthetic low-quality image.
  • Step E: Input the at least one approximate synthetic low-quality image into the first generation network for processing to obtain at least one reconstructed real low-quality image.
  • Step F: Input the at least one approximate real low-quality image into the image restoration network for processing to obtain at least one restored high-quality image.
  • Step G: Determine the loss function, which includes a first loss term, a second loss term, and a third loss term.
  • Step H: Update the network parameters of the image generation network and the image restoration network according to the function value of the loss function.
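  • The forward passes of steps B to F can be condensed as below; G1, G2, and R stand for the first generation network, the second generation network, and the image restoration network, whose architectures this application does not fix.

      def forward_passes(G1, G2, R, synth_low, real_low):
          approx_real = G1(synth_low)      # step B: approximate real low-quality images
          approx_synth = G2(real_low)      # step C: approximate synthetic low-quality images
          recon_synth = G2(approx_real)    # step D: reconstructed synthetic low-quality images
          recon_real = G1(approx_synth)    # step E: reconstructed real low-quality images
          restored = R(approx_real)        # step F: restored high-quality images
          return approx_real, approx_synth, recon_synth, recon_real, restored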
  • The multiple synthetic low-quality images in step B above are obtained by separately performing synthesis processing on multiple real high-quality images, where the synthesis processing includes at least one of blurring processing, noise-adding processing, and down-sampling processing.
  • The multiple real low-quality images in step C above and the image to be restored are collected by the same type of device, where the image to be restored is the image processed by the image restoration network after training is completed.
  • The same type of device mentioned above may refer to devices of exactly the same model, for example, cameras of the same model, terminal devices of the same model, video cameras of the same model, and so on.
  • The above-mentioned existing real low-quality image and the image to be restored being collected by the same type of device specifically includes: the existing real low-quality image is acquired by a first device, and the image to be restored is acquired by a second device, where the device types of the first device and the second device are the same, the image acquisition parameters of the first device are the same as those of the second device, and the image acquisition parameters include at least one of focal length, exposure, and shutter time.
  • Because the existing real low-quality image and the image to be restored are acquired by devices of the same type using the same image acquisition parameters, the existing real low-quality image is closer to the image to be restored, so that the image restoration network trained in this application with reference to the existing real low-quality image has a better restoration effect when processing the image to be restored.
  • The first loss term, the second loss term, and the third loss term are respectively as follows:
  • the first loss term includes the adversarial loss of the at least one approximate real low-quality image relative to any one of the multiple real low-quality images, and the adversarial loss of the at least one approximate synthetic low-quality image relative to any one of the multiple synthetic low-quality images;
  • the second loss term includes the difference between the pixel values of the at least one reconstructed synthetic low-quality image and the pixel values of the at least one synthetic low-quality image, and the difference between the pixel values of the at least one reconstructed real low-quality image and the pixel values of the at least one real low-quality image;
  • the third loss term includes the mean square error between the at least one restored high-quality image and at least one real high-quality image among multiple real high-quality images, where the at least one synthetic low-quality image is obtained by performing synthesis processing on the at least one real high-quality image. A sketch of the three terms follows.
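  • The three terms can be sketched as follows; D_real and D_synth are assumed discriminators for the real and synthetic low-quality domains, and the L1 norm is one way to measure the pixel-value differences of the second term, neither being prescribed by this application.

      import torch
      import torch.nn.functional as F

      def joint_loss(approx_real, approx_synth, recon_synth, recon_real, restored,
                     synth_low, real_low, real_high, D_real, D_synth):
          logits_r, logits_s = D_real(approx_real), D_synth(approx_synth)
          # first term: adversarial losses of both generated image sets
          adv = (F.binary_cross_entropy_with_logits(logits_r, torch.ones_like(logits_r)) +
                 F.binary_cross_entropy_with_logits(logits_s, torch.ones_like(logits_s)))
          # second term: pixel-value differences of the reconstructed images
          rec = F.l1_loss(recon_synth, synth_low) + F.l1_loss(recon_real, real_low)
          # third term: mean square error of the restored high-quality images
          res = F.mse_loss(restored, real_high)
          return adv + rec + res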
  • Through joint training of the image generation network and the image restoration network, the approximate real low-quality images generated by the image generation network can be closer to real low-quality images, so that the finally trained image restoration network has better image restoration performance.
  • In some implementations, the above-mentioned loss function further includes a fourth loss term, and the fourth loss term includes the perceptual loss and the adversarial loss of the at least one restored high-quality image relative to the at least one real high-quality image.
  • Because the aforementioned loss function also includes the fourth loss term, the information reflected by the loss function is more comprehensive; therefore, using this loss function during training can produce an image restoration network with better image restoration performance.
  • In some implementations, the above-mentioned loss function further includes a fifth loss term.
  • In this case, the above-mentioned method further includes: inputting the at least one synthetic low-quality image into the second generation network for processing to obtain at least one converted synthetic low-quality image; and inputting the at least one real low-quality image into the first generation network for processing to obtain at least one converted real low-quality image. The fifth loss term includes the difference between the pixel values of the at least one converted real low-quality image and the pixel values of the at least one real low-quality image, and the difference between the pixel values of the at least one converted synthetic low-quality image and the pixel values of the at least one synthetic low-quality image.
  • Because the above loss function also includes the fifth loss term, the information reflected by the loss function is more comprehensive; therefore, using this loss function during training can produce an image restoration network with better image restoration performance.
  • In some implementations, the above-mentioned existing real low-quality image and the image to be restored being collected by the same type of device includes: the existing real low-quality image is acquired by a first device, and the image to be restored is acquired by a second device, where the device types of the first device and the second device are the same, the image acquisition parameters of the first device are the same as those of the second device, and the image acquisition parameters include at least one of focal length, exposure, and shutter time.
  • Because the existing real low-quality image and the image to be restored are acquired by devices of the same type using the same image acquisition parameters, the existing real low-quality image is closer to the image to be restored, so that the image restoration network trained in this application with reference to the existing real low-quality image has a better restoration effect when processing the image to be restored.
  • In a third aspect, an image restoration method is provided, which includes: obtaining an image to be restored; and using an image restoration network to perform restoration processing on the image to be restored to obtain a restored high-quality image, whose image sharpness is higher than that of the image to be restored.
  • The above-mentioned image restoration network is obtained by training on multiple training image pairs.
  • Each of the multiple training image pairs includes a real high-quality image and an approximate real low-quality image, where the approximate real low-quality image is obtained by processing the real high-quality image in that training image pair.
  • The difference between the image sharpness of the approximate real low-quality image and the image sharpness of the existing real low-quality image is within a preset range.
  • The image sharpness includes at least one of the degree of image blur, the image noise distribution, and the image resolution.
  • The existing real low-quality image and the image to be restored are collected by the same type of device.
  • The image restoration network used in the image restoration method of the third aspect may be an image restoration network trained by the training method of the first aspect.
  • In this application, the approximate real low-quality image contained in a training image pair is obtained by processing the real high-quality image so that it approximates a real low-quality image; that is to say, the approximate real low-quality image contained in the training image pair is relatively close to a real low-quality image. Therefore, training the image restoration network on multiple training image pairs as proposed in this application yields an image restoration network with a better restoration effect on real low-quality images, and the image restoration method of the present application thus achieves a better restoration effect when performing image restoration with the trained image restoration network.
  • In some implementations, the image sharpness includes at least one of the degree of image blur, the image noise distribution, and the image resolution.
  • In some implementations, the above-mentioned existing real low-quality image and the image to be restored being collected by the same type of device includes: the existing real low-quality image is acquired by a first device, and the image to be restored is acquired by a second device, where the device types of the first device and the second device are the same, the image acquisition parameters of the first device are the same as those of the second device, and the image acquisition parameters include at least one of focal length, exposure, and shutter time.
  • Because the existing real low-quality image and the image to be restored are acquired by devices of the same type using the same image acquisition parameters, the existing real low-quality image is closer to the image to be restored, so that the image restoration network trained in this application with reference to the existing real low-quality image has a better restoration effect when processing the image to be restored.
  • In a fourth aspect, an image restoration method is provided, which includes: acquiring an image to be restored; and using an image restoration network to perform restoration processing on the image to be restored to obtain a restored high-quality image, whose image sharpness is higher than that of the image to be restored.
  • The image restoration network in the image restoration method of the fourth aspect is obtained by training according to the training method of the second aspect.
  • Through joint training of the image generation network and the image restoration network, the approximate real low-quality images generated by the image generation network can be closer to real low-quality images, so that this application achieves better image restoration performance when performing restoration processing with the jointly trained image restoration network. An inference sketch follows.
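  • At inference time, the method of the third or fourth aspect reduces to a single forward pass, as in this sketch; the file path and the [0, 1] tensor preprocessing are illustrative choices.

      import torch
      import torchvision.transforms.functional as TF
      from PIL import Image

      @torch.no_grad()
      def restore_image(net, path="to_be_restored.png"):
          img = TF.to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
          restored = net(img).clamp(0.0, 1.0).squeeze(0)   # restored high-quality image
          return TF.to_pil_image(restored)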
  • In a fifth aspect, a training device for an image restoration network is provided, including modules for executing the method in the first aspect or the second aspect.
  • In a sixth aspect, an image restoration device is provided, including modules for executing the method in the third aspect or the fourth aspect.
  • In a seventh aspect, a training device for an image restoration network is provided, including: a memory for storing a program; and a processor for executing the program stored in the memory, where, when the program stored in the memory is executed, the processor is configured to execute the method in the first aspect or the second aspect.
  • In an eighth aspect, an image restoration device is provided, including: a memory for storing a program; and a processor for executing the program stored in the memory, where, when the program stored in the memory is executed, the processor is configured to execute the method in the third aspect or the fourth aspect.
  • In a ninth aspect, a computer device is provided, including the image restoration network training device of the fifth aspect.
  • the computer device may specifically be a server or a cloud device or the like.
  • In a tenth aspect, an electronic device is provided, including the image restoration apparatus of the sixth aspect described above.
  • The electronic device may specifically be a mobile terminal (for example, a smartphone), a tablet computer, a notebook computer, an augmented reality/virtual reality device, a vehicle-mounted terminal device, and so on.
  • In an eleventh aspect, a computer-readable storage medium is provided, which stores program code, where the program code includes instructions for executing the steps in any one of the methods of the first aspect, the second aspect, the third aspect, and the fourth aspect.
  • In a twelfth aspect, a computer program product containing instructions is provided, which, when run on a computer, causes the computer to execute any one of the methods of the first, second, third, and fourth aspects above.
  • In a thirteenth aspect, a chip is provided, including a processor and a data interface.
  • The processor reads instructions stored in a memory through the data interface and executes any one of the methods of the first, second, third, and fourth aspects above.
  • Optionally, the chip may further include a memory in which instructions are stored, and the processor is configured to execute the instructions stored in the memory.
  • When the instructions are executed, the processor is configured to execute any one of the methods of the first aspect, second aspect, third aspect, and fourth aspect above.
  • The above-mentioned chip may specifically be a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC).
  • It should be understood that the method in the first aspect may specifically refer to the method in the first aspect or in any one of the various implementations of the first aspect; the method in the second aspect may specifically refer to the method in the second aspect or in any one of the various implementations of the second aspect; and the method in the third aspect may specifically refer to the method in the third aspect or in any one of the various implementations of the third aspect.
  • FIG. 1 is a schematic structural diagram of a system architecture provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of target detection using the convolutional neural network model provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a chip hardware structure provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a training method of an image restoration network according to an embodiment of the present application.
  • Fig. 6 is a schematic diagram of an image restoration method according to an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a training method of an image restoration network according to an embodiment of the present application.
  • Figure 8 shows the process of processing a real high-quality image to obtain an approximate real low-quality image.
  • FIG. 9 is a schematic diagram of using an image generation network to process multiple approximate real low-quality images to obtain multiple synthetic low-quality images.
  • FIG. 10 is a schematic diagram of training an image restoration network according to training images in an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a training method of an image restoration network according to an embodiment of the present application.
  • Figure 12 is a schematic diagram of joint training of an image restoration network and an image generation network.
  • FIG. 13 is a schematic diagram of the process of determining the loss function of the first stage.
  • FIG. 14 is a schematic diagram of the process of determining the loss function of the second stage.
  • FIG. 15 is a schematic diagram of the process of determining the loss function of the first stage.
  • FIG. 16 is a schematic diagram of the process of determining the loss function of the first stage.
  • FIG. 17 is a schematic diagram of the process of determining the loss function of the second stage.
  • FIG. 18 is a schematic flowchart of an image restoration method according to an embodiment of the present application.
  • FIG. 19 is a schematic block diagram of a training device for an image restoration network according to an embodiment of the present application.
  • FIG. 20 is a schematic diagram of the hardware structure of an image restoration network training device according to an embodiment of the present application.
  • FIG. 21 is a schematic block diagram of an image restoration device according to an embodiment of the present application.
  • FIG. 22 is a schematic diagram of the hardware structure of an image restoration device according to an embodiment of the present application.
  • the solution of the present application can be applied to areas that require image processing (for example, image classification, image recognition) in computer vision fields such as assisted driving, autonomous driving, safe cities, and smart terminals.
  • image restoration can be performed on low-quality images to obtain high-quality images, and then image classification or image recognition can be performed on high-quality images.
  • For example, the quality of images captured by surveillance cameras is generally low, which affects the accuracy with which people or recognition algorithms identify targets and judge events. Therefore, the resolution and clarity of these images need to be improved, that is, the images need to be restored, so that subsequent accurate judgments can be made based on the restored images.
  • a neural network (model) can be used for image restoration.
  • a neural network can be composed of neural units.
  • A neural unit can refer to an arithmetic unit that takes $x_s$ and an intercept of 1 as inputs, and the output of the arithmetic unit can be as shown in formula (1): $h_{W,b}(x) = f(W^{T}x) = f\left(\sum_{s=1}^{n} W_s x_s + b\right)$, where $s = 1, 2, \ldots, n$, $n$ is a natural number greater than 1, $W_s$ is the weight of $x_s$, and $b$ is the bias of the neural unit.
  • f is the activation function of the neural unit, which is used to perform non-linear transformation of the features in the neural network, thereby converting the input signal in the neural unit into an output signal.
  • the output signal of the activation function can be used as the input of the next convolutional layer, and the activation function can be a sigmoid function.
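  • Formula (1) amounts to a few lines of NumPy; the sigmoid below is the activation function named above.

      import numpy as np

      def neural_unit(x: np.ndarray, W: np.ndarray, b: float) -> float:
          z = float(np.dot(W, x)) + b        # weighted sum of the inputs plus the bias
          return 1.0 / (1.0 + np.exp(-z))    # sigmoid activation f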
  • a neural network is a network formed by connecting multiple above-mentioned single neural units together, that is, the output of one neural unit can be the input of another neural unit.
  • the input of each neural unit can be connected with the local receptive field of the previous layer to extract the characteristics of the local receptive field.
  • the local receptive field can be a region composed of several neural units.
  • a deep neural network can also be called a multi-layer neural network.
  • DNN can be understood as a neural network with multiple hidden layers.
  • According to the positions of the different layers, the layers of the DNN can be divided into three categories: input layer, hidden layers, and output layer. Generally speaking, the first layer is the input layer, the last layer is the output layer, and all layers in between are hidden layers.
  • The layers are fully connected, that is to say, any neuron in the i-th layer must be connected to any neuron in the (i+1)-th layer.
  • Although the DNN looks complicated, the work of each layer is not complicated. Simply put, each layer computes the linear relationship expression $\vec{y} = \alpha(W\vec{x} + \vec{b})$, where $\vec{x}$ is the input vector, $\vec{y}$ is the output vector, $\vec{b}$ is the offset vector, $W$ is the weight matrix (also called the coefficients), and $\alpha()$ is the activation function.
  • Each layer simply performs this operation on the input vector $\vec{x}$ to obtain the output vector $\vec{y}$. Because a DNN has many layers, the number of coefficients $W$ and offset vectors $\vec{b}$ is also large.
  • These parameters are defined in the DNN as follows, taking the coefficient $W$ as an example: suppose that in a three-layer DNN, the linear coefficient from the fourth neuron in the second layer to the second neuron in the third layer is defined as $W_{24}^{3}$, where the superscript 3 represents the layer in which the coefficient $W$ is located, and the subscripts correspond to the output third-layer index 2 and the input second-layer index 4.
  • In summary, the coefficient from the k-th neuron in the (L-1)-th layer to the j-th neuron in the L-th layer is defined as $W_{jk}^{L}$.
  • Convolutional neural network (convolutional neuron network, CNN) is a deep neural network with a convolutional structure.
  • the convolutional neural network contains a feature extractor composed of a convolutional layer and a sub-sampling layer.
  • the feature extractor can be regarded as a filter.
  • the convolutional layer refers to the neuron layer that performs convolution processing on the input signal in the convolutional neural network.
  • In the convolutional layer of a convolutional neural network, a neuron can be connected to only part of the neurons in the adjacent layers.
  • a convolutional layer usually contains several feature planes, and each feature plane can be composed of some rectangularly arranged neural units. Neural units in the same feature plane share weights, and the shared weights here are the convolution kernels.
  • Sharing weight can be understood as the way of extracting image information has nothing to do with location.
  • the convolution kernel can be initialized in the form of a matrix of random size, and the convolution kernel can obtain reasonable weights through learning during the training process of the convolutional neural network.
  • the direct benefit of sharing weights is to reduce the connections between the layers of the convolutional neural network, and at the same time reduce the risk of overfitting.
  • the residual network is a deep convolutional network proposed in 2015. Compared with the traditional convolutional neural network, the residual network is easier to optimize and can increase the accuracy by adding a considerable depth.
  • the core of the residual network is to solve the side effect (degradation problem) caused by increasing the depth, so that the network performance can be improved by simply increasing the network depth.
  • the residual network generally contains many sub-modules with the same structure.
  • The name of a residual network (ResNet) is usually followed by a number indicating how many times the sub-module is repeated; for example, ResNet50 means that there are 50 sub-modules in the residual network.
  • the classifier is generally composed of a fully connected layer and a softmax function (which can be called a normalized exponential function), and can output different types of probabilities according to the input.
  • Taking the loss function, an important equation, as an example: the higher the output value (loss) of the loss function, the greater the difference, so training a deep neural network becomes a process of reducing this loss as much as possible.
  • The neural network can use the backpropagation (BP) algorithm to modify the parameter values in the initial neural network model during training, so that the reconstruction error loss of the neural network model becomes smaller and smaller. Specifically, forward-passing the input signal to the output produces an error loss, and the parameters in the initial neural network model are updated by back-propagating the error loss information, so that the error loss converges.
  • the backpropagation algorithm is a backpropagation motion dominated by error loss, and aims to obtain the optimal parameters of the neural network model, such as the weight matrix.
  • FIG. 1 is a schematic diagram of the system architecture of an embodiment of the present application.
  • the system architecture 100 includes an execution device 110, a training device 120, a database 130, a client device 140, a data storage system 150, and a data collection system 160.
  • the execution device 110 includes a calculation module 111, an I/O interface 112, a preprocessing module 113, and a preprocessing module 114.
  • the calculation module 111 may include the target model/rule 101, and the preprocessing module 113 and the preprocessing module 114 are optional.
  • the data collection device 160 is used to collect training data.
  • the training data may include a real high-quality image and an approximate real low-quality image corresponding to the real high-quality image.
  • the data collection device 160 stores the training data in the database 130, and the training device 120 trains to obtain the target model/rule 101 based on the training data maintained in the database 130.
  • During training, the training device 120 performs image restoration on the input approximate real low-quality image to obtain a restored high-quality image; the restored high-quality image is compared with the real high-quality image, and the image restoration network is updated according to the difference between the restored high-quality image and the real high-quality image until the image restoration network meets the preset requirements, thereby completing the training of the target model/rule 101.
  • The target model/rule 101 here is equivalent to the image restoration network.
  • the above-mentioned target model/rule 101 can be used to implement the image restoration method of the embodiment of the present application, that is, the image to be restored (the image to be restored may be an input low-quality image that requires image restoration) is input into the target model/rule 101, Then, the restored high-quality image can be obtained after image restoration processing of the image to be restored.
  • The target model/rule 101 in the embodiment of the present application may specifically be a neural network. It should be noted that, in actual applications, the training data maintained in the database 130 may not all come from the data collection device 160 and may also be received from other devices.
  • In addition, the training device 120 does not necessarily train the target model/rule 101 entirely based on the training data maintained in the database 130; it may also obtain training data from the cloud or elsewhere for model training. The above description should not be taken as a limitation on the embodiments of this application.
  • The target model/rule 101 trained by the training device 120 can be applied to different systems or devices, such as the execution device 110 shown in FIG. 1. The execution device 110 can be a terminal, such as a mobile phone terminal, a tablet computer, a notebook computer, an augmented reality (AR)/virtual reality (VR) device, or a vehicle-mounted terminal, and can also be a server or the cloud.
  • the execution device 110 is configured with an input/output (input/output, I/O) interface 112 for data interaction with external devices.
  • the user can input data to the I/O interface 112 through the client device 140.
  • the input data in this embodiment of the application may include: the image to be restored input by the client device.
  • the client device 140 here may specifically be a terminal device.
  • the preprocessing module 113 and the preprocessing module 114 are used to perform preprocessing according to the input data (such as the image to be restored) received by the I/O interface 112.
  • In FIG. 1, the preprocessing module 113 and the preprocessing module 114 may be omitted, or there may be only one preprocessing module.
  • When the preprocessing modules are omitted, the calculation module 111 can be used directly to process the input data.
  • When the execution device 110 preprocesses the input data, or when the calculation module 111 of the execution device 110 performs calculation and other related processing, the execution device 110 may call data, code, and the like in the data storage system 150 for the corresponding processing.
  • The data, instructions, and the like obtained by the corresponding processing may also be stored in the data storage system 150.
  • Finally, the I/O interface 112 presents the processing result (specifically, the high-quality image obtained by image restoration), such as the restored high-quality image obtained by the target model/rule 101 performing restoration processing on the image to be restored, to the client device 140, so as to provide it to the user.
  • It should be noted that the high-quality image obtained by image restoration through the target model/rule 101 in the calculation module 111 may be processed by the preprocessing module 113 (processing by the preprocessing module 114 may also be added), for example image rendering processing, after which the processing result is sent to the I/O interface, and the I/O interface then sends the processing result to the client device 140 for display.
  • If there is no preprocessing module 113 and no preprocessing module 114, the calculation module 111 may also transmit the high-quality image obtained by restoration processing directly to the I/O interface, which then sends the processing result to the client device 140 for display.
  • It is worth noting that the training device 120 can generate corresponding target models/rules 101 based on different training data for different targets or tasks (for example, the training device can train on real high-quality images and approximate real low-quality images from different scenarios), and the corresponding target model/rule 101 can then be used to achieve the above targets or complete the above tasks, so as to provide the user with the desired result.
  • the user can manually set input data (the input data may be an image to be restored), and the manual setting can be operated through an interface provided by the I/O interface 112.
  • In another case, the client device 140 can automatically send input data to the I/O interface 112. If automatic sending of the input data by the client device 140 requires the user's authorization, the user can set the corresponding permission in the client device 140.
  • the user can view the result output by the execution device 110 on the client device 140, and the specific presentation form may be a specific manner such as display, sound, and action.
  • The client device 140 can also be used as a data collection terminal to collect the input data of the I/O interface 112 and the output result of the I/O interface 112 as new sample data, as shown in the figure, and store them in the database 130.
  • Of course, it is also possible not to collect through the client device 140; instead, the I/O interface 112 directly stores the input data of the I/O interface 112 and the output result of the I/O interface 112, as shown in the figure, in the database 130 as new sample data.
  • It is worth noting that FIG. 1 is only a schematic diagram of a system architecture provided by an embodiment of the present application, and the positional relationships among the devices, components, modules, and the like shown in the figure do not constitute any limitation.
  • For example, in FIG. 1 the data storage system 150 is external memory relative to the execution device 110; in other cases, the data storage system 150 may also be placed in the execution device 110.
  • the target model/rule 101 obtained by training according to the training device 120 may be the neural network in the embodiment of the present application.
  • The neural network provided in the embodiment of the present application may be a CNN, a deep convolutional neural network (DCNN), and so on.
  • Since CNN is a very common neural network, the structure of CNN will be introduced in detail below in conjunction with Figure 2.
  • As introduced in the basic concepts above, a convolutional neural network is a deep neural network with a convolutional structure and is a deep learning architecture; a deep learning architecture refers to multiple levels of learning at different abstraction levels through machine learning algorithms.
  • As a deep learning architecture, CNN is a feed-forward artificial neural network in which each neuron can respond to the image input into it.
  • As shown in FIG. 2, a convolutional neural network (CNN) 200 may include an input layer 210, a convolutional layer/pooling layer 220 (the pooling layer is optional), and a fully connected layer 230.
  • The convolutional layer/pooling layer 220 shown in FIG. 2 may include layers 221-226 as examples. In one implementation, layer 221 is a convolutional layer, layer 222 is a pooling layer, layer 223 is a convolutional layer, layer 224 is a pooling layer, layer 225 is a convolutional layer, and layer 226 is a pooling layer; in another implementation, layers 221 and 222 are convolutional layers, layer 223 is a pooling layer, layers 224 and 225 are convolutional layers, and layer 226 is a pooling layer. That is, the output of a convolutional layer can be used as the input of a subsequent pooling layer, or as the input of another convolutional layer to continue the convolution operation.
  • The convolutional layer 221 can include many convolution operators. The convolution operator is also called a kernel; its role in image processing is equivalent to a filter that extracts specific information from the input image matrix.
  • The convolution operator can essentially be a weight matrix, which is usually predefined. During the convolution operation on an image, the weight matrix is usually moved along the horizontal direction of the input image one pixel at a time (or two pixels at a time, depending on the value of the stride) to extract a specific feature from the image.
  • The size of the weight matrix should be related to the size of the image. It should be noted that the depth dimension of the weight matrix is the same as the depth dimension of the input image, and during the convolution operation the weight matrix extends to the entire depth of the input image. Therefore, convolving with a single weight matrix produces a convolution output with a single depth dimension, but in most cases a single weight matrix is not used; instead, multiple weight matrices of the same size (rows × columns), that is, multiple homogeneous matrices, are applied.
  • The outputs of the weight matrices are stacked to form the depth dimension of the convolved image, where the dimension can be understood as being determined by the "multiple" mentioned above.
  • Different weight matrices can be used to extract different features in the image: for example, one weight matrix is used to extract edge information of the image, another weight matrix is used to extract a specific color of the image, and yet another weight matrix is used to eliminate unwanted noise in the image.
  • Because the multiple weight matrices have the same size (rows × columns), the convolution feature maps extracted by them also have the same size; the multiple extracted convolution feature maps of the same size are then merged to form the output of the convolution operation, as sketched below.
  • The weight values in these weight matrices need to be obtained through extensive training in practical applications. Each weight matrix formed by the trained weight values can be used to extract information from the input image, so that the convolutional neural network 200 can make correct predictions.
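  • The sliding-window behaviour described above can be made concrete with a naive NumPy sketch; the shapes and the stride value are illustrative, and real frameworks use far faster implementations.

      import numpy as np

      def conv2d(image, kernels, stride=1):
          # image: (H, W, C); kernels: (K, kh, kw, C); output: (H', W', K).
          K, kh, kw, _ = kernels.shape
          H, W, _ = image.shape
          out_h, out_w = (H - kh) // stride + 1, (W - kw) // stride + 1
          out = np.zeros((out_h, out_w, K))
          for k in range(K):                  # one feature map per weight matrix
              for i in range(out_h):
                  for j in range(out_w):
                      patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw, :]
                      out[i, j, k] = np.sum(patch * kernels[k])
          return out                          # stacked maps form the depth dimension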
  • When the convolutional neural network 200 has multiple convolutional layers, the initial convolutional layers (such as 221) often extract more general features, which can also be called low-level features; as the depth of the convolutional neural network increases, the features extracted by the later convolutional layers (for example, 226) become more and more complex, such as high-level semantic features, and features with higher-level semantics are more suitable for the problem to be solved.
  • A pooling layer can follow a convolutional layer: it can be one convolutional layer followed by one pooling layer, or multiple convolutional layers followed by one or more pooling layers.
  • During image processing, the sole purpose of the pooling layer is to reduce the spatial size of the image.
  • the pooling layer may include an average pooling operator and/or a maximum pooling operator for sampling the input image to obtain an image with a smaller size.
  • the average pooling operator can calculate the pixel values in the image within a specific range to generate an average value as the result of the average pooling.
  • the maximum pooling operator can take the pixel with the largest value within a specific range as the result of the maximum pooling.
  • the operators in the pooling layer should also be related to the image size.
  • the size of the image output after processing by the pooling layer can be smaller than the size of the image of the input pooling layer.
  • Each pixel in the image output by the pooling layer represents the average or maximum value of the corresponding sub-region of the image input to the pooling layer.
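  • The pooling operators can likewise be sketched in a few lines; a minimal NumPy illustration in which the 2×2 window size is an arbitrary assumption.

```python
import numpy as np

def pool2d(image, size=2, mode="max"):
    """Each output pixel is the maximum or average of the corresponding
    size x size sub-region of the input, so the spatial size shrinks."""
    h, w = image.shape
    oh, ow = h // size, w // size
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            region = image[i * size:(i + 1) * size, j * size:(j + 1) * size]
            out[i, j] = region.max() if mode == "max" else region.mean()
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(img, mode="max"))  # 2x2 map of sub-region maxima
print(pool2d(img, mode="avg"))  # 2x2 map of sub-region averages
```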
  • After processing by the convolutional layer/pooling layer 220, the convolutional neural network 200 is not yet able to output the required output information, because, as mentioned above, the convolutional layer/pooling layer 220 only extracts features and reduces the parameters brought by the input image. In order to generate the final output information (the required class information or other related information), the convolutional neural network 200 needs to use the fully connected layer 230 to generate one output or a group of outputs of the required number of classes. Therefore, the fully connected layer 230 may include multiple hidden layers (231, 232 to 23n as shown in FIG. 2) and an output layer 240. The parameters contained in the multiple hidden layers can be obtained by pre-training based on relevant training data of a specific task type; for example, the task type can include image recognition, image classification, image super-resolution reconstruction, and so on.
  • After the multiple hidden layers in the fully connected layer 230, the final layer of the entire convolutional neural network 200 is the output layer 240.
  • The output layer 240 has a loss function similar to categorical cross-entropy, which is specifically used to calculate the prediction error.
  • the convolutional neural network 200 shown in FIG. 2 is only used as an example of a convolutional neural network. In specific applications, the convolutional neural network may also exist in the form of other network models.
  • The convolutional neural network (CNN) 200 shown in FIG. 2 may be used to execute the image restoration method of the embodiment of the present application.
  • The image to be restored passes through the input layer 210 and the convolutional layer/pooling layer 220; after processing by the fully connected layer 230, a high-quality image can be recovered.
  • FIG. 3 is a hardware structure of a chip provided by an embodiment of the application, and the chip includes a neural network processor 50.
  • the chip can be set in the execution device 110 as shown in FIG. 1 to complete the calculation work of the calculation module 111.
  • the chip can also be set in the training device 120 as shown in FIG. 1 to complete the training work of the training device 120 and output the target model/rule 101.
  • the algorithms of each layer in the convolutional neural network as shown in Fig. 2 can be implemented in the chip as shown in Fig. 3.
  • a neural network processor (neural-network processing unit, NPU) 50 is mounted on a main central processing unit (central processing unit, CPU) (host CPU) as a coprocessor, and the main CPU allocates tasks.
  • the core part of the NPU is the arithmetic circuit 503.
  • the controller 504 controls the arithmetic circuit 503 to extract data from the memory (weight memory or input memory) and perform calculations.
  • the arithmetic circuit 503 includes multiple processing units (process engines, PE). In some implementations, the arithmetic circuit 503 is a two-dimensional systolic array. The arithmetic circuit 503 may also be a one-dimensional systolic array or other electronic circuit capable of performing mathematical operations such as multiplication and addition. In some implementations, the arithmetic circuit 503 is a general-purpose matrix processor.
  • the arithmetic circuit 503 fetches the data corresponding to matrix B from the weight memory 502 and caches it on each PE in the arithmetic circuit 503.
  • The arithmetic circuit 503 fetches the data of matrix A from the input memory 501, performs matrix operations on it with matrix B, and stores the partial result or final result of the obtained matrix in an accumulator 508.
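  • The dataflow just described (weights cached per PE, input data streamed in, partial sums collected in the accumulator) can be mimicked in software; the following is a schematic NumPy sketch of tiled accumulation, not a model of the actual circuit or its timing.

```python
import numpy as np

def matmul_accumulate(A, B, tile=4):
    """Multiply A (m x k) by B (k x n), accumulating partial results one
    tile of k at a time, as the accumulator 508 collects partial sums."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    acc = np.zeros((m, n))           # plays the role of the accumulator
    for t in range(0, k, tile):      # stream in one tile of A and B at a time
        acc += A[:, t:t + tile] @ B[t:t + tile, :]
    return acc

A = np.random.rand(8, 16)
B = np.random.rand(16, 8)
assert np.allclose(matmul_accumulate(A, B), A @ B)
```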
  • the vector calculation unit 507 can perform further processing on the output of the arithmetic circuit 503, such as vector multiplication, vector addition, exponential operation, logarithmic operation, size comparison, and so on.
  • The vector calculation unit 507 can be used for network calculations in the non-convolutional/non-FC layers of the neural network, such as pooling, batch normalization, local response normalization, and so on.
  • the vector calculation unit 507 can store the processed output vector to the unified buffer 506.
  • the vector calculation unit 507 may apply a nonlinear function to the output of the arithmetic circuit 503, such as a vector of accumulated values, to generate the activation value.
  • the vector calculation unit 507 generates a normalized value, a combined value, or both.
  • the processed output vector can be used as an activation input to the arithmetic circuit 503, for example for use in a subsequent layer in a neural network.
  • the unified memory 506 is used to store input data and output data.
  • The direct memory access controller (DMAC) 505 transfers input data in the external memory to the input memory 501 and/or the unified memory 506, stores weight data in the external memory into the weight memory 502, and stores data in the unified memory 506 into the external memory.
  • the bus interface unit (BIU) 510 is used to implement interaction between the main CPU, the DMAC, and the instruction fetch memory 509 through the bus.
  • An instruction fetch buffer 509 connected to the controller 504 is used to store instructions used by the controller 504;
  • The controller 504 is used to call the instructions cached in the instruction fetch memory 509 to control the working process of the computing accelerator.
  • the unified memory 506, the input memory 501, the weight memory 502, and the instruction fetch memory 509 are all on-chip memories.
  • the external memory is a memory external to the NPU.
  • The external memory can be a double data rate synchronous dynamic random access memory (DDR SDRAM), a high bandwidth memory (HBM), or another readable and writable memory.
  • each layer in the convolutional neural network shown in FIG. 2 may be executed by the arithmetic circuit 503 or the vector calculation unit 507.
  • an embodiment of the present application provides a system architecture 300.
  • the system architecture includes a local device 301, a local device 302, an execution device 210 and a data storage system 250, where the local device 301 and the local device 302 are connected to the execution device 210 through a communication network.
  • the execution device 210 may be implemented by one or more servers.
  • the execution device 210 can be used in conjunction with other computing devices, such as data storage, routers, load balancers, and other devices.
  • the execution device 210 may be arranged on one physical site or distributed on multiple physical sites.
  • the execution device 210 can use the data in the data storage system 250 or call the program code in the data storage system 250 to implement the image restoration method of the embodiment of the present application.
  • Each local device can represent any computing device, such as personal computers, computer workstations, smart phones, tablets, smart cameras, smart cars or other types of cellular phones, media consumption devices, wearable devices, set-top boxes, game consoles, etc.
  • the local device of each user can interact with the execution device 210 through a communication network of any communication mechanism/communication standard.
  • the communication network can be a wide area network, a local area network, a point-to-point connection, etc., or any combination thereof.
  • In one implementation, the local device 301 and the local device 302 obtain the network parameters of the image restoration network from the execution device 210, deploy the image restoration network on the local device 301 and the local device 302, and use the image restoration network to perform image restoration.
  • In another implementation, the image restoration network can be directly deployed on the execution device 210; the execution device 210 obtains the image to be restored from the local device 301 and the local device 302 (the local device 301 and the local device 302 can upload the image to be restored to the execution device 210), performs image restoration on the image to be restored according to the image restoration network, and sends the high-quality image obtained by the image restoration to the local device 301 and the local device 302.
  • the above-mentioned execution device 210 may also be referred to as a cloud device. At this time, the execution device 210 is generally deployed in the cloud.
  • Fig. 5 is a schematic diagram of a training method of an image restoration network according to an embodiment of the present application.
  • the image restoration network can be trained based on the multiple training image pairs to obtain a trained image restoration network.
  • the trained image restoration network can be used for image restoration, thereby transforming the input low-quality image into a high-quality image, and improving the display effect of the image.
  • the image restoration network may also be referred to as an image restoration model, and the image restoration network may be a neural network.
  • Fig. 6 is a schematic diagram of an image restoration method according to an embodiment of the present application.
  • the image restoration network can perform restoration processing on the image to be restored (generally a low-quality image) to obtain a restored high-quality image.
  • the image restoration network in FIG. 6 may be obtained by training using the training method of the image restoration network shown in FIG. 5.
  • FIG. 7 is a schematic flowchart of a training method of an image restoration network according to an embodiment of the present application.
  • the method shown in FIG. 7 can be executed by the training device of the image restoration network in the embodiment of the present application.
  • the method shown in FIG. 7 includes steps 1001 and 1002. Steps 1001 and 1002 are respectively described in detail below.
  • Each of the above-mentioned multiple training image pairs includes a real high-quality image and an approximate real low-quality image. The approximate real low-quality image is obtained by processing the real high-quality image in the training image pair, and the degree of difference between the image sharpness of the approximate real low-quality image in each training image pair and the image sharpness of the existing real low-quality image is within a preset range.
  • The above-mentioned image sharpness includes at least one of the degree of image blur, the distribution of image noise, and the image resolution.
  • The approximate real low-quality image in each of the above-mentioned training image pairs is obtained by performing adjustment processing on the real high-quality image in that training image pair; the adjustment processing is used to adjust the image sharpness of the real high-quality image in each training image pair so that the image sharpness of the approximate real low-quality image in each training image pair is as close as possible to the image sharpness of the existing real low-quality image.
  • Here, the image sharpness of the approximate real low-quality image in each training image pair being as close as possible to the image sharpness of the existing real low-quality image specifically means that the difference between the two is within a preset range, and the preset range can be flexibly set according to actual needs.
  • The first method is to obtain an approximate real low-quality image through processing such as blurring, noise adding, and down-sampling.
  • Optionally, the above-mentioned adjustment processing on the real high-quality image in each training image pair can directly be blurring, noise adding, down-sampling, and other processing performed on the real high-quality image in each training image pair.
  • Optionally, the approximate real low-quality image in each training image pair is obtained by performing synthesis processing and realization processing on the real high-quality image in the training image pair.
  • Figure 8 shows the process of processing a real high-quality image to obtain an approximate real low-quality image.
  • Specifically, the real high-quality image in the image pair can first undergo synthesis processing to obtain a synthesized low-quality image, and the synthesized low-quality image can then undergo realization processing, so as to obtain the approximate real low-quality image in the image pair.
  • the above-mentioned synthesis processing may include at least one of blurring processing, noise adding processing, and down-sampling processing.
  • The above-mentioned realization processing is used to adjust the sharpness of the image so that the adjusted image sharpness is as close as possible to the image sharpness of the existing real low-quality images.
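  • The synthesis processing alone can be sketched as follows; a minimal Python/OpenCV illustration in which the blur kernel size, noise level, and down-sampling factor are arbitrary assumptions, and the learned realization processing is deliberately omitted.

```python
import cv2
import numpy as np

def synthesize_low_quality(hq, blur_ksize=5, noise_sigma=5.0, scale=0.5):
    """Degrade a real high-quality image into a synthesized low-quality one:
    Gaussian blurring, additive Gaussian noise, then bicubic down-sampling."""
    img = cv2.GaussianBlur(hq, (blur_ksize, blur_ksize), 0)
    noise = np.random.normal(0.0, noise_sigma, img.shape)
    img = np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)
    return cv2.resize(img, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_CUBIC)

# A random array stands in for a real high-quality frame here.
hq = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)
lq = synthesize_low_quality(hq)
print(lq.shape)  # (64, 64, 3)
```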
  • the above-mentioned existing real low-quality images and the images to be restored that are processed by the trained image restoration network during subsequent image restoration are acquired by using the same device. That is to say, the above-mentioned existing real low-quality image and the image to be restored are collected by the same device, and the image to be restored is an image processed by the image restoration network after the training is completed.
  • The second method is to use an image generation network to obtain approximate real low-quality images.
  • the approximate real low-quality image in each training image pair is obtained by processing the real high-quality image in each training image pair by using a pre-trained image generation network.
  • the above-mentioned image generation network can be used to convert real high-quality images into approximate real low-quality images.
  • The image generation network can be obtained by training based on multiple real high-quality images and multiple real low-quality images. Specifically, during the training process, multiple real high-quality images can be input into the image generation network, so that the difference between the image sharpness of the output image and the image sharpness of any one of the multiple real low-quality images is as small as possible.
  • The training ends when the trained image generation network meets the preset requirements. The conditions for ending the training can be flexibly set according to the actual situation, for example, ending the training when the image processing performance of the trained image generation network meets the preset requirements.
  • During the training of the image generation network, it is also possible to first perform synthesis processing on multiple real high-quality images to obtain multiple synthetic low-quality images, and then train the image generation network using the multiple synthetic low-quality images and the multiple real low-quality images.
  • Optionally, the difference between the image sharpness of the approximate real low-quality image in each of the above-mentioned training image pairs and the image sharpness of the existing real low-quality image being within a preset range includes: the distance between the feature vector of the approximate real low-quality image in each training image pair and the feature vector of the existing real low-quality image is less than a preset distance.
  • the above-mentioned preset distance can be flexibly set according to the actual situation.
  • The feature vector of the approximate real low-quality image in each training image pair can be obtained by a discriminant neural network (also called a discriminator) performing feature extraction on the approximate real low-quality image in each training image pair, and the feature vector of the above-mentioned existing real low-quality image may also be obtained by the discriminant neural network performing feature extraction on the existing real low-quality image.
  • When the discriminant neural network extracts the two feature vectors, the same convolution parameters may be used, and the specific values of the convolution parameters may be obtained by training the discriminant neural network.
  • Optionally, the foregoing acquiring of multiple training image pairs includes: determining the multiple training image pairs from an initial training image set, wherein, for each of the multiple training image pairs, the difference between the image sharpness of the approximate real low-quality image and the image sharpness of the existing real low-quality image is less than a preset threshold.
  • The above preset threshold can be flexibly set according to the actual situation: if the requirement on the sharpness difference during training is strict, a smaller preset threshold can be set; if the requirement is less strict, a larger preset threshold can be set.
  • For example, if the initial training image set includes 100 training image pairs, 40 training image pairs can be selected from them such that, for each of the 40 pairs, the difference between the image sharpness of the approximate real low-quality image and the image sharpness of the existing real low-quality images is less than the preset threshold.
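  • This selection by feature-vector distance can be sketched as follows; a schematic PyTorch illustration in which the small discriminant network and the threshold value are assumptions standing in for the trained discriminator of the embodiment.

```python
import torch
import torch.nn as nn

# Hypothetical discriminant network used here only for feature extraction;
# its convolution parameters would come from training the discriminator.
features = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def select_pairs(pairs, real_lq, threshold=1.0):
    """Keep the training pairs whose approximate real low-quality image lies
    within `threshold` of the existing real low-quality image in feature
    space (Euclidean distance between the two feature vectors)."""
    kept = []
    with torch.no_grad():
        ref = features(real_lq.unsqueeze(0))
        for hq, approx_lq in pairs:
            vec = features(approx_lq.unsqueeze(0))
            if torch.dist(ref, vec).item() < threshold:
                kept.append((hq, approx_lq))
    return kept

# Random tensors stand in for (real HQ, approximate real LQ) pairs.
pairs = [(torch.rand(3, 64, 64), torch.rand(3, 64, 64)) for _ in range(10)]
selected = select_pairs(pairs, torch.rand(3, 64, 64))
```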
  • In this way, the training effect can be improved, so that the image restoration network obtained after training has better image restoration performance.
  • FIG. 9 is a schematic diagram of processing multiple approximate real low-quality images using an image generation network to obtain multiple composite low-quality images.
  • the multiple approximate real low-quality images in the multiple training image pairs may be obtained by processing multiple synthetic low-quality images through an image generation network.
  • the image generation network shown in FIG. 9 includes a first generation network and a second generation network. Using the first generation network, multiple synthetic low-quality images can be converted into multiple near-real low-quality images.
  • The image generation network shown in Figure 9 can specifically be a cycle-consistent adversarial network (CycleGAN), an unsupervised image-to-image translation network (UNIT), a unified generative adversarial network for multi-domain image-to-image translation (StarGAN), a network for diverse image-to-image translation (DRIT), and so on.
  • the training of the image restoration network based on multiple training images proposed in this application can obtain an image restoration network with better image restoration effects for real low-quality images.
  • FIG. 10 is a schematic diagram of training an image restoration network according to training images in an embodiment of the present application.
  • the method shown in FIG. 10 is equivalent to step 1002 in the method shown in FIG. 7.
  • the training process shown in FIG. 10 includes steps 2001 to 2007, which are described in detail below.
  • Step 2001 indicates that a plurality of training images are used to train the image restoration network, and the training process of the image restoration network starts.
  • the network parameters of the image recovery network can be randomly initialized, so that the network parameters of the image recovery network take some values randomly.
  • Before step 2002, it is also possible to first use synthesized low-quality images and real high-quality images to perform preliminary training on the image restoration network, use the network parameter values obtained from the preliminary training as the initial values of the network parameters of the image restoration network, and then perform formal training in the subsequent steps.
  • At least one training image pair can be used for each training iteration; that is to say, during the training process, the approximate real low-quality image of one training image pair, or the approximate real low-quality images of multiple training image pairs, can be input to the image restoration network each time. This application does not limit the number of images input to the image restoration network for processing each time.
  • the function value of the loss function can be determined according to the difference between the restored high-quality image and the real high-quality image.
  • The function value of the loss function is positively correlated with the difference between the restored high-quality image and the real high-quality image: the greater the difference between the at least one restored high-quality image and the real high-quality image in the at least one training image pair, the greater the function value of the loss function; the smaller the difference, the smaller the function value of the loss function.
  • When multiple restored high-quality images are obtained, the corresponding loss function values can be determined according to the differences between the multiple restored high-quality images and their corresponding real high-quality images, and the corresponding loss function values can then be summed or averaged to obtain the final function value of the loss function in step 2004.
  • the function value of the aforementioned loss function includes a mean squared error (MSE) between the at least one restored high-quality image and the real high-quality image in the at least one training image pair.
  • The mean square error between two images refers to the mean of the squared differences between the pixel values at corresponding positions of the two images.
  • the function value of the above loss function includes the mean square error loss (MSE loss) between at least one restored high-quality image and the real high-quality image in the at least one training image pair.
  • Optionally, the aforementioned loss function is an average of at least one first loss function value, where each first loss function value is the mean square error between one restored high-quality image in the at least one restored high-quality image and the corresponding real high-quality image.
  • For example, when the at least one restored high-quality image only includes the restored high-quality image A, and the real high-quality image in the at least one training image pair only includes the real high-quality image A', the function value of the above loss function may be the mean of the squared differences between the pixel values of the restored high-quality image A and the pixel values of the real high-quality image A'.
  • When the at least one restored high-quality image includes the restored high-quality image A and the restored high-quality image B, and the real high-quality images in the at least one training image pair include the real high-quality image A' and the real high-quality image B', the function value of the above loss function can be the sum or average of the square of the first difference and the square of the second difference, where the first difference is the difference between the pixel values of the restored high-quality image A and the pixel values of the real high-quality image A', and the second difference is the difference between the pixel values of the restored high-quality image B and the pixel values of the real high-quality image B'.
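  • The two-pair example above can be written out directly; a minimal NumPy sketch in which the images are random arrays standing in for actual pixel data.

```python
import numpy as np

def mse(x, y):
    """Mean of the squared pixel-value differences at corresponding positions."""
    return np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)

# Random stand-ins for restored images A, B and real images A', B'.
restored_A, real_A = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
restored_B, real_B = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)

loss_A = mse(restored_A, real_A)   # first loss function value
loss_B = mse(restored_B, real_B)   # second loss function value
loss = (loss_A + loss_B) / 2       # average over the two training pairs
```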
  • Optionally, the aforementioned loss function further includes the perceptual loss and the adversarial loss of the at least one restored high-quality image relative to the real high-quality image in the at least one training image pair.
  • The perceptual loss between two images may refer to the mean of the squared two-norm of the differences at corresponding positions between the feature maps of the two images.
  • The adversarial loss is generally used to determine whether one image distribution is similar to another image distribution, and it can be described by a discriminant neural network: when the image distributions of two images are similar, inputting the two images into the discriminant neural network yields the output that the image distributions of the two images are the same; when the image distributions of the two images differ greatly, inputting the two images into the discriminant neural network yields the output that the image distributions of the two images are not the same.
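  • A minimal PyTorch sketch of these two loss terms follows; the tiny feature extractor and discriminant network are arbitrary placeholders for the trained networks, and the binary cross-entropy form of the adversarial loss is one common choice among several.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder feature extractor and discriminant network (assumptions).
feat = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(16, 16, 3, padding=1))
disc = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

def perceptual_loss(restored, real):
    """Mean squared difference at corresponding positions of the feature maps."""
    return F.mse_loss(feat(restored), feat(real))

def adversarial_loss(restored):
    """Penalize restored images that the discriminator scores as not real."""
    logits = disc(restored)
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

restored = torch.rand(1, 3, 64, 64)
real = torch.rand(1, 3, 64, 64)
total = (F.mse_loss(restored, real)          # mean square error term
         + perceptual_loss(restored, real)   # perceptual term
         + adversarial_loss(restored))       # adversarial term
```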
  • The network parameters of the image restoration network are then updated according to the function value of the loss function.
  • the network parameters of the image restoration network may be changed according to the function value of the loss function, so that the function value of the loss function obtained by subsequent calculations is as small as possible.
  • the foregoing image restoration network meeting preset requirements includes: the image restoration network meeting at least one of the following conditions:
  • In step 2006, when the image restoration network satisfies at least one of the above conditions (1) to (3), it is determined that the image restoration network meets the preset requirements, and step 2007 is executed, where the training process of the image restoration network ends. When the image restoration network does not meet any of the above conditions (1) to (3), the image restoration network has not yet met the preset requirements, and it is necessary to continue training it, that is, to re-execute steps 2002 to 2006 until an image restoration network meeting the preset requirements is obtained.
  • Step 2007 indicates that the image restoration network has met the preset requirements, and the training process of the image restoration network ends.
  • In addition to training the image restoration network separately based on the acquired training images, the image restoration network can also be jointly trained with the generation network that produces the approximate real low-quality images; through joint training, the approximate real low-quality images obtained in the first stage can be constrained to be as similar as possible to real low-quality images, and an image restoration network with a better image restoration effect is finally obtained.
  • Fig. 11 is a schematic diagram of training an image restoration network according to training images in an embodiment of the present application.
  • the training process shown in FIG. 11 includes steps 3001 to 3010, which are described in detail below.
  • Step 3001 represents starting to use training images to train the image restoration network and the image generation network.
  • The image generation network includes a first generation network and a second generation network.
  • The first generation network is used to process synthetic low-quality images to obtain approximate real low-quality images, and the second generation network is used to process real low-quality images to obtain approximate synthetic low-quality images.
  • step 3002 the network parameters of the image generation network and the network parameters of the image restoration network can be initialized separately, where the image generation network and the image restoration network can be initialized sequentially or simultaneously.
  • This application does not limit the initialization sequence of the image generation network and the image restoration network.
  • synthetic low-quality images and real high-quality images can be used to perform preliminary training on the image restoration network first, and the network parameter values of the image restoration network obtained by the preliminary training are used as the initial values of the network parameters of the image restoration network .
  • the plurality of synthesized low-quality images are obtained by respectively performing synthesis processing on a plurality of real high-quality images, and the synthesis processing includes at least one of blurring processing, noise adding processing, and down-sampling processing.
  • Input at least one real low-quality image among the multiple real low-quality images to a second generation network in the image generation network for processing, to obtain at least one approximately synthesized low-quality image.
  • the multiple real low-quality images and the image to be restored in the above step 3004 are acquired by using the same device, and the image to be restored is an image processed by the image restoration network after the training is completed.
  • The same type of equipment mentioned above may refer to equipment of exactly the same model, for example, cameras of the same model, terminal devices of the same model, video cameras of the same model, and so on.
  • Optionally, the above-mentioned existing real low-quality image and the image to be restored being collected by the same device specifically includes: the existing real low-quality image is collected by a first device, the image to be restored is collected by a second device, the device types of the first device and the second device are the same, and the image acquisition parameters of the first device are the same as those of the second device, where the image acquisition parameters include at least one of focal length, exposure, and shutter time.
  • Since the existing real low-quality image and the image to be restored are acquired by the same type of equipment using the same image acquisition parameters, the existing real low-quality image is closer to the image to be restored, so that the image restoration network trained in this application with reference to the existing real low-quality image has a better image restoration effect when processing the image to be restored.
  • In this way, a cyclic processing process is implemented in the image generation network: through steps 3003 and 3005, the synthesized low-quality image is first processed to obtain an approximate real low-quality image, and the approximate real low-quality image is then processed to obtain a reconstructed synthesized low-quality image.
  • Likewise, a second cyclic processing process is implemented in the image generation network: through steps 3004 and 3006, the real low-quality image is first processed to obtain an approximate synthesized low-quality image, and the approximate synthesized low-quality image is then processed to obtain a reconstructed real low-quality image.
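  • The two cycles can be sketched as follows; a schematic PyTorch fragment in which single convolutions stand in for the first and second generation networks, and the L1 form of the reconstruction difference is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Single convolutions stand in for the two generation networks (assumption).
G1 = nn.Conv2d(3, 3, 3, padding=1)  # first generation network: synthetic -> approx. real
G2 = nn.Conv2d(3, 3, 3, padding=1)  # second generation network: real -> approx. synthetic

synthetic_lq = torch.rand(1, 3, 64, 64)
real_lq = torch.rand(1, 3, 64, 64)

approx_real = G1(synthetic_lq)   # step 3003
recon_synth = G2(approx_real)    # step 3005: closes the first cycle
approx_synth = G2(real_lq)       # step 3004
recon_real = G1(approx_synth)    # step 3006: closes the second cycle

# Second loss function term: pixel differences of the two reconstructions.
cycle_loss = (F.l1_loss(recon_synth, synthetic_lq)
              + F.l1_loss(recon_real, real_lq))
```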
  • The loss function in the above step 3008 includes a first loss function term, a second loss function term, and a third loss function term.
  • The first loss function term and the second loss function term reflect the image loss of the image generation network, and the third loss function term reflects the image loss of the image restoration network.
  • The first loss function term includes the adversarial loss of at least one approximate real low-quality image relative to any one of the multiple real low-quality images, and the adversarial loss of at least one approximate synthetic low-quality image relative to any one of the multiple synthetic low-quality images;
  • the second loss function term includes the difference between the pixel values of at least one reconstructed synthetic low-quality image and the pixel values of the at least one synthetic low-quality image, and the difference between the pixel values of at least one reconstructed real low-quality image and the pixel values of the at least one real low-quality image;
  • the third loss function term includes the mean square error between at least one restored high-quality image and at least one real high-quality image among the multiple real high-quality images, where the at least one synthesized low-quality image is obtained by performing synthesis processing on the at least one real high-quality image.
  • Optionally, the loss function in the foregoing step 3008 may include a fourth loss function term in addition to the foregoing three loss function terms, and the fourth loss function term may include the perceptual loss and the adversarial loss of the at least one restored high-quality image relative to the at least one real high-quality image.
  • When the aforementioned loss function also includes the fourth loss function term, the information reflected by the loss function is more comprehensive; therefore, using this loss function during training can produce an image restoration network with better image restoration performance.
  • the network parameters of the image restoration network and the network parameters of the image generation network may be changed according to the function value of the loss function, so that the function value of the loss function obtained by subsequent calculations is as small as possible.
  • the foregoing image restoration network meets preset requirements, including: the image restoration network meets at least one of the following conditions:
  • In step 3010, when the image restoration network meets at least one of the above conditions (1) to (3), it is determined that the image restoration network meets the preset requirements, and step 3011 is executed, where the training process of the image restoration network ends. When the image restoration network does not meet any of the above conditions (1) to (3), the image restoration network has not yet met the preset requirements, and it is necessary to continue training it, that is, to re-execute steps 3003 to 3010 until an image restoration network meeting the preset requirements is obtained.
  • Step 3011 indicates that the image restoration network has met the preset requirements, and the training process of the image restoration network ends.
  • In this way, the approximate real low-quality images generated by the image generation network can be closer to real low-quality images, so that the finally trained image restoration network has better image restoration performance.
  • the method shown in FIG. 11 may further include:
  • Optionally, the foregoing loss function further includes a fifth loss function term, where the fifth loss function term includes the difference between the pixel values of at least one converted real low-quality image and the pixel values of the at least one real low-quality image, and the difference between the pixel values of at least one converted synthesized low-quality image and the pixel values of the at least one synthesized low-quality image.
  • When the above loss function also includes the fifth loss function term, the information reflected by the loss function is more comprehensive; therefore, using this loss function during training can produce an image restoration network with better image restoration performance.
  • In the method shown in FIG. 11, the image generation network and the image restoration network are jointly trained; that is to say, the training process shown in Figure 11 can be divided into two stages: the first stage is the training of the image generation network, and the second stage is the training of the image restoration network.
  • Specifically, multiple synthetic low-quality images can be input (the multiple synthetic low-quality images may be obtained by performing synthesis processing on multiple real high-quality images); after being processed by the first generation network, approximate real low-quality images are obtained, and the approximate real low-quality images can be sent to the image restoration network for processing to obtain restored high-quality images.
  • FIG. 13 is a schematic diagram of the process of determining the loss function of the first stage.
  • the process shown in FIG. 13 includes steps 4001 to 4011, and steps 4001 to 4011 are respectively described in detail below.
  • Step 4001 represents the start of the first phase of training.
  • In step 4002 and step 4003, multiple real high-quality images and multiple real low-quality images can be obtained respectively, where the multiple real high-quality images are used for the subsequent training of the image restoration network, and the multiple real low-quality images are used to train the image generation network.
  • In step 4004, a synthesized low-quality image can be obtained by blurring the real high-quality image or adding noise to it; a variety of quality-lowering methods can therefore be used in step 4004 to process the real high-quality image to obtain a synthesized low-quality image.
  • step 4005 The process of generating an approximate real low-quality image in step 4005 is similar to the process of step 3003, and will not be described in detail here.
  • step 4006 The process of generating the reconstructed composite low-quality image in step 4006 is similar to the process of step 3005, and will not be described in detail here.
  • step 4007 The process of generating the converted synthetic low-quality image in step 4007 is similar to the process of step 3012, and will not be described in detail here.
  • step 4008 The process of generating an approximately synthesized low-quality image in step 4008 is similar to the process of step 3004, and will not be described in detail here.
  • step 4009 The process of generating and reconstructing the real low-quality image in step 4009 is similar to the process of step 3006, and will not be described in detail here.
  • step 4010 The process of generating the converted real low-quality image in step 4010 is similar to the process in step 3013, and will not be described in detail here.
  • The loss function of the first stage may include the first loss function term and the second loss function term. Further, the above-mentioned loss function of the first stage may also include the fifth loss function term.
  • After the process shown in Figure 13, the loss function of the second stage can be determined. It should be understood that, in the training process, the process of determining the loss function of the first stage and the process of determining the loss function of the second stage may be performed simultaneously or sequentially; this application does not limit their order.
  • FIG. 14 is a schematic diagram of the process of determining the loss function of the second stage.
  • the process shown in FIG. 14 includes steps 5001 to 5004, and steps 5001 to 5004 are respectively described in detail below.
  • Step 5001 represents the start of the second stage of training.
  • The obtaining of the approximate real low-quality image in the foregoing step 5002 can be specifically implemented by obtaining the approximate real low-quality image generated in step 4005 of the first stage.
  • The above-mentioned loss function of the second stage may include the above-mentioned third loss function term. Further, the loss function of the second stage may also include the fourth loss function term.
  • the loss function of the second stage includes the third loss function item and the fourth loss function item, the information reflected by the second stage loss function is more comprehensive.
  • the process of determining the loss function of the first stage may include the a process of the first stage and the b process of the first stage, and the a process of the first stage and the b process of the first stage may occur simultaneously.
  • In the a process of the first stage, a synthesized low-quality image can first be obtained.
  • the synthesized low-quality image is processed through the first generation network to obtain an approximate real low-quality image.
  • the second generation network processes the approximate real low-quality image to obtain a reconstructed composite low-quality image.
  • the loss function of the first stage is determined according to the reconstructed synthetic low-quality image and the synthetic low-quality image.
  • It is also possible to input the real low-quality image and the output approximate real low-quality image into the first discriminant network to obtain the discriminant loss, which then forms part of the loss function of the first stage.
  • In the b process of the first stage, a synthesized low-quality image can be obtained, and at the same time, the input real low-quality image can be processed through the second generation network to obtain an output approximate synthesized low-quality image.
  • The output approximate synthesized low-quality image is processed by the first generation network to obtain a reconstructed real low-quality image, and the loss function of the first stage is then determined according to the reconstructed real low-quality image and the real low-quality image.
  • It is also possible to input the synthesized low-quality image and the output approximate synthesized low-quality image into the second discriminant network to obtain the discriminant loss, and then obtain part of the loss function of the first stage.
  • an approximately real low-quality image and the loss function of the first stage can be obtained.
  • Next, the approximate real low-quality image can be processed through the image restoration network to obtain a restored high-quality image; the restored high-quality image and the input real high-quality image can then be input into the image restoration discriminant network to determine the discriminant loss of the restored high-quality image relative to the high-quality image and the image loss of the restored high-quality image relative to the high-quality image, and the loss function of the second stage is then calculated.
  • After obtaining the loss function of the first stage and the loss function of the second stage, the gradients of the two-stage loss functions with respect to the network parameters of the image restoration network and the image generation network can be calculated, and it can be determined whether the loss functions of the two stages converge. If the loss functions of the two stages converge, the training process ends, and the image restoration network and the image generation network are obtained; if the loss functions of the two stages do not converge, it is necessary to continue updating the network parameters of the image restoration network and the image generation network, and to re-execute the first-stage and second-stage processes until the loss functions of the two stages converge.
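  • The overall two-stage update can be summarized in a schematic training loop; a PyTorch sketch in which single convolutions stand in for the networks, the adversarial terms are omitted for brevity, and the convergence test is an arbitrary assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Single convolutions stand in for the generation and restoration networks.
G1 = nn.Conv2d(3, 3, 3, padding=1)   # synthetic -> approximate real
G2 = nn.Conv2d(3, 3, 3, padding=1)   # real -> approximate synthetic
R = nn.Conv2d(3, 3, 3, padding=1)    # image restoration network
opt = torch.optim.Adam(
    [*G1.parameters(), *G2.parameters(), *R.parameters()], lr=1e-4)

synthetic = torch.rand(4, 3, 32, 32)   # from degrading real HQ images
real_lq = torch.rand(4, 3, 32, 32)
real_hq = torch.rand(4, 3, 32, 32)

for step in range(100):
    approx_real = G1(synthetic)
    # First-stage loss: cycle-consistency terms (adversarial terms omitted).
    loss1 = (F.l1_loss(G2(approx_real), synthetic)
             + F.l1_loss(G1(G2(real_lq)), real_lq))
    # Second-stage loss: restoration MSE against the real high-quality images.
    loss2 = F.mse_loss(R(approx_real), real_hq)
    loss = loss1 + loss2
    opt.zero_grad()
    loss.backward()   # gradients w.r.t. both networks' parameters
    opt.step()
    if loss.item() < 1e-3:   # crude stand-in for the convergence test
        break
```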
  • the training method of the image restoration network of the embodiment of the present application is described in detail above with reference to the accompanying drawings.
  • The following describes the image restoration method of the embodiment of the present application with reference to FIG. 18. It should be understood that the image restoration network used in the method shown in FIG. 18 may be obtained by training using the training method of the image restoration network of the embodiment of the present application.
  • FIG. 18 is a schematic flowchart of an image restoration method according to an embodiment of the present application.
  • the method shown in FIG. 18 includes step 6001 and step 6002, and step 6001 and step 6002 are described below.
  • the above-mentioned image to be restored may be an image that requires image restoration processing.
  • 6002. Use an image restoration network to perform restoration processing on the image to be restored to obtain a restored high-quality image.
  • the definition of the image to be restored is generally low.
  • a restored high-quality image with higher definition can be obtained.
  • the image restoration network in the foregoing step 6002 may be obtained by training according to the training method of the image restoration network of the embodiment of the present application.
  • the image restoration network in step 6002 can be obtained by separately training the image restoration network based on multiple training images, or can be obtained by jointly training the image restoration network and the image generation network based on the training images.
  • Specifically, the image restoration network in the above step 6002 may be obtained by the method shown in FIG. 7, the method shown in FIG. 10, or the method shown in FIG. 11.
  • In this application, the approximate real low-quality images contained in the training image pairs are obtained by performing synthesis processing and realization processing on real high-quality images; that is to say, the approximate real low-quality images contained in the training image pairs are relatively close to real low-quality images. Therefore, training the image restoration network based on multiple training image pairs as proposed in this application can obtain an image restoration network with a better image restoration effect for real low-quality images, and the image restoration method of the present application can accordingly achieve a better image restoration effect when using the trained image restoration network to perform image restoration.
  • The above-mentioned image sharpness includes at least one of the degree of image blur, the distribution of image noise, and the image resolution.
  • the image restoration network in the above step 6002 may be obtained by the training method shown in FIG. 7.
  • the above image restoration network is obtained by training based on multiple training image pairs.
  • Each training image pair in the multiple training image pairs includes a real high-quality image and an approximate real low-quality image.
  • The approximate real low-quality image in each training image pair is obtained by processing the real high-quality image in that training image pair, and the difference between the image sharpness of the approximate real low-quality image and the image sharpness of the existing real low-quality image is within the preset range, where the image sharpness includes at least one of the degree of image blur, the distribution of image noise, and the image resolution.
  • the existing real low-quality image and the image to be restored are collected by the same device.
  • Optionally, the above-mentioned existing real low-quality image and the image to be restored being collected by the same device includes: the existing real low-quality image is collected by a first device, the image to be restored is collected by a second device, the device types of the first device and the second device are the same, and the image acquisition parameters of the first device are the same as those of the second device, where the image acquisition parameters include at least one of focal length, exposure, and shutter time.
  • the existing real low-quality image and the image to be restored are acquired by using the same type of equipment and using the same image sampling parameters, the existing real low-quality image is closer to the image to be restored. Therefore, the image restoration network obtained by referring to the existing real low-quality image training in this application has a better image restoration effect when processing the image to be restored.
  • the following uses a test set to test the image restoration performance of the image restoration network trained by the training method of the image restoration network of the embodiment of the present application.
  • Table 1 shows the performance of the image restoration networks obtained by the existing schemes and the schemes of this application on the NTIRE 17 test set.
  • The data in the table represent the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM), respectively.
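  • For reference, PSNR and SSIM can be computed as follows; a minimal sketch using NumPy and scikit-image, assuming 8-bit images with a peak value of 255.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

restored = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
reference = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(psnr(restored, reference))
print(structural_similarity(restored, reference))  # SSIM in [-1, 1]
```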
  • Existing scheme 1 restores the image to be restored directly with the bicubic interpolation (Bicubic) up-sampling method;
  • existing scheme 2 trains the image restoration network with synthesized approximate real low-quality images and real high-quality images.
  • Scheme 1 of this application (Cycle+SR) first performs the first stage of training to obtain multiple approximate real low-quality images, and then forms multiple training image pairs for the second stage of training to obtain an image restoration network;
  • scheme 2 of this application (CycleSR) trains the image restoration network and the image generation network jointly, using only the mean square error loss function when training the image restoration network;
  • scheme 3 of this application trains the image restoration network and the image generation network jointly, using the mean square error loss function, the perceptual loss function, and the adversarial loss function when training the image restoration network.
  • Next, the image restoration network obtained by the training method of the image restoration network in the embodiment of the application is used to perform image restoration on old video images with lower image quality.
  • Specifically, video images in the new version of "The Legend of the Condor Hero" are acquired as high-quality images, and video images in the old version of "The Legend of the Condor Hero" are acquired as real low-quality images.
  • The video images of the old version of "The Legend of the Condor Hero" here have more complicated degradation: there are problems caused by limited lighting during shooting, and the physical components of the cameras at the time were not good enough, resulting in poor image quality, low resolution, and other issues.
  • The real low-quality images in the old version of "The Legend of the Condor Hero" have no corresponding high-quality images, so the method of the present application shows its superiority in this case.
  • After the image restoration network is trained by combining the high-quality images in the new version of "The Legend of the Condor Hero" with the real low-quality images in the old version, processing the video images in the old version with the trained network can achieve a better image restoration effect than existing solutions.
  • Fig. 19 is a schematic block diagram of an image restoration network training device according to an embodiment of the present application.
  • the training device 8000 of the image restoration network shown in FIG. 19 includes an acquisition unit 8001 and a training unit 8002.
  • the acquisition unit 8001 and the training unit 8002 may be used to execute the training method of the image restoration network in the embodiment of the present application.
  • the acquiring unit 8001 may perform the foregoing step 1001
  • the training unit 8002 may perform the foregoing step 1002.
  • training unit 8002 can also be used to perform the various processes shown in FIG. 10 and FIG. 11.
  • The acquisition unit 8001 in the device 8000 shown in FIG. 19 may be equivalent to the communication interface 9003 in the device 9000 shown in FIG. 20, and the corresponding training images can be obtained through the communication interface 9003; or the acquisition unit 8001 may be equivalent to the processor 9002, in which case the training images can be obtained from the memory 9001 through the processor 9002, or obtained from the outside through the communication interface 9003.
  • FIG. 20 is a schematic diagram of the hardware structure of the training device of the image restoration network according to an embodiment of the present application.
  • the training device 9000 of the image restoration network shown in FIG. 20 includes a memory 9001, a processor 9002, a communication interface 9003, and a bus 9004.
  • the memory 9001, the processor 9002, and the communication interface 9003 implement communication connections between each other through the bus 9004.
  • the memory 9001 may be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
  • The memory 9001 may store a program; when the program stored in the memory 9001 is executed by the processor 9002, the processor 9002 is configured to execute each step of the training method of the image restoration network in the embodiment of the present application.
  • The processor 9002 may adopt a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), or one or more integrated circuits for executing related programs, so as to realize the training method of the image restoration network in the method embodiment of the present application.
  • the processor 9002 may also be an integrated circuit chip with signal processing capabilities.
  • each step of the training method of the image restoration network of the present application can be completed by the integrated logic circuit of the hardware in the processor 9002 or the instructions in the form of software.
  • The aforementioned processor 9002 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application can be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • The software module can be located in a storage medium mature in the field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • The storage medium is located in the memory 9001; the processor 9002 reads the information in the memory 9001 and, in combination with its hardware, completes the functions required by the units included in the training device of the image restoration network, or executes the training method of the image restoration network of the method embodiment of the application.
  • the communication interface 9003 uses a transceiver device such as but not limited to a transceiver to implement communication between the device 9000 and other devices or a communication network. For example, the image to be restored can be obtained through the communication interface 9003.
  • the bus 9004 may include a path for transferring information between various components of the device 9000 (for example, the memory 9001, the processor 9002, and the communication interface 9003).
FIG. 21 is a schematic block diagram of an image restoration apparatus according to an embodiment of this application. The image restoration apparatus 10000 shown in FIG. 21 includes an acquisition unit 10001 and an image restoration unit 10002.

The acquisition unit 10001 and the image restoration unit 10002 may be used to perform the image restoration method of the embodiments of this application. Specifically, the acquisition unit 10001 may perform step 6001 described above, and the image restoration unit 10002 may perform step 6002 described above.

The acquisition unit 10001 in the apparatus 10000 shown in FIG. 21 may be equivalent to the communication interface 11003 in the apparatus 11000 shown in FIG. 22, in which case the image to be restored may be obtained through the communication interface 11003. Alternatively, the acquisition unit 10001 may be equivalent to the processor 11002, in which case the image to be restored may be obtained from the memory 11001 through the processor 11002, or obtained from the outside through the communication interface 11003.
FIG. 22 is a schematic diagram of the hardware structure of an image restoration apparatus according to an embodiment of this application. Similar to the apparatus 10000 described above, the image restoration apparatus 11000 shown in FIG. 22 includes a memory 11001, a processor 11002, a communication interface 11003, and a bus 11004. The memory 11001, the processor 11002, and the communication interface 11003 are communicatively connected to one another through the bus 11004.

The memory 11001 may be a ROM, a static storage device, or a RAM. The memory 11001 may store a program. When the program stored in the memory 11001 is executed by the processor 11002, the processor 11002 and the communication interface 11003 are used to perform each step of the image restoration method in the embodiments of this application.

The processor 11002 may be a general-purpose CPU, a microprocessor, an ASIC, a GPU, or one or more integrated circuits, and is configured to execute related programs to implement the functions required by the units in the image processing apparatus of the embodiments of this application, or to perform the image restoration method in the method embodiments of this application.
The processor 11002 may alternatively be an integrated circuit chip with signal processing capability. In an implementation process, each step of the image restoration method in the embodiments of this application may be completed by an integrated logic circuit of hardware in the processor 11002 or by instructions in the form of software.

The aforementioned processor 11002 may alternatively be a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. Such a processor can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of this application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may reside in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium resides in the memory 11001; the processor 11002 reads the information in the memory 11001 and, in combination with its hardware, performs the functions required by the units included in the image processing apparatus of the embodiments of this application, or performs the image restoration method in the method embodiments of this application.
The communication interface 11003 uses a transceiver apparatus, such as but not limited to a transceiver, to implement communication between the apparatus 11000 and other devices or a communication network. For example, the image to be restored may be obtained through the communication interface 11003.
The bus 11004 may include a path for transferring information between the components of the apparatus 11000 (for example, the memory 11001, the processor 11002, and the communication interface 11003).
It should be noted that, although the apparatus 9000 and the apparatus 11000 described above show only a memory, a processor, and a communication interface, in a specific implementation process a person skilled in the art should understand that the apparatus 9000 and the apparatus 11000 may also include other devices necessary for normal operation. Depending on specific needs, a person skilled in the art should understand that the apparatus 9000 and the apparatus 11000 may further include hardware devices that implement other additional functions. In addition, a person skilled in the art should understand that the apparatus 9000 and the apparatus 11000 may include only the components necessary to implement the embodiments of this application, and need not include all the components shown in FIG. 20 and FIG. 22.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual couplings, direct couplings, or communication connections displayed or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of this application essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage media include any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

This application provides an image restoration method, an image restoration network training method, an apparatus, and a storage medium. It relates to the field of artificial intelligence and, in particular, to the field of computer vision. The method includes: obtaining a plurality of training image pairs, where each training image pair includes one real high-quality image and one approximately real low-quality image, the approximately real low-quality image being obtained by performing synthesis processing and realization processing on the real high-quality image; and then training an image restoration network based on the plurality of training image pairs until an image restoration network whose image restoration performance meets the requirements is obtained. Because each of the plurality of training image pairs includes one approximately real low-quality image, an image restoration network with better image restoration performance can be trained based on these pairs.
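The two-stage construction of a training pair summarized above — synthesis processing (blurring, noise addition, downsampling) followed by realization processing that pushes the synthetic image toward the look of genuine low-quality captures — can be illustrated with a short sketch. This is a minimal illustration only, assuming PyTorch; the tensor layout, the degradation parameters, and the `realization_net` generator are hypothetical placeholders rather than the concrete implementation of this application.

```python
import torch
import torch.nn.functional as F

def synthesize(hq: torch.Tensor, scale: int = 2, noise_sigma: float = 0.05) -> torch.Tensor:
    """Synthesis processing: blur + downsample + additive noise (hq is NCHW in [0, 1])."""
    channels = hq.shape[1]
    blur_kernel = torch.ones(channels, 1, 3, 3, device=hq.device) / 9.0  # simple box blur
    blurred = F.conv2d(hq, blur_kernel, padding=1, groups=channels)      # depthwise convolution
    down = F.interpolate(blurred, scale_factor=1.0 / scale, mode="bicubic")
    return (down + noise_sigma * torch.randn_like(down)).clamp(0.0, 1.0)

def make_training_pair(hq: torch.Tensor, realization_net: torch.nn.Module):
    """Build one (real HQ, approximately real LQ) training pair."""
    synthetic_lq = synthesize(hq)
    with torch.no_grad():
        approx_real_lq = realization_net(synthetic_lq)  # realization processing
    return hq, approx_real_lq
```

In the joint training scheme described in the claims below, `realization_net` would play the role of the first generation network of the image generation network.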


Claims (34)

  1. A method for training an image restoration network, comprising:
    obtaining a plurality of training image pairs, wherein each of the plurality of training image pairs comprises one real high-quality image and one approximately real low-quality image, the approximately real low-quality image in each training image pair is obtained by processing the real high-quality image in that training image pair, and a difference between an image clarity of the approximately real low-quality image in each training image pair and an image clarity of an existing real low-quality image is within a preset range; and
    training the image restoration network based on the plurality of training image pairs until an image restoration performance of the image restoration network meets a preset requirement, wherein the existing real low-quality image and an image to be restored are captured by a same kind of device, and the image to be restored is an image processed by the image restoration network after training is completed.
  2. The training method according to claim 1, wherein the training the image restoration network based on the plurality of training image pairs until the image restoration performance of the image restoration network meets the preset requirement comprises:
    step 1: initializing network parameters of the image restoration network to obtain initial values of the network parameters of the image restoration network;
    step 2: inputting the approximately real low-quality image of at least one of the plurality of training image pairs into the image restoration network for processing, to obtain at least one restored high-quality image;
    step 3: determining a function value of a loss function based on a difference between the at least one restored high-quality image and the real high-quality image of the at least one training image pair;
    step 4: updating the network parameters of the image restoration network based on the function value of the loss function; and
    repeating steps 2 to 4 until the image restoration performance of the image restoration network meets the preset requirement.
  3. The training method according to claim 2, wherein the loss function comprises a mean squared error between the at least one restored high-quality image and the real high-quality image of the at least one training image pair.
  4. The training method according to claim 3, wherein the loss function further comprises a perceptual loss and an adversarial loss of the at least one restored high-quality image relative to the real high-quality image of the at least one training image pair.
  5. The training method according to any one of claims 1 to 4, wherein the image clarity comprises at least one of an image blur degree, an image noise distribution, and an image resolution.
  6. The training method according to any one of claims 1 to 5, wherein the approximately real low-quality image in each training image pair being obtained by processing the real high-quality image in that training image pair comprises:
    the approximately real low-quality image in each training image pair is obtained by processing the real high-quality image in that training image pair with at least one of blurring, noise addition, and downsampling.
  7. The training method according to any one of claims 1 to 6, wherein the existing real low-quality image and the image to be restored being captured by a same kind of device comprises:
    the existing real low-quality image is captured by a first device, the image to be restored is captured by a second device, the first device and the second device are of a same device type, image capture parameters of the first device are the same as image capture parameters of the second device, and the image capture parameters comprise at least one of a focal length, an exposure, and a shutter time.
  8. The training method according to any one of claims 1 to 7, wherein the approximately real low-quality image in each training image pair is obtained by processing the real high-quality image in that training image pair with a pre-trained image generation network.
  9. A method for training an image restoration network, comprising:
    step A: initializing network parameters of an image generation network and network parameters of an image restoration network, to obtain initial values of the network parameters of the image generation network and initial values of the network parameters of the image restoration network;
    step B: inputting at least one of a plurality of synthetic low-quality images into a first generation network of the image generation network for processing, to obtain at least one approximately real low-quality image, wherein the plurality of synthetic low-quality images are obtained by separately performing synthesis processing on a plurality of real high-quality images, and the synthesis processing comprises at least one of blurring, noise addition, and downsampling;
    step C: inputting at least one of a plurality of real low-quality images into a second generation network of the image generation network for processing, to obtain at least one approximately synthetic low-quality image, wherein the plurality of real low-quality images and an image to be restored are captured by a same kind of device, and the image to be restored is an image processed by the image restoration network after training is completed;
    step D: inputting the at least one approximately real low-quality image into the second generation network for processing, to obtain at least one reconstructed synthetic low-quality image;
    step E: inputting the at least one approximately synthetic low-quality image into the first generation network for processing, to obtain at least one reconstructed real low-quality image;
    step F: inputting the at least one approximately real low-quality image into the image restoration network for processing, to obtain at least one restored high-quality image;
    step G: determining a loss function, the loss function comprising a first loss term, a second loss term, and a third loss term, wherein
    the first loss term comprises an adversarial loss of the at least one approximately real low-quality image relative to any one of the plurality of real low-quality images, and an adversarial loss of the at least one approximately synthetic low-quality image relative to any one of the plurality of synthetic low-quality images;
    the second loss term comprises a difference between pixel values of the at least one reconstructed synthetic low-quality image and pixel values of the at least one synthetic low-quality image, and a difference between pixel values of the at least one reconstructed real low-quality image and pixel values of the at least one real low-quality image; and
    the third loss term comprises a mean squared error between the at least one restored high-quality image and at least one of the plurality of real high-quality images, wherein the at least one synthetic low-quality image is obtained by performing the synthesis processing on the at least one real high-quality image;
    step H: updating the network parameters of the image generation network and the network parameters of the image restoration network based on a function value of the loss function; and
    repeating steps B to H until an image restoration performance of the image restoration network meets a preset requirement.
  10. The training method according to claim 9, wherein the loss function further comprises a fourth loss term, the fourth loss term comprising a perceptual loss and an adversarial loss of the at least one restored high-quality image relative to the at least one real high-quality image.
  11. The training method according to claim 9 or 10, further comprising:
    inputting the at least one synthetic low-quality image into the second generation network for processing, to obtain at least one converted synthetic low-quality image; and
    inputting the at least one real low-quality image into the first generation network for processing, to obtain at least one converted real low-quality image;
    wherein the loss function further comprises a fifth loss term, the fifth loss term comprising a difference between pixel values of the at least one converted real low-quality image and pixel values of the at least one real low-quality image, and a difference between pixel values of the at least one converted synthetic low-quality image and pixel values of the at least one synthetic low-quality image.
  12. The training method according to any one of claims 9 to 11, wherein the existing real low-quality image and the image to be restored being captured by a same kind of device comprises:
    the existing real low-quality image is captured by a first device, the image to be restored is captured by a second device, the first device and the second device are of a same device type, image capture parameters of the first device are the same as image capture parameters of the second device, and the image capture parameters comprise at least one of a focal length, an exposure, and a shutter time.
  13. An image restoration method, comprising:
    obtaining an image to be restored; and
    performing restoration processing on the image to be restored by using an image restoration network, to obtain a restored high-quality image, an image clarity of the restored high-quality image being higher than an image clarity of the image to be restored;
    wherein the image restoration network is trained based on a plurality of training image pairs, each of the plurality of training image pairs comprises one real high-quality image and one approximately real low-quality image, the approximately real low-quality image in each training image pair is obtained by processing the real high-quality image in that training image pair, a difference between an image clarity of the approximately real low-quality image in each training image pair and an image clarity of an existing real low-quality image is within a preset range, the image clarity comprises at least one of an image blur degree, an image noise distribution, and an image resolution, and the existing real low-quality image and the image to be restored are captured by a same kind of device.
  14. The image restoration method according to claim 13, wherein the image clarity comprises at least one of an image blur degree, an image noise distribution, and an image resolution.
  15. The image restoration method according to claim 13 or 14, wherein the existing real low-quality image and the image to be restored being captured by a same kind of device comprises:
    the existing real low-quality image is captured by a first device, the image to be restored is captured by a second device, the first device and the second device are of a same device type, image capture parameters of the first device are the same as image capture parameters of the second device, and the image capture parameters comprise at least one of a focal length, an exposure, and a shutter time.
  16. A training apparatus for an image restoration network, comprising:
    an obtaining unit configured to obtain a plurality of training image pairs, wherein each of the plurality of training image pairs comprises one real high-quality image and one approximately real low-quality image, the approximately real low-quality image in each training image pair is obtained by processing the real high-quality image in that training image pair, and a difference between an image clarity of the approximately real low-quality image in each training image pair and an image clarity of an existing real low-quality image is within a preset range; and
    a training unit configured to train the image restoration network based on the plurality of training image pairs until an image restoration performance of the image restoration network meets a preset requirement, wherein the existing real low-quality image and an image to be restored are captured by a same kind of device, and the image to be restored is an image processed by the image restoration network after training is completed.
  17. The training apparatus according to claim 16, wherein the training unit is configured to perform the following steps:
    step 1: initializing network parameters of the image restoration network to obtain initial values of the network parameters of the image restoration network;
    step 2: inputting the approximately real low-quality image of at least one of the plurality of training image pairs into the image restoration network for processing, to obtain at least one restored high-quality image;
    step 3: determining a function value of a loss function based on a difference between the at least one restored high-quality image and the real high-quality image of the at least one training image pair;
    step 4: updating the network parameters of the image restoration network based on the function value of the loss function; and
    repeating steps 2 to 4 until the image restoration performance of the image restoration network meets the preset requirement.
  18. The training apparatus according to claim 17, wherein the loss function comprises a mean squared error between the at least one restored high-quality image and the real high-quality image of the at least one training image pair.
  19. The training apparatus according to claim 18, wherein the loss function further comprises a perceptual loss and an adversarial loss of the at least one restored high-quality image relative to the real high-quality image of the at least one training image pair.
  20. The training apparatus according to any one of claims 16 to 19, wherein the image clarity comprises at least one of an image blur degree, an image noise distribution, and an image resolution.
  21. The training apparatus according to any one of claims 16 to 20, wherein the approximately real low-quality image in each training image pair being obtained by processing the real high-quality image in that training image pair comprises:
    the approximately real low-quality image in each training image pair is obtained by processing the real high-quality image in that training image pair with at least one of blurring, noise addition, and downsampling.
  22. The training apparatus according to any one of claims 16 to 21, wherein the existing real low-quality image and the image to be restored being captured by a same kind of device comprises:
    the existing real low-quality image is captured by a first device, the image to be restored is captured by a second device, the first device and the second device are of a same device type, image capture parameters of the first device are the same as image capture parameters of the second device, and the image capture parameters comprise at least one of a focal length, an exposure, and a shutter time.
  23. The training apparatus according to any one of claims 16 to 21, wherein the approximately real low-quality image in each training image pair is obtained by processing the real high-quality image in that training image pair with a pre-trained image generation network.
  24. A training apparatus for an image restoration network, comprising:
    an initialization unit configured to perform step A:
    step A: initializing network parameters of an image generation network and network parameters of an image restoration network, to obtain initial values of the network parameters of the image generation network and initial values of the network parameters of the image restoration network; and
    a training unit configured to repeat steps B to H until an image restoration performance of the image restoration network meets a preset requirement:
    step B: inputting at least one of a plurality of synthetic low-quality images into a first generation network of the image generation network for processing, to obtain at least one approximately real low-quality image, wherein the plurality of synthetic low-quality images are obtained by separately performing synthesis processing on a plurality of real high-quality images, and the synthesis processing comprises at least one of blurring, noise addition, and downsampling;
    step C: inputting at least one of a plurality of real low-quality images into a second generation network of the image generation network for processing, to obtain at least one approximately synthetic low-quality image, wherein the plurality of real low-quality images and an image to be restored are captured by a same kind of device, and the image to be restored is an image processed by the image restoration network after training is completed;
    step D: inputting the at least one approximately real low-quality image into the second generation network for processing, to obtain at least one reconstructed synthetic low-quality image;
    step E: inputting the at least one approximately synthetic low-quality image into the first generation network for processing, to obtain at least one reconstructed real low-quality image;
    step F: inputting the at least one approximately real low-quality image into the image restoration network for processing, to obtain at least one restored high-quality image;
    step G: determining a loss function, the loss function comprising a first loss term, a second loss term, and a third loss term, wherein
    the first loss term comprises an adversarial loss of the at least one approximately real low-quality image relative to any one of the plurality of real low-quality images, and an adversarial loss of the at least one approximately synthetic low-quality image relative to any one of the plurality of synthetic low-quality images;
    the second loss term comprises a difference between pixel values of the at least one reconstructed synthetic low-quality image and pixel values of the at least one synthetic low-quality image, and a difference between pixel values of the at least one reconstructed real low-quality image and pixel values of the at least one real low-quality image; and
    the third loss term comprises a mean squared error between the at least one restored high-quality image and at least one of the plurality of real high-quality images, wherein the at least one synthetic low-quality image is obtained by performing the synthesis processing on the at least one real high-quality image; and
    step H: updating the network parameters of the image generation network and the network parameters of the image restoration network based on a function value of the loss function.
  25. The training apparatus according to claim 24, wherein the loss function further comprises a fourth loss term, the fourth loss term comprising a perceptual loss and an adversarial loss of the at least one restored high-quality image relative to the at least one real high-quality image.
  26. The training apparatus according to claim 24 or 25, wherein the loss function comprises a fifth loss term, and the training unit is further configured to repeatedly perform step I and step J:
    step I: inputting the at least one synthetic low-quality image into the second generation network for processing, to obtain at least one converted synthetic low-quality image; and
    step J: inputting the at least one real low-quality image into the first generation network for processing, to obtain at least one converted real low-quality image;
    wherein the fifth loss term comprises a difference between pixel values of the at least one converted real low-quality image and pixel values of the at least one real low-quality image, and a difference between pixel values of the at least one converted synthetic low-quality image and pixel values of the at least one synthetic low-quality image.
  27. The training apparatus according to any one of claims 24 to 26, wherein the existing real low-quality image and the image to be restored being captured by a same kind of device comprises:
    the existing real low-quality image is captured by a first device, the image to be restored is captured by a second device, the first device and the second device are of a same device type, image capture parameters of the first device are the same as image capture parameters of the second device, and the image capture parameters comprise at least one of a focal length, an exposure, and a shutter time.
  28. An image restoration apparatus, comprising:
    an obtaining unit configured to obtain an image to be restored; and
    an image restoration unit configured to perform restoration processing on the image to be restored by using an image restoration network, to obtain a restored high-quality image, an image clarity of the restored high-quality image being higher than an image clarity of the image to be restored;
    wherein the image restoration network is trained based on a plurality of training image pairs, each of the plurality of training image pairs comprises one real high-quality image and one approximately real low-quality image, the approximately real low-quality image in each training image pair is obtained by processing the real high-quality image in that training image pair, a difference between an image clarity of the approximately real low-quality image in each training image pair and an image clarity of an existing real low-quality image is within a preset range, the image clarity comprises at least one of an image blur degree, an image noise distribution, and an image resolution, and the existing real low-quality image and the image to be restored are captured by a same kind of device.
  29. The image restoration apparatus according to claim 28, wherein the image clarity comprises at least one of an image blur degree, an image noise distribution, and an image resolution.
  30. The image restoration apparatus according to claim 28 or 29, wherein the existing real low-quality image and the image to be restored being captured by a same kind of device comprises:
    the existing real low-quality image is captured by a first device, the image to be restored is captured by a second device, the first device and the second device are of a same device type, image capture parameters of the first device are the same as image capture parameters of the second device, and the image capture parameters comprise at least one of a focal length, an exposure, and a shutter time.
  31. A computer-readable storage medium, wherein the computer-readable medium stores program code for execution by a device, and the program code comprises instructions for performing the method according to any one of claims 1 to 8 or claims 9 to 12.
  32. A computer-readable storage medium, wherein the computer-readable medium stores program code for execution by a device, and the program code comprises instructions for performing the method according to any one of claims 13 to 15.
  33. A chip, wherein the chip comprises a processor and a data interface, and the processor reads, through the data interface, instructions stored in a memory, to perform the method according to any one of claims 1 to 8 or claims 9 to 12.
  34. A chip, wherein the chip comprises a processor and a data interface, and the processor reads, through the data interface, instructions stored in a memory, to perform the method according to any one of claims 13 to 15.
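The step 1 to step 4 loop of claim 2, combined with the mean-squared-error loss of claim 3, maps onto an ordinary supervised training loop. The sketch below is a hedged illustration assuming PyTorch: `restore_net` and `loader` are assumed names, the Adam optimizer is a choice this application does not prescribe, and the stopping conditions mirror the "preset number of updates" and "loss below a preset value" conditions described in the training method.

```python
import torch

def train_restoration_network(restore_net, loader, max_updates=100_000, loss_threshold=1e-4):
    """Steps 1-4 of claim 2, repeated until a preset requirement is met."""
    optimizer = torch.optim.Adam(restore_net.parameters(), lr=1e-4)  # optimizer choice is an assumption
    mse = torch.nn.MSELoss()
    updates = 0
    while updates < max_updates:                     # preset number of parameter updates
        for hq, approx_real_lq in loader:            # each batch holds at least one training image pair
            restored = restore_net(approx_real_lq)   # step 2: process the approximately real LQ image
            loss = mse(restored, hq)                 # step 3: loss from the difference to the real HQ image
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                         # step 4: update the network parameters
            updates += 1
            if loss.item() <= loss_threshold or updates >= max_updates:
                return restore_net                   # loss below a preset value, or update budget reached
    return restore_net
```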
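Claims 4 and 10 add a perceptual loss, that is, a difference computed between feature maps of the restored image and the real high-quality image. A hedged sketch follows, assuming torchvision's pretrained VGG16 as the feature extractor; the claims do not name a specific feature network, so this choice is an assumption.

```python
import torch
from torchvision.models import vgg16

class PerceptualLoss(torch.nn.Module):
    """Mean squared difference between feature maps of two images (feature network is an assumption)."""
    def __init__(self, layers: int = 16):
        super().__init__()
        self.features = vgg16(weights="IMAGENET1K_V1").features[:layers].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)  # the feature extractor stays fixed during training

    def forward(self, restored: torch.Tensor, real_hq: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.mse_loss(self.features(restored), self.features(real_hq))
```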
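Claim 9 couples a CycleGAN-style pair of generators (a first generation network mapping synthetic to approximately real, and a second mapping real to approximately synthetic) with the restoration network, and assembles three loss terms. The following sketch, assuming PyTorch, shows one way steps B through G could be wired together; the networks `gen_s2r`, `gen_r2s`, `restore_net` and the discriminators `disc_real`, `disc_syn` are hypothetical, and the L1 form of the pixel-value differences and the unweighted sum are assumptions.

```python
import torch
import torch.nn.functional as F

def joint_losses(gen_s2r, gen_r2s, restore_net, disc_real, disc_syn, syn_lq, real_lq, hq):
    """Assemble the first, second, and third loss terms of claim 9 (steps B-G)."""
    approx_real = gen_s2r(syn_lq)           # step B: synthetic -> approximately real
    approx_syn = gen_r2s(real_lq)           # step C: real -> approximately synthetic
    recon_syn = gen_r2s(approx_real)        # step D: cycle back to the synthetic domain
    recon_real = gen_s2r(approx_syn)        # step E: cycle back to the real domain
    restored = restore_net(approx_real)     # step F: restore the approximately real image

    # First loss term: adversarial losses in the two low-quality domains.
    pred_real = disc_real(approx_real)
    pred_syn = disc_syn(approx_syn)
    adversarial = F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real)) \
                + F.binary_cross_entropy_with_logits(pred_syn, torch.ones_like(pred_syn))

    # Second loss term: pixel-value differences of the cycle reconstructions.
    cycle = F.l1_loss(recon_syn, syn_lq) + F.l1_loss(recon_real, real_lq)

    # Third loss term: MSE between the restored image and the real high-quality image.
    restoration = F.mse_loss(restored, hq)

    return adversarial + cycle + restoration  # step G: total loss (relative weights are an assumption)
```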
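The image restoration method of claim 13 itself reduces to a single forward pass through the trained network. A minimal usage sketch, assuming a trained `restore_net` and torchvision for image loading; the file path is illustrative only.

```python
import torch
from torchvision.io import read_image

def restore(restore_net: torch.nn.Module, path: str) -> torch.Tensor:
    """Obtain the image to be restored and return the restored high-quality image."""
    lq = read_image(path).float().unsqueeze(0) / 255.0  # NCHW tensor in [0, 1]
    restore_net.eval()
    with torch.no_grad():
        return restore_net(lq).clamp(0.0, 1.0)

# Example usage (hypothetical path):
# hq = restore(restore_net, "frames/old_video_frame_0001.png")
```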

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910834228.3A 2019-09-04 2019-09-04 Image restoration method, image restoration network training method, apparatus and storage medium
CN201910834228.3 2019-09-04

Publications (1)

Publication Number Publication Date
WO2021042774A1


Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093142 2019-09-04 2020-05-29 Image restoration method, image restoration network training method, apparatus and storage medium

Country Status (2)

Country Link
CN (1) CN112446835B (zh)
WO (1) WO2021042774A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781343A (zh) * 2021-09-13 2021-12-10 叠境数字科技(上海)有限公司 一种超分辨率图像质量提升方法
CN113793396A (zh) * 2021-09-17 2021-12-14 支付宝(杭州)信息技术有限公司 一种基于对抗生成网络训练图像重构模型的方法
WO2024109910A1 (zh) * 2022-11-26 2024-05-30 华为技术有限公司 一种生成模型训练方法、数据转换方法以及装置

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222144B (zh) * 2021-05-31 2022-12-27 北京有竹居网络技术有限公司 图像修复模型的训练方法及图像修复方法、装置及设备
CN114584675B (zh) * 2022-05-06 2022-08-02 中国科学院深圳先进技术研究院 一种自适应视频增强方法和装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040184657A1 (en) * 2003-03-18 2004-09-23 Chin-Teng Lin Method for image resolution enhancement
CN109255769A * 2018-10-25 2019-01-22 厦门美图之家科技有限公司 Training method and training model for image enhancement network, and image enhancement method
CN109993712A * 2019-04-01 2019-07-09 腾讯科技(深圳)有限公司 Training method for image processing model, image processing method, and related device
CN110163237A * 2018-11-08 2019-08-23 腾讯科技(深圳)有限公司 Model training and image processing method, apparatus, medium, and electronic device
CN111105375A * 2019-12-17 2020-05-05 北京金山云网络技术有限公司 Image generation method, model training method therefor, apparatus, and electronic device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10943171B2 (en) * 2017-09-01 2021-03-09 Facebook, Inc. Sparse neural network training optimization
CN108537733B * 2018-04-11 2022-03-11 南京邮电大学 Super-resolution reconstruction method based on multi-path deep convolutional neural network
RU2698402C1 * 2018-08-30 2019-08-26 Samsung Electronics Co., Ltd. Method for training a convolutional neural network for image restoration and system for generating an image depth map (variants)
CN110163235B * 2018-10-11 2023-07-11 腾讯科技(深圳)有限公司 Image enhancement model training, image enhancement method, apparatus, and storage medium
CN109615582B * 2018-11-30 2023-09-01 北京工业大学 Face image super-resolution reconstruction method based on attribute-description generative adversarial network
CN109949219B * 2019-01-12 2021-03-26 深圳先进技术研究院 Super-resolution image reconstruction method, apparatus, and device
CN109671022B * 2019-01-22 2022-11-18 北京理工大学 Image texture enhancement super-resolution method based on deep feature translation network

Also Published As

Publication number Publication date
CN112446835A (zh) 2021-03-05
CN112446835B (zh) 2024-06-18

Similar Documents

Publication Publication Date Title
WO2021043168A1 (zh) Training method for pedestrian re-identification network, and pedestrian re-identification method and apparatus
US12008797B2 (en) Image segmentation method and image processing apparatus
WO2021042774A1 (zh) Image restoration method, image restoration network training method, apparatus, and storage medium
WO2021018163A1 (zh) Neural network search method and apparatus
WO2020216227A1 (zh) Image classification method, and data processing method and apparatus
WO2020177607A1 (zh) Image denoising method and apparatus
WO2021135657A1 (zh) Image processing method and apparatus, and image processing system
WO2021164731A1 (zh) Image enhancement method and image enhancement apparatus
WO2021043273A1 (zh) Image enhancement method and apparatus
WO2022134971A1 (zh) Noise reduction model training method and related apparatus
WO2021164234A1 (zh) Image processing method and image processing apparatus
WO2021063341A1 (zh) Image enhancement method and apparatus
WO2022001372A1 (zh) Neural network training method, and image processing method and apparatus
WO2021018245A1 (zh) Image classification method and apparatus
WO2022022288A1 (zh) Image processing method and apparatus
CN113076685A (zh) Training method for image reconstruction model, image reconstruction method, and apparatus therefor
CN111951195A (zh) Image enhancement method and apparatus
CN113011562A (zh) Model training method and apparatus
WO2024002211A1 (zh) Image processing method and related apparatus
WO2021018251A1 (zh) Image classification method and apparatus
CN113065645A (zh) Siamese attention network, image processing method and apparatus
WO2022165722A1 (zh) Monocular depth estimation method, apparatus, and device
CN117651965A (zh) High-definition image manipulation method and system using neural networks
CN115131256A (zh) Image processing model, and training method and apparatus for image processing model
US20220215617A1 (en) Viewpoint image processing method and related device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20860790

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20860790

Country of ref document: EP

Kind code of ref document: A1