WO2020118902A1 - Image processing method and image processing system - Google Patents

Image processing method and image processing system

Info

Publication number
WO2020118902A1
WO2020118902A1 (application PCT/CN2019/075966)
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
repair unit
exposure
gray
Prior art date
Application number
PCT/CN2019/075966
Other languages
English (en)
French (fr)
Inventor
濮怡莹
陈思宇
金羽锋
Original Assignee
深圳市华星光电半导体显示技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市华星光电半导体显示技术有限公司
Publication of WO2020118902A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/90: Dynamic range modification of images or parts thereof
    • G06T5/92: Dynamic range modification of images or parts thereof based on global image properties
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/741: Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20172: Image enhancement details
    • G06T2207/20208: High dynamic range [HDR] image processing

Definitions

  • the present invention relates to the field of display technology, and in particular, to an image processing method and image processing system.
  • HDR images can accurately describe differences in brightness, from faint starlight to bright sunlight, thereby providing viewers with a higher quality viewing experience.
  • Virtual HDR (pseudo-HDR) refers to processing a low-dynamic-range (LDR) picture into another LDR picture through a tone-mapping algorithm and displaying it on an LDR display device, so that the result reaches or approaches the display effect of an HDR picture. The essence of such an algorithm is to bring out the detail and contrast in the picture to the greatest possible extent. It requires neither expensive HDR display equipment nor hard-to-obtain HDR sources.
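  • As an illustrative aside (not the method of the present invention), the following minimal sketch shows the kind of LDR-to-LDR mapping that a simple global tone curve performs; the gamma and contrast values are arbitrary assumptions, chosen only to demonstrate the idea of bringing out detail and contrast on an ordinary LDR display.

```python
import numpy as np

def simple_tone_map(ldr, gamma=0.6, contrast=1.2):
    """Illustrative global tone curve: lift shadow detail (gamma < 1),
    stretch contrast around mid-grey, and clip back to the LDR range."""
    x = ldr.astype(np.float32) / 255.0                   # normalise 8-bit input
    x = np.power(x, gamma)                               # brighten dark regions
    x = np.clip((x - 0.5) * contrast + 0.5, 0.0, 1.0)    # global contrast stretch
    return (x * 255.0).astype(np.uint8)                  # back to an ordinary LDR image
```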
  • A neural network is a mathematical model that processes information with a structure similar to the synaptic connections of the brain. Its greatest advantage is that it can be used as a mechanism for approximating arbitrary functions, "learning" from observed data.
  • In recent years, neural networks have been used increasingly in image algorithms such as super-resolution, denoising, and style transfer.
  • However, the architecture design and the loss function settings of the neural network often play a decisive role in the quality of the results.
  • In practice, the design of the network architecture is not a matter of "the more complicated the better"; more and more evidence shows that a suitable and simple architecture is often what is needed.
  • An object of the present invention is to provide an image processing method that can realize high-quality virtual HDR display simply and quickly.
  • Another object of the present invention is to provide an image processing system that can realize high-quality virtual HDR display simply and quickly.
  • the present invention provides an image processing method, including the following steps:
  • Step S1: Acquire an image set. The image set includes a plurality of image pairs; each image pair includes an original image and a target image corresponding to the original image, and the original image and the target image have the same size.
  • Step S2: Provide an image processing module with a neural-network architecture and train it with the image set, so that when the image processing module receives an original image, the difference between the result image it outputs and the target image of that original image is minimized.
  • Step S3: Provide an image to be processed and input it into the trained image processing module to obtain the virtual HDR image corresponding to the image to be processed.
  • the image processing module includes an exposure repair unit, a color repair unit, and an overall adjustment unit connected to both the exposure repair unit and the color repair unit.
  • the step S2 specifically includes:
  • Step S21: Input each original image and its target image into the exposure repair unit and train the exposure repair unit, so that when it receives an original image, the difference between the exposure value of the first transition image it outputs and the exposure value of the target image corresponding to that original image is minimized.
  • Step S22: Input each original image and its target image into the color repair unit and train the color repair unit, so that when it receives an original image, the difference between the color of the second transition image it outputs and the color of the target image corresponding to that original image is minimized.
  • Step S23: Input each first transition image output by the trained exposure repair unit, each second transition image output by the trained color repair unit, and each target image into the overall adjustment unit, and train the overall adjustment unit so that, when it receives a first transition image and a second transition image generated from the same original image, the difference between the result image it outputs and the target image corresponding to that original image is minimized.
  • In step S3, the image to be processed is fed into both the exposure repair unit and the color repair unit; after processing by the exposure repair unit, the color repair unit, and the overall adjustment unit, the virtual HDR image corresponding to the image to be processed is output from the overall adjustment unit.
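  • A minimal wiring sketch of this two-branch pipeline is shown below (PyTorch is used only for illustration; how the two transition images are fused before the overall adjustment unit is not specified in the text, and channel concatenation is assumed here):

```python
import torch
import torch.nn as nn

class VirtualHDRPipeline(nn.Module):
    """The same input feeds the exposure repair unit ("net1") and the
    color repair unit ("net2"); the overall adjustment unit ("net3")
    combines both transition images into the virtual HDR result."""
    def __init__(self, exposure_unit, color_unit, adjust_unit):
        super().__init__()
        self.exposure_unit = exposure_unit
        self.color_unit = color_unit
        self.adjust_unit = adjust_unit

    def forward(self, image):
        first_transition = self.exposure_unit(image)      # exposure-repaired branch
        second_transition = self.color_unit(image)        # color-repaired branch
        fused = torch.cat([first_transition, second_transition], dim=1)  # fusion by concatenation (assumption)
        return self.adjust_unit(fused)                    # virtual HDR output
```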
  • The exposure repair unit adopts a multi-layer dilated convolution structure, and its loss function Loss_grey is defined in terms of the following quantities: K is one pixel of the input image, P is the set of all pixels of the input image, and N is the total number of pixels of the input image; net1_grey(K) is the gray channel value of the pixel corresponding to pixel K in the first transition image output by the exposure repair unit, and GT_grey(K) is the gray channel value of the pixel corresponding to pixel K in the target image corresponding to the input image; Loss_grey is the exposure loss value of the first transition image relative to the target image. The gray channel value is the average of the red, green, and blue gray levels of a pixel.
  • The color repair unit is a multi-layer convolution structure, and its loss function Loss_color is defined in terms of: net2_rg(K) and net2_yb(K), the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the second transition image output by the color repair unit; GT_rg(K) and GT_yb(K), the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the target image corresponding to the input image; and Loss_color, the color loss value of the second transition image relative to the target image. The red-green channel value is the difference between the red and green gray levels of a pixel; the yellow-blue channel value is the difference between the yellow and blue gray levels of a pixel, where the yellow gray level is half of the sum of the red and green gray levels of that pixel.
  • The overall adjustment unit is a multi-layer convolution structure, and its loss function Loss_end is defined in terms of: net3(K), the gray value of the pixel corresponding to pixel K in the result image output by the overall adjustment unit; GT(K), the gray value of the pixel corresponding to pixel K in the target image corresponding to the input image; and Loss_end, the image loss value of the result image relative to the target image. The gray value of a pixel is the sum of its red, green, and blue gray levels.
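  • The channel definitions above translate directly into code. The sketch below assumes the per-pixel distance used in the loss formulas (which appear only as formula images in the original filing) is a mean absolute difference; that choice is an assumption, not a statement of the patent's exact formulas.

```python
import torch

def gray_channel(img):                      # img: (N, 3, H, W), channels ordered R, G, B
    r, g, b = img[:, 0], img[:, 1], img[:, 2]
    return (r + g + b) / 3.0                # gray channel value: average of R, G, B

def rg_channel(img):
    r, g = img[:, 0], img[:, 1]
    return r - g                            # red-green channel: R minus G

def yb_channel(img):
    r, g, b = img[:, 0], img[:, 1], img[:, 2]
    return (r + g) / 2.0 - b                # yellow = (R + G) / 2, minus blue

def loss_grey(net1_out, target):            # exposure loss (distance form assumed)
    return torch.mean(torch.abs(gray_channel(net1_out) - gray_channel(target)))

def loss_color(net2_out, target):           # color loss over both opponent channels
    return torch.mean(torch.abs(rg_channel(net2_out) - rg_channel(target)) +
                      torch.abs(yb_channel(net2_out) - yb_channel(target)))

def loss_end(net3_out, target):             # gray value: sum of R, G, B
    return torch.mean(torch.abs(net3_out.sum(dim=1) - target.sum(dim=1)))
```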
  • The present invention also provides an image processing system, including a sample acquisition module, a training module connected to the sample acquisition module, and an image processing module with a neural-network architecture connected to the training module.
  • The sample acquisition module is used to acquire an image set; the image set includes a plurality of image pairs, each image pair includes an original image and a target image corresponding to the original image, and the original image and the target image have the same size.
  • The training module is used to train the image processing module with the image set, so that when the image processing module receives an original image, the difference between the result image it outputs and the target image of that original image is minimized.
  • The image processing module is configured, after being trained by the training module, to process an image to be processed and generate the virtual HDR image corresponding to it.
  • the image processing module includes an exposure repair unit, a color repair unit, and an overall adjustment unit connected to both the exposure repair unit and the color repair unit.
  • The training module trains the image processing module as follows:
  • Each first transition image output by the trained exposure repair unit, each second transition image output by the trained color repair unit, and each target image are input into the overall adjustment unit, and the overall adjustment unit is trained so that, when it receives a first transition image and a second transition image generated from the same original image, the difference between the result image it outputs and the target image corresponding to that original image is minimized.
  • The image processing module receives the image to be processed at the exposure repair unit and the color repair unit; after processing by the exposure repair unit, the color repair unit, and the overall adjustment unit, the virtual HDR image corresponding to the image to be processed is output from the overall adjustment unit.
  • The exposure repair unit adopts a multi-layer dilated convolution structure, and its loss function Loss_grey is defined in terms of the following quantities: K is one pixel of the input image, P is the set of all pixels of the input image, and N is the total number of pixels of the input image; net1_grey(K) is the gray channel value of the pixel corresponding to pixel K in the first transition image output by the exposure repair unit, and GT_grey(K) is the gray channel value of the pixel corresponding to pixel K in the target image corresponding to the input image; Loss_grey is the exposure loss value of the first transition image relative to the target image. The gray channel value is the average of the red, green, and blue gray levels of a pixel.
  • The color repair unit is a multi-layer convolution structure, and its loss function Loss_color is defined in terms of: net2_rg(K) and net2_yb(K), the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the second transition image output by the color repair unit; GT_rg(K) and GT_yb(K), the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the target image corresponding to the input image; and Loss_color, the color loss value of the second transition image relative to the target image. The red-green channel value is the difference between the red and green gray levels of a pixel; the yellow-blue channel value is the difference between the yellow and blue gray levels of a pixel, where the yellow gray level is half of the sum of the red and green gray levels of that pixel.
  • The overall adjustment unit is a multi-layer convolution structure, and its loss function Loss_end is defined in terms of: net3(K), the gray value of the pixel corresponding to pixel K in the result image output by the overall adjustment unit; GT(K), the gray value of the pixel corresponding to pixel K in the target image corresponding to the input image; and Loss_end, the image loss value of the result image relative to the target image. The gray value of a pixel is the sum of its red, green, and blue gray levels.
  • In summary, the present invention provides an image processing method including the following steps: Step S1, acquiring an image set that includes a plurality of image pairs, each image pair including an original image and a target image corresponding to the original image, the original image and the target image having the same size; Step S2, providing an image processing module with a neural-network architecture and training it with the image set, so that when the image processing module receives an original image, the difference between the result image it outputs and the target image of that original image is minimized; and Step S3, providing an image to be processed and inputting it into the trained image processing module to obtain the corresponding virtual HDR image. The method can realize virtual HDR display of high display quality simply and quickly.
  • the invention also provides an image processing system, which can realize high-quality virtual HDR display simply and quickly.
  • FIG. 2 is a structural diagram of an image processing system of the present invention
  • FIG. 3 is a structural diagram of an image processing module in the image processing system of the present invention.
  • the present invention provides an image processing method, including the following steps:
  • Step S1 Acquire an image set.
  • the image set includes a plurality of image pairs.
  • Each image pair includes an original image and a target image corresponding to the original image.
  • the original image and the target image have the same size.
  • the source of the image set may be an existing image library, or generated by a traditional algorithm, or processed by a retoucher.
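  • For concreteness, a minimal sketch of such a paired image set is shown below; the directory layout and file naming are assumptions made for illustration, since the text only requires matching original/target pairs of equal size.

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class ImagePairSet(Dataset):
    """Step S1 sketch: each sample is an (original, target) pair of equal size."""
    def __init__(self, original_dir, target_dir):
        self.names = sorted(os.listdir(original_dir))   # same file names in both folders (assumption)
        self.original_dir, self.target_dir = original_dir, target_dir
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        original = self.to_tensor(Image.open(os.path.join(self.original_dir, name)).convert("RGB"))
        target = self.to_tensor(Image.open(os.path.join(self.target_dir, name)).convert("RGB"))
        assert original.shape == target.shape           # pairs must have the same size
        return original, target
```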
  • Step S2: Provide an image processing module 30 with a neural-network architecture, and use the image set to train the image processing module 30, so that when it receives an original image, the difference between the result image it outputs and the target image of that original image is minimized.
  • the image processing module 30 includes an exposure repair unit 301, a color repair unit 302, and an overall adjustment unit 303 connected to both the exposure repair unit 301 and the color repair unit 302.
  • step S2 specifically includes:
  • Step S21: Input each original image and its target image into the exposure repair unit 301 and train the exposure repair unit 301, so that when it receives an original image, the difference between the exposure value of the first transition image it outputs and the exposure value of the target image corresponding to that original image is minimized.
  • Step S22: Input each original image and its target image into the color repair unit 302 and train the color repair unit 302, so that when it receives an original image, the difference between the color of the second transition image it outputs and the color of the target image corresponding to that original image is minimized.
  • Step S23: Input each first transition image output by the trained exposure repair unit 301, each second transition image output by the trained color repair unit 302, and each target image into the overall adjustment unit 303, and train the overall adjustment unit 303 so that, when it receives a first transition image and a second transition image generated from the same original image, the difference between the result image it outputs and the target image corresponding to that original image is minimized.
  • The exposure repair unit 301 adopts a multi-layer dilated convolution structure, and its loss function Loss_grey is defined in terms of the following quantities: K is one pixel of the input image, P is the set of all pixels of the input image, and N is the total number of pixels of the input image; net1_grey(K) is the gray channel value of the pixel corresponding to pixel K in the first transition image output by the exposure repair unit 301, and GT_grey(K) is the gray channel value of the pixel corresponding to pixel K in the target image corresponding to the input image; Loss_grey is the exposure loss value of the first transition image relative to the target image. The gray channel value is the average of the red, green, and blue gray levels of a pixel.
  • The color repair unit 302 is a multi-layer convolution structure, and its loss function Loss_color is defined in terms of: net2_rg(K) and net2_yb(K), the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the second transition image output by the color repair unit 302; GT_rg(K) and GT_yb(K), the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the target image corresponding to the input image; and Loss_color, the color loss value of the second transition image relative to the target image. The red-green channel value is the difference between the red and green gray levels of a pixel; the yellow-blue channel value is the difference between the yellow and blue gray levels of a pixel, where the yellow gray level is half of the sum of the red and green gray levels of that pixel.
  • The overall adjustment unit 303 is a multi-layer convolution structure, and its loss function Loss_end is defined in terms of: net3(K), the gray value of the pixel corresponding to pixel K in the result image output by the overall adjustment unit 303; GT(K), the gray value of the pixel corresponding to pixel K in the target image corresponding to the input image; and Loss_end, the image loss value of the result image relative to the target image. The gray value of a pixel is the sum of its red, green, and blue gray levels.
  • The size of the convolution kernel in each layer of the exposure repair unit 301 is 3×3, the depth is greater than 20, and the dilation rate of the convolution kernel increases from layer to layer. A preferred embodiment is shown in Table 1; of course, the present invention is not limited to this, and other embodiments than Table 1 may be used if necessary. Building the exposure repair unit 301 with multi-layer dilated convolution enlarges the receptive field of the neural network without increasing the number of network parameters.
  • The size of the convolution kernel in each layer of the color repair unit 302 is 3*3 and the depth is greater than 20; since the convolutions are not dilated, the dilation rate is 1. A preferred embodiment is shown in Table 2; the present invention is not limited to this, and other embodiments than Table 2 may be adopted if necessary.
  • The size of the convolution kernel in each layer of the overall adjustment unit 303 is 3*3 and the depth is greater than 20; since the convolutions are not dilated, the dilation rate is 1. A preferred embodiment is shown in Table 3; the present invention is not limited to this, and other embodiments than Table 3 may be adopted if necessary.
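  • Read together with Tables 1-3 in the description below, the three units can be sketched as follows (PyTorch, for illustration only; the activation function, the padding scheme, and the extra 3-channel output layer appended to the color repair unit so that it emits an image are assumptions not stated in the text):

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, dilation):
    # 3x3 convolution; padding = dilation keeps the spatial size unchanged
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=dilation, dilation=dilation),
        nn.ReLU(inplace=True),                 # activation choice is an assumption
    )

def exposure_repair_unit():
    """Table 1: 3x3 layers with dilation rates 1, 2, 5, 7, 17, 33, 67, 127
    at depth 32, followed by a final dilation-1 layer of depth 3."""
    rates = [1, 2, 5, 7, 17, 33, 67, 127]
    layers, ch = [], 3
    for r in rates:
        layers.append(conv_block(ch, 32, dilation=r))
        ch = 32
    layers.append(nn.Conv2d(32, 3, kernel_size=3, padding=1))
    return nn.Sequential(*layers)

def plain_unit(hidden_layers, in_ch=3):
    """Tables 2 and 3: ordinary (dilation 1) 3x3 convolutions of depth 32."""
    layers, ch = [], in_ch
    for _ in range(hidden_layers):
        layers.append(conv_block(ch, 32, dilation=1))
        ch = 32
    layers.append(nn.Conv2d(32, 3, kernel_size=3, padding=1))   # 3-channel output (assumption for Table 2)
    return nn.Sequential(*layers)

color_repair_unit = plain_unit(hidden_layers=7)                 # Table 2
overall_adjustment_unit = plain_unit(hidden_layers=3, in_ch=6)  # Table 3; 6 input channels assume concatenated branches
```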
  • When training the exposure repair unit 301, the network optimization is set to back-propagation of gradients, and the parameters of the exposure repair unit 301 are trained according to its loss function, so that when the exposure repair unit 301 receives an original image, the difference between the exposure value of the first transition image it outputs and the exposure value of the target image corresponding to that original image is minimized.
  • When training the color repair unit 302, the network optimization is set to back-propagation of gradients, and the parameters of the color repair unit 302 are trained according to its loss function, so that when the color repair unit 302 receives an original image, the difference between the color of the second transition image it outputs and the color of the target image corresponding to that original image is minimized.
  • When training the overall adjustment unit 303, the network optimization is set to back-propagation of gradients, and the parameters of the overall adjustment unit 303 are trained according to its loss function, so that the difference between the result image it outputs and the target image corresponding to the original image is minimized.
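  • A minimal back-propagation training sketch for one unit is shown below; the optimizer, learning rate, and epoch count are placeholders not specified in the text. The exposure repair unit would be trained with the exposure loss, the color repair unit with the color loss, and, once both branches are trained, the overall adjustment unit with the image loss on the fused transition images (fusion by concatenation again being an assumption).

```python
import torch

def train_unit(unit, loss_fn, pair_loader, epochs=10, lr=1e-4):
    """Train one sub-network against its own loss by back-propagating gradients."""
    optimiser = torch.optim.Adam(unit.parameters(), lr=lr)   # optimizer choice is an assumption
    for _ in range(epochs):
        for original, target in pair_loader:
            output = unit(original)
            loss = loss_fn(output, target)
            optimiser.zero_grad()
            loss.backward()                                   # reverse gradient propagation
            optimiser.step()
    return unit
```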
  • Step S3: Provide an image to be processed, input it into the trained image processing module 30, and obtain the virtual HDR image corresponding to the image to be processed.
  • In step S3, the image to be processed is fed into both the exposure repair unit 301 and the color repair unit 302; after processing by the exposure repair unit 301, the color repair unit 302, and the overall adjustment unit 303, the virtual HDR image corresponding to the image to be processed is output from the overall adjustment unit 303.
  • Thus, based on the characteristics of HDR images and on the two goals of reasonable exposure and reduced color cast, the present invention designs a two-branch neural network that implements an effective virtual HDR algorithm, achieving high-quality virtual HDR display simply and quickly: the virtual HDR display attains reasonable exposure, shows more detail, and reduces color cast, while the computation is fast.
  • Referring to FIG. 2 and FIG. 3, the present invention also provides an image processing system, including a sample acquisition module 10, a training module 20 connected to the sample acquisition module 10, and an image processing module 30 with a neural-network architecture connected to the training module 20.
  • The sample acquisition module 10 is used to acquire an image set; the image set includes a plurality of image pairs, each image pair includes an original image and a target image corresponding to the original image, and the original image and the target image have the same size.
  • The training module 20 is configured to train the image processing module 30 with the image set, so that when the image processing module 30 receives an original image, the difference between the result image it outputs and the target image of that original image is minimized.
  • The image processing module 30 is configured, after being trained by the training module 20, to process an image to be processed and generate the virtual HDR image corresponding to it.
  • the source of the image set may be an existing image library, or generated by a traditional algorithm, or processed by a retoucher.
  • the image processing module 30 includes an exposure repair unit 301, a color repair unit 302, and an overall adjustment unit 303 connected to both the exposure repair unit 301 and the color repair unit 302.
  • The training module 20 trains the image processing module 30 as follows:
  • Each first transition image output by the trained exposure repair unit 301, each second transition image output by the trained color repair unit 302, and each target image are input into the overall adjustment unit 303, and the overall adjustment unit 303 is trained so that, when it receives a first transition image and a second transition image generated from the same original image, the difference between the result image it outputs and the target image corresponding to that original image is minimized.
  • The exposure repair unit 301 adopts a multi-layer dilated convolution structure, and its loss function Loss_grey is defined in terms of the following quantities: K is one pixel of the input image, P is the set of all pixels of the input image, and N is the total number of pixels of the input image; net1_grey(K) is the gray channel value of the pixel corresponding to pixel K in the first transition image output by the exposure repair unit 301, and GT_grey(K) is the gray channel value of the pixel corresponding to pixel K in the target image corresponding to the input image; Loss_grey is the exposure loss value of the first transition image relative to the target image. The gray channel value is the average of the red, green, and blue gray levels of a pixel.
  • The color repair unit 302 is a multi-layer convolution structure, and its loss function Loss_color is defined in terms of: net2_rg(K) and net2_yb(K), the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the second transition image output by the color repair unit 302; GT_rg(K) and GT_yb(K), the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the target image corresponding to the input image; and Loss_color, the average of the per-pixel color losses of the second transition image relative to the target image. The red-green channel value is the difference between the red and green gray levels of a pixel; the yellow-blue channel value is the difference between the yellow and blue gray levels of a pixel, where the yellow gray level is half of the sum of the red and green gray levels of that pixel.
  • The overall adjustment unit 303 is a multi-layer convolution structure, and its loss function Loss_end is defined in terms of: net3(K), the gray value of the pixel corresponding to pixel K in the result image output by the overall adjustment unit 303; GT(K), the gray value of the pixel corresponding to pixel K in the target image corresponding to the input image; and Loss_end, the image loss value of the result image relative to the target image. The gray value of a pixel is the sum of its red, green, and blue gray levels.
  • The size of the convolution kernel in each layer of the exposure repair unit 301 is 3×3, the depth is greater than 20, and the dilation rate of the convolution kernel increases from layer to layer. A preferred embodiment is shown in Table 1; of course, the present invention is not limited to this, and other embodiments than Table 1 may be used if necessary. Building the exposure repair unit 301 with multi-layer dilated convolution enlarges the receptive field of the neural network without increasing the number of network parameters.
  • The size of the convolution kernel in each layer of the color repair unit 302 is 3*3 and the depth is greater than 20; since the convolutions are not dilated, the dilation rate is 1. A preferred embodiment is shown in Table 2; the present invention is not limited to this, and other embodiments than Table 2 may be adopted if necessary.
  • The size of the convolution kernel in each layer of the overall adjustment unit 303 is 3*3 and the depth is greater than 20; since the convolutions are not dilated, the dilation rate is 1. A preferred embodiment is shown in Table 3; the present invention is not limited to this, and other embodiments than Table 3 may be adopted if necessary.
  • When training the exposure repair unit 301, the network optimization is set to back-propagation of gradients, and the parameters of the exposure repair unit 301 are trained according to its loss function, so that when the exposure repair unit 301 receives an original image, the difference between the exposure value of the first transition image it outputs and the exposure value of the target image corresponding to that original image is minimized.
  • When training the color repair unit 302, the network optimization is set to back-propagation of gradients, and the parameters of the color repair unit 302 are trained according to its loss function, so that when the color repair unit 302 receives an original image, the difference between the color of the second transition image it outputs and the color of the target image corresponding to that original image is minimized.
  • When training the overall adjustment unit 303, the network optimization is set to back-propagation of gradients, and the parameters of the overall adjustment unit 303 are trained according to its loss function, so that the difference between the result image it outputs and the target image corresponding to the original image is minimized.
  • The image processing module 30 receives the image to be processed at the exposure repair unit 301 and the color repair unit 302; after processing by the exposure repair unit 301, the color repair unit 302, and the overall adjustment unit 303, the virtual HDR image corresponding to the image to be processed is output from the overall adjustment unit 303.
  • Thus, based on the characteristics of HDR images and on the two goals of reasonable exposure and reduced color cast, the present invention designs a two-branch neural network that implements an effective virtual HDR algorithm, achieving high-quality virtual HDR display simply and quickly: the virtual HDR display attains reasonable exposure, shows more detail, and reduces color cast, while the computation is fast.
  • In summary, the present invention provides an image processing method including the following steps: Step S1, acquiring an image set that includes a plurality of image pairs, each image pair including an original image and a target image corresponding to the original image, the original image and the target image having the same size; Step S2, providing an image processing module with a neural-network architecture and training it with the image set, so that when the image processing module receives an original image, the difference between the result image it outputs and the target image of that original image is minimized; and Step S3, providing an image to be processed and inputting it into the trained image processing module to obtain the corresponding virtual HDR image. The method can realize virtual HDR display of high display quality simply and quickly.
  • the invention also provides an image processing system, which can realize high-quality virtual HDR display simply and quickly.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides an image processing method and an image processing system. The image processing method includes the following steps: Step S1, acquiring an image set, the image set including a plurality of image pairs, each image pair including an original image and a target image corresponding to the original image, the original image and the target image having the same size; Step S2, providing an image processing module with a neural-network architecture and training the image processing module with the image set, so that when the image processing module receives an original image, the difference between the result image it outputs and the target image of that original image is minimized; Step S3, providing an image to be processed and inputting it into the trained image processing module to obtain the virtual HDR image corresponding to the image to be processed. The method can realize virtual HDR display of high display quality simply and quickly.

Description

Image processing method and image processing system

Technical Field

The present invention relates to the field of display technology, and in particular to an image processing method and an image processing system.
Background Art

In recent years, the application of high-dynamic-range (HDR) images has gradually become a popular research topic in the field of image processing. HDR images can accurately describe differences in brightness, from faint starlight to bright sunlight, thereby providing viewers with a higher-quality viewing experience.

However, because current HDR-related equipment is usually expensive, its practical range of application is limited, and most image processing systems are still equipped with traditional LDR devices. At the same time, the equipment for acquiring HDR images and video is also very complex and expensive, and many defects remain unsolved; for example, multi-exposure capture devices often introduce image artifacts. Virtual HDR (pseudo-HDR) therefore emerged. Virtual HDR refers to processing a low-dynamic-range (LDR) picture into another LDR picture through a tone-mapping algorithm and displaying it on an LDR display device, so that the result reaches or approaches the display effect of an HDR picture. The essence of such an algorithm is to bring out the detail and contrast in the picture to the greatest possible extent. It requires neither expensive HDR display equipment nor hard-to-obtain HDR sources.

There are two difficulties in a virtual HDR algorithm: first, whether the exposure is reasonable, that is, whether the details are shown to the greatest extent; second, whether a serious color cast exists. Existing virtual HDR methods include histogram-based, multi-exposure-fusion-based, and dark-channel-based methods, among others; each class of method suits particular types of pictures, and an algorithm that handles all picture types becomes extremely complicated.

A neural network is a mathematical model that processes information with a structure similar to the synaptic connections of the brain. Its greatest advantage is that it can be used as a mechanism for approximating arbitrary functions, "learning" from observed data. In recent years, neural networks have been used increasingly in image algorithms such as super-resolution, denoising, and style transfer. However, the architecture design and the loss function settings of the network often play a decisive role in the results. In practice, the design of the network architecture is not a matter of the more complicated the better; more and more evidence shows that a suitable and simple architecture is often what is needed.
Summary of the Invention

An object of the present invention is to provide an image processing method that can realize high-quality virtual HDR display simply and quickly.

Another object of the present invention is to provide an image processing system that can realize high-quality virtual HDR display simply and quickly.
To achieve the above objects, the present invention provides an image processing method including the following steps:

Step S1: acquiring an image set, the image set including a plurality of image pairs, each image pair including an original image and a target image corresponding to the original image, the original image and the target image having the same size;

Step S2: providing an image processing module with a neural-network architecture and training the image processing module with the image set, so that when the image processing module receives an original image, the difference between the result image it outputs and the target image of that original image is minimized;

Step S3: providing an image to be processed and inputting it into the trained image processing module to obtain the virtual HDR image corresponding to the image to be processed.

The image processing module includes an exposure repair unit, a color repair unit, and an overall adjustment unit connected to both the exposure repair unit and the color repair unit.

Step S2 specifically includes:

Step S21: inputting each original image and its target image into the exposure repair unit and training the exposure repair unit, so that when it receives an original image, the difference between the exposure value of the first transition image it outputs and the exposure value of the target image corresponding to that original image is minimized;

Step S22: inputting each original image and its target image into the color repair unit and training the color repair unit, so that when it receives an original image, the difference between the color of the second transition image it outputs and the color of the target image corresponding to that original image is minimized;

Step S23: inputting each first transition image output by the trained exposure repair unit, each second transition image output by the trained color repair unit, and each target image into the overall adjustment unit, and training the overall adjustment unit so that, when it receives a first transition image and a second transition image generated from the same original image, the difference between the result image it outputs and the target image corresponding to that original image is minimized.

In step S3, the image to be processed is fed into both the exposure repair unit and the color repair unit; after processing by the exposure repair unit, the color repair unit, and the overall adjustment unit, the virtual HDR image corresponding to the image to be processed is output from the overall adjustment unit.

The exposure repair unit adopts a multi-layer dilated convolution structure, and its loss function is:

[Formula image: PCTCN2019075966-appb-000001]

where K is one pixel of the input image, P is the set of all pixels of the input image, N is the total number of pixels of the input image, net1_grey(K) is the gray channel value of the pixel corresponding to pixel K in the first transition image output by the exposure repair unit, GT_grey(K) is the gray channel value of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_grey is the exposure loss value of the first transition image relative to the target image; the gray channel value is the average of the red, green, and blue gray levels of a pixel.

The color repair unit is a multi-layer convolution structure, and its loss function is:

[Formula image: PCTCN2019075966-appb-000002]

where net2_rg(K) and net2_yb(K) are the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the second transition image output by the color repair unit, GT_rg(K) and GT_yb(K) are the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_color is the color loss value of the second transition image relative to the target image; the red-green channel value is the difference between the red and green gray levels of a pixel, the yellow-blue channel value is the difference between the yellow and blue gray levels of a pixel, and the yellow gray level is half of the sum of the red and green gray levels of that pixel.

The overall adjustment unit is a multi-layer convolution structure, and its loss function is:

[Formula image: PCTCN2019075966-appb-000003]

where net3(K) is the gray value of the pixel corresponding to pixel K in the result image output by the overall adjustment unit, GT(K) is the gray value of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_end is the image loss value of the result image relative to the target image; the gray value of a pixel is the sum of its red, green, and blue gray levels.
The present invention also provides an image processing system, including a sample acquisition module, a training module connected to the sample acquisition module, and an image processing module with a neural-network architecture connected to the training module.

The sample acquisition module is used to acquire an image set; the image set includes a plurality of image pairs, each image pair includes an original image and a target image corresponding to the original image, and the original image and the target image have the same size.

The training module is used to train the image processing module with the image set, so that when the image processing module receives an original image, the difference between the result image it outputs and the target image of that original image is minimized.

The image processing module is configured, after being trained by the training module, to process an image to be processed and generate the virtual HDR image corresponding to it.

The image processing module includes an exposure repair unit, a color repair unit, and an overall adjustment unit connected to both the exposure repair unit and the color repair unit.

The training module trains the image processing module as follows:

each original image and its target image are input into the exposure repair unit, and the exposure repair unit is trained so that, when it receives an original image, the difference between the exposure value of the first transition image it outputs and the exposure value of the target image corresponding to that original image is minimized;

each original image and its target image are input into the color repair unit, and the color repair unit is trained so that, when it receives an original image, the difference between the color of the second transition image it outputs and the color of the target image corresponding to that original image is minimized;

each first transition image output by the trained exposure repair unit, each second transition image output by the trained color repair unit, and each target image are input into the overall adjustment unit, and the overall adjustment unit is trained so that, when it receives a first transition image and a second transition image generated from the same original image, the difference between the result image it outputs and the target image corresponding to that original image is minimized.

The image processing module receives the image to be processed at the exposure repair unit and the color repair unit; after processing by the exposure repair unit, the color repair unit, and the overall adjustment unit, the virtual HDR image corresponding to the image to be processed is output from the overall adjustment unit.

The exposure repair unit adopts a multi-layer dilated convolution structure, and its loss function is:

[Formula image: PCTCN2019075966-appb-000004]

where K is one pixel of the input image, P is the set of all pixels of the input image, N is the total number of pixels of the input image, net1_grey(K) is the gray channel value of the pixel corresponding to pixel K in the first transition image output by the exposure repair unit, GT_grey(K) is the gray channel value of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_grey is the exposure loss value of the first transition image relative to the target image; the gray channel value is the average of the red, green, and blue gray levels of a pixel.

The color repair unit is a multi-layer convolution structure, and its loss function is:

[Formula image: PCTCN2019075966-appb-000005]

where net2_rg(K) and net2_yb(K) are the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the second transition image output by the color repair unit, GT_rg(K) and GT_yb(K) are the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_color is the color loss value of the second transition image relative to the target image; the red-green channel value is the difference between the red and green gray levels of a pixel, the yellow-blue channel value is the difference between the yellow and blue gray levels of a pixel, and the yellow gray level is half of the sum of the red and green gray levels of that pixel.

The overall adjustment unit is a multi-layer convolution structure, and its loss function is:

[Formula image: PCTCN2019075966-appb-000006]

where net3(K) is the gray value of the pixel corresponding to pixel K in the result image output by the overall adjustment unit, GT(K) is the gray value of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_end is the image loss value of the result image relative to the target image; the gray value of a pixel is the sum of its red, green, and blue gray levels.
Beneficial effects of the present invention: the present invention provides an image processing method including the following steps: Step S1, acquiring an image set, the image set including a plurality of image pairs, each image pair including an original image and a target image corresponding to the original image, the original image and the target image having the same size; Step S2, providing an image processing module with a neural-network architecture and training it with the image set, so that when the image processing module receives an original image, the difference between the result image it outputs and the target image of that original image is minimized; Step S3, providing an image to be processed and inputting it into the trained image processing module to obtain the virtual HDR image corresponding to the image to be processed. The method can realize virtual HDR display of high display quality simply and quickly. The present invention also provides an image processing system, which can realize high-quality virtual HDR display simply and quickly.
Brief Description of the Drawings

To further understand the features and technical content of the present invention, reference is made to the following detailed description and accompanying drawings; the drawings are provided for reference and illustration only and are not intended to limit the present invention.

In the drawings,

FIG. 1 is a flowchart of the image processing method of the present invention;

FIG. 2 is a structural diagram of the image processing system of the present invention;

FIG. 3 is a structural diagram of the image processing module in the image processing system of the present invention.

Detailed Description of the Embodiments

To further explain the technical means adopted by the present invention and their effects, a detailed description is given below with reference to the preferred embodiments of the present invention and the accompanying drawings.
Referring to FIG. 1 in conjunction with FIG. 3, the present invention provides an image processing method including the following steps:

Step S1: acquire an image set; the image set includes a plurality of image pairs, each image pair includes an original image and a target image corresponding to the original image, and the original image and the target image have the same size.

Specifically, the image set may come from an existing image library, be generated by a traditional algorithm, or be prepared by a retoucher.

Step S2: provide an image processing module 30 with a neural-network architecture, and use the image set to train the image processing module 30 so that, when it receives an original image, the difference between the result image it outputs and the target image of that original image is minimized.

Specifically, the image processing module 30 includes an exposure repair unit 301, a color repair unit 302, and an overall adjustment unit 303 connected to both the exposure repair unit 301 and the color repair unit 302.

Further, step S2 specifically includes:

Step S21: input each original image and its target image into the exposure repair unit 301 and train the exposure repair unit 301, so that when it receives an original image, the difference between the exposure value of the first transition image it outputs and the exposure value of the target image corresponding to that original image is minimized;

Step S22: input each original image and its target image into the color repair unit 302 and train the color repair unit 302, so that when it receives an original image, the difference between the color of the second transition image it outputs and the color of the target image corresponding to that original image is minimized;

Step S23: input each first transition image output by the trained exposure repair unit 301, each second transition image output by the trained color repair unit 302, and each target image into the overall adjustment unit 303, and train the overall adjustment unit 303 so that, when it receives a first transition image and a second transition image generated from the same original image, the difference between the result image it outputs and the target image corresponding to that original image is minimized.
Specifically, the exposure repair unit 301 adopts a multi-layer dilated convolution structure, and its loss function is:

[Formula image: PCTCN2019075966-appb-000007]

where K is one pixel of the input image, P is the set of all pixels of the input image, N is the total number of pixels of the input image, net1_grey(K) is the gray channel value of the pixel corresponding to pixel K in the first transition image output by the exposure repair unit 301, GT_grey(K) is the gray channel value of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_grey is the exposure loss value of the first transition image relative to the target image; the gray channel value is the average of the red, green, and blue gray levels of a pixel.

The color repair unit 302 is a multi-layer convolution structure, and its loss function is:

[Formula image: PCTCN2019075966-appb-000008]

where net2_rg(K) and net2_yb(K) are the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the second transition image output by the color repair unit 302, GT_rg(K) and GT_yb(K) are the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_color is the color loss value of the second transition image relative to the target image; the red-green channel value is the difference between the red and green gray levels of a pixel, the yellow-blue channel value is the difference between the yellow and blue gray levels of a pixel, and the yellow gray level is half of the sum of the red and green gray levels of that pixel.

The overall adjustment unit 303 is a multi-layer convolution structure, and its loss function is:

[Formula image: PCTCN2019075966-appb-000009]

where net3(K) is the gray value of the pixel corresponding to pixel K in the result image output by the overall adjustment unit 303, GT(K) is the gray value of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_end is the image loss value of the result image relative to the target image; the gray value of a pixel is the sum of its red, green, and blue gray levels.
Further, the size of the convolution kernel in each layer of the exposure repair unit 301 is 3×3, the depth is greater than 20, and the dilation rate of the convolution kernel increases from layer to layer; a preferred embodiment is shown in Table 1. Of course, the present invention is not limited to this, and other embodiments than Table 1 may be used if necessary. Building the exposure repair unit 301 with multi-layer dilated convolution enlarges the receptive field of the neural network without increasing the number of network parameters.

Layer   Kernel   Dilation rate   Kernel depth
1       3*3      1               32
2       3*3      2               32
3       3*3      5               32
4       3*3      7               32
5       3*3      17              32
6       3*3      33              32
7       3*3      67              32
8       3*3      127             32
9       3*3      1               3

Table 1
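To make the receptive-field claim concrete: for a stack of 3×3 convolutions with stride 1, each layer widens the receptive field by 2 × (dilation rate) pixels, so the Table 1 rates give a field of 521 × 521 pixels while every layer still carries only a 3×3 kernel's worth of weights per channel pair. The stride-1 assumption and the standard receptive-field formula are ours, not stated in the text; a quick check:

```python
# Receptive field of the stacked 3x3 dilated convolutions listed in Table 1
rates = [1, 2, 5, 7, 17, 33, 67, 127, 1]
receptive_field = 1 + 2 * sum(rates)   # each 3x3 layer adds 2 * rate pixels
print(receptive_field)                 # 521
```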
The size of the convolution kernel in each layer of the color repair unit 302 is 3*3 and the depth is greater than 20; since the convolutions are not dilated, the dilation rate is 1. A preferred embodiment is shown in Table 2. Of course, the present invention is not limited to this, and other embodiments than Table 2 may be adopted if necessary.

Layer   Kernel   Dilation rate   Kernel depth
1       3*3      1               32
2       3*3      1               32
3       3*3      1               32
4       3*3      1               32
5       3*3      1               32
6       3*3      1               32
7       3*3      1               32

Table 2
The size of the convolution kernel in each layer of the overall adjustment unit 303 is 3*3 and the depth is greater than 20; since the convolutions are not dilated, the dilation rate is 1. A preferred embodiment is shown in Table 3. Of course, the present invention is not limited to this, and other embodiments than Table 3 may be adopted if necessary.

Layer   Kernel   Dilation rate   Kernel depth
1       3*3      1               32
2       3*3      1               32
3       3*3      1               32
4       3*3      1               3

Table 3
Specifically, when training the exposure repair unit 301, the network optimization is set to back-propagation of gradients, and the parameters of the exposure repair unit 301 are trained according to its loss function, so that when the exposure repair unit 301 receives an original image, the difference between the exposure value of the first transition image it outputs and the exposure value of the target image corresponding to that original image is minimized.

Specifically, when training the color repair unit 302, the network optimization is set to back-propagation of gradients, and the parameters of the color repair unit 302 are trained according to its loss function, so that when the color repair unit 302 receives an original image, the difference between the color of the second transition image it outputs and the color of the target image corresponding to that original image is minimized.

Specifically, when training the overall adjustment unit 303, the network optimization is set to back-propagation of gradients, and the parameters of the overall adjustment unit 303 are trained according to its loss function, so that the difference between the result image it outputs and the target image corresponding to the original image is minimized.
Step S3: provide an image to be processed, input it into the trained image processing module 30, and obtain the virtual HDR image corresponding to the image to be processed.

Specifically, in step S3 the image to be processed is fed into both the exposure repair unit 301 and the color repair unit 302; after processing by the exposure repair unit 301, the color repair unit 302, and the overall adjustment unit 303, the virtual HDR image corresponding to the image to be processed is output from the overall adjustment unit 303.

Thus, based on the characteristics of HDR images and on the two goals of reasonable exposure and reduced color cast, the present invention designs a two-branch neural network that implements an effective virtual HDR algorithm, thereby achieving high-quality virtual HDR display simply and quickly: the virtual HDR display attains reasonable exposure, shows more detail, and reduces color cast, while the computation is fast.
Referring to FIG. 2 and FIG. 3, the present invention also provides an image processing system, including a sample acquisition module 10, a training module 20 connected to the sample acquisition module 10, and an image processing module 30 with a neural-network architecture connected to the training module 20.

The sample acquisition module 10 is used to acquire an image set; the image set includes a plurality of image pairs, each image pair includes an original image and a target image corresponding to the original image, and the original image and the target image have the same size.

The training module 20 is used to train the image processing module 30 with the image set, so that when the image processing module 30 receives an original image, the difference between the result image it outputs and the target image of that original image is minimized.

The image processing module 30 is configured, after being trained by the training module 20, to process an image to be processed and generate the virtual HDR image corresponding to it.

Specifically, the image set may come from an existing image library, be generated by a traditional algorithm, or be prepared by a retoucher.

Specifically, the image processing module 30 includes an exposure repair unit 301, a color repair unit 302, and an overall adjustment unit 303 connected to both the exposure repair unit 301 and the color repair unit 302.

Further, the training module 20 trains the image processing module 30 as follows:

each original image and its target image are input into the exposure repair unit 301, and the exposure repair unit 301 is trained so that, when it receives an original image, the difference between the exposure value of the first transition image it outputs and the exposure value of the target image corresponding to that original image is minimized;

each original image and its target image are input into the color repair unit 302, and the color repair unit 302 is trained so that, when it receives an original image, the difference between the color of the second transition image it outputs and the color of the target image corresponding to that original image is minimized;

each first transition image output by the trained exposure repair unit 301, each second transition image output by the trained color repair unit 302, and each target image are input into the overall adjustment unit 303, and the overall adjustment unit 303 is trained so that, when it receives a first transition image and a second transition image generated from the same original image, the difference between the result image it outputs and the target image corresponding to that original image is minimized.
Specifically, the exposure repair unit 301 adopts a multi-layer dilated convolution structure, and its loss function is:

[Formula image: PCTCN2019075966-appb-000010]

where K is one pixel of the input image, P is the set of all pixels of the input image, N is the total number of pixels of the input image, net1_grey(K) is the gray channel value of the pixel corresponding to pixel K in the first transition image output by the exposure repair unit 301, GT_grey(K) is the gray channel value of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_grey is the exposure loss value of the first transition image relative to the target image; the gray channel value is the average of the red, green, and blue gray levels of a pixel.

The color repair unit 302 is a multi-layer convolution structure, and its loss function is:

[Formula image: PCTCN2019075966-appb-000011]

where net2_rg(K) and net2_yb(K) are the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the second transition image output by the color repair unit 302, GT_rg(K) and GT_yb(K) are the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_color is the average of the per-pixel color losses of the second transition image relative to the target image; the red-green channel value is the difference between the red and green gray levels of a pixel, the yellow-blue channel value is the difference between the yellow and blue gray levels of a pixel, and the yellow gray level is half of the sum of the red and green gray levels of that pixel.

The overall adjustment unit 303 is a multi-layer convolution structure, and its loss function is:

[Formula image: PCTCN2019075966-appb-000012]

where net3(K) is the gray value of the pixel corresponding to pixel K in the result image output by the overall adjustment unit 303, GT(K) is the gray value of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_end is the image loss value of the result image relative to the target image; the gray value of a pixel is the sum of its red, green, and blue gray levels.
Further, the size of the convolution kernel in each layer of the exposure repair unit 301 is 3×3, the depth is greater than 20, and the dilation rate of the convolution kernel increases from layer to layer; a preferred embodiment is shown in Table 1. Of course, the present invention is not limited to this, and other embodiments than Table 1 may be used if necessary. Building the exposure repair unit 301 with multi-layer dilated convolution enlarges the receptive field of the neural network without increasing the number of network parameters.

Layer   Kernel   Dilation rate   Kernel depth
1       3*3      1               32
2       3*3      2               32
3       3*3      5               32
4       3*3      7               32
5       3*3      17              32
6       3*3      33              32
7       3*3      67              32
8       3*3      127             32
9       3*3      1               3

Table 1
The size of the convolution kernel in each layer of the color repair unit 302 is 3*3 and the depth is greater than 20; since the convolutions are not dilated, the dilation rate is 1. A preferred embodiment is shown in Table 2. Of course, the present invention is not limited to this, and other embodiments than Table 2 may be adopted if necessary.

Layer   Kernel   Dilation rate   Kernel depth
1       3*3      1               32
2       3*3      1               32
3       3*3      1               32
4       3*3      1               32
5       3*3      1               32
6       3*3      1               32
7       3*3      1               32

Table 2
The size of the convolution kernel in each layer of the overall adjustment unit 303 is 3*3 and the depth is greater than 20; since the convolutions are not dilated, the dilation rate is 1. A preferred embodiment is shown in Table 3. Of course, the present invention is not limited to this, and other embodiments than Table 3 may be adopted if necessary.

Layer   Kernel   Dilation rate   Kernel depth
1       3*3      1               32
2       3*3      1               32
3       3*3      1               32
4       3*3      1               3

Table 3
Specifically, when training the exposure repair unit 301, the network optimization is set to back-propagation of gradients, and the parameters of the exposure repair unit 301 are trained according to its loss function, so that when the exposure repair unit 301 receives an original image, the difference between the exposure value of the first transition image it outputs and the exposure value of the target image corresponding to that original image is minimized.

Specifically, when training the color repair unit 302, the network optimization is set to back-propagation of gradients, and the parameters of the color repair unit 302 are trained according to its loss function, so that when the color repair unit 302 receives an original image, the difference between the color of the second transition image it outputs and the color of the target image corresponding to that original image is minimized.

Specifically, when training the overall adjustment unit 303, the network optimization is set to back-propagation of gradients, and the parameters of the overall adjustment unit 303 are trained according to its loss function, so that the difference between the result image it outputs and the target image corresponding to the original image is minimized.
The image processing module 30 receives the image to be processed at the exposure repair unit 301 and the color repair unit 302; after processing by the exposure repair unit 301, the color repair unit 302, and the overall adjustment unit 303, the virtual HDR image corresponding to the image to be processed is output from the overall adjustment unit 303.

Thus, based on the characteristics of HDR images and on the two goals of reasonable exposure and reduced color cast, the present invention designs a two-branch neural network that implements an effective virtual HDR algorithm, thereby achieving high-quality virtual HDR display simply and quickly: the virtual HDR display attains reasonable exposure, shows more detail, and reduces color cast, while the computation is fast.

In summary, the present invention provides an image processing method including the following steps: Step S1, acquiring an image set, the image set including a plurality of image pairs, each image pair including an original image and a target image corresponding to the original image, the original image and the target image having the same size; Step S2, providing an image processing module with a neural-network architecture and training it with the image set, so that when the image processing module receives an original image, the difference between the result image it outputs and the target image of that original image is minimized; Step S3, providing an image to be processed and inputting it into the trained image processing module to obtain the virtual HDR image corresponding to the image to be processed. The method can realize virtual HDR display of high display quality simply and quickly. The present invention also provides an image processing system, which can realize high-quality virtual HDR display simply and quickly.

As described above, a person of ordinary skill in the art may make various other corresponding changes and modifications according to the technical solution and technical concept of the present invention, and all such changes and modifications shall fall within the protection scope of the claims of the present invention.

Claims (10)

  1. An image processing method, comprising the following steps:
    Step S1: acquiring an image set, the image set comprising a plurality of image pairs, each image pair comprising an original image and a target image corresponding to the original image, the original image and the target image having the same size;
    Step S2: providing an image processing module with a neural-network architecture and training the image processing module with the image set, so that when the image processing module receives an original image, the difference between the result image it outputs and the target image of that original image is minimized;
    Step S3: providing an image to be processed and inputting it into the trained image processing module to obtain a virtual HDR image corresponding to the image to be processed.
  2. The image processing method according to claim 1, wherein the image processing module comprises an exposure repair unit, a color repair unit, and an overall adjustment unit connected to both the exposure repair unit and the color repair unit.
  3. The image processing method according to claim 2, wherein step S2 specifically comprises:
    Step S21: inputting each original image and its target image into the exposure repair unit and training the exposure repair unit, so that when it receives an original image, the difference between the exposure value of the first transition image it outputs and the exposure value of the target image corresponding to that original image is minimized;
    Step S22: inputting each original image and its target image into the color repair unit and training the color repair unit, so that when it receives an original image, the difference between the color of the second transition image it outputs and the color of the target image corresponding to that original image is minimized;
    Step S23: inputting each first transition image output by the trained exposure repair unit, each second transition image output by the trained color repair unit, and each target image into the overall adjustment unit, and training the overall adjustment unit so that, when it receives a first transition image and a second transition image generated from the same original image, the difference between the result image it outputs and the target image corresponding to that original image is minimized.
  4. The image processing method according to claim 3, wherein in step S3 the image to be processed is fed into both the exposure repair unit and the color repair unit, and after processing by the exposure repair unit, the color repair unit, and the overall adjustment unit, the virtual HDR image corresponding to the image to be processed is output from the overall adjustment unit.
  5. The image processing method according to claim 3, wherein the exposure repair unit adopts a multi-layer dilated convolution structure and its loss function is:
    [Formula image: PCTCN2019075966-appb-100001]
    where K is one pixel of the input image, P is the set of all pixels of the input image, N is the total number of pixels of the input image, net1_grey(K) is the gray channel value of the pixel corresponding to pixel K in the first transition image output by the exposure repair unit, GT_grey(K) is the gray channel value of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_grey is the exposure loss value of the first transition image relative to the target image; the gray channel value is the average of the red, green, and blue gray levels of a pixel;
    the color repair unit is a multi-layer convolution structure and its loss function is:
    [Formula image: PCTCN2019075966-appb-100002]
    where net2_rg(K) and net2_yb(K) are the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the second transition image output by the color repair unit, GT_rg(K) and GT_yb(K) are the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_color is the color loss value of the second transition image relative to the target image; the red-green channel value is the difference between the red and green gray levels of a pixel, the yellow-blue channel value is the difference between the yellow and blue gray levels of a pixel, and the yellow gray level is half of the sum of the red and green gray levels of that pixel;
    the overall adjustment unit is a multi-layer convolution structure and its loss function is:
    [Formula image: PCTCN2019075966-appb-100003]
    where net3(K) is the gray value of the pixel corresponding to pixel K in the result image output by the overall adjustment unit, GT(K) is the gray value of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_end is the image loss value of the result image relative to the target image; the gray value of a pixel is the sum of its red, green, and blue gray levels.
  6. An image processing system, comprising a sample acquisition module, a training module connected to the sample acquisition module, and an image processing module with a neural-network architecture connected to the training module;
    the sample acquisition module being used to acquire an image set, the image set comprising a plurality of image pairs, each image pair comprising an original image and a target image corresponding to the original image, the original image and the target image having the same size;
    the training module being used to train the image processing module with the image set, so that when the image processing module receives an original image, the difference between the result image it outputs and the target image of that original image is minimized;
    the image processing module being configured, after being trained by the training module, to process an image to be processed and generate a virtual HDR image corresponding to the image to be processed.
  7. The image processing system according to claim 6, wherein the image processing module comprises an exposure repair unit, a color repair unit, and an overall adjustment unit connected to both the exposure repair unit and the color repair unit.
  8. The image processing system according to claim 7, wherein the training module trains the image processing module by:
    inputting each original image and its target image into the exposure repair unit and training the exposure repair unit, so that when it receives an original image, the difference between the exposure value of the first transition image it outputs and the exposure value of the target image corresponding to that original image is minimized;
    inputting each original image and its target image into the color repair unit and training the color repair unit, so that when it receives an original image, the difference between the color of the second transition image it outputs and the color of the target image corresponding to that original image is minimized;
    inputting each first transition image output by the trained exposure repair unit, each second transition image output by the trained color repair unit, and each target image into the overall adjustment unit, and training the overall adjustment unit so that, when it receives a first transition image and a second transition image generated from the same original image, the difference between the result image it outputs and the target image corresponding to that original image is minimized.
  9. The image processing system according to claim 8, wherein the image processing module receives the image to be processed at the exposure repair unit and the color repair unit, and after processing by the exposure repair unit, the color repair unit, and the overall adjustment unit, outputs the virtual HDR image corresponding to the image to be processed from the overall adjustment unit.
  10. The image processing system according to claim 8, wherein the exposure repair unit adopts a multi-layer dilated convolution structure and its loss function is:
    [Formula image: PCTCN2019075966-appb-100004]
    where K is one pixel of the input image, P is the set of all pixels of the input image, N is the total number of pixels of the input image, net1_grey(K) is the gray channel value of the pixel corresponding to pixel K in the first transition image output by the exposure repair unit, GT_grey(K) is the gray channel value of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_grey is the exposure loss value of the first transition image relative to the target image; the gray channel value is the average of the red, green, and blue gray levels of a pixel;
    the color repair unit is a multi-layer convolution structure and its loss function is:
    [Formula image: PCTCN2019075966-appb-100005]
    where net2_rg(K) and net2_yb(K) are the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the second transition image output by the color repair unit, GT_rg(K) and GT_yb(K) are the red-green and yellow-blue channel values of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_color is the color loss value of the second transition image relative to the target image; the red-green channel value is the difference between the red and green gray levels of a pixel, the yellow-blue channel value is the difference between the yellow and blue gray levels of a pixel, and the yellow gray level is half of the sum of the red and green gray levels of that pixel;
    the overall adjustment unit is a multi-layer convolution structure and its loss function is:
    [Formula image: PCTCN2019075966-appb-100006]
    where net3(K) is the gray value of the pixel corresponding to pixel K in the result image output by the overall adjustment unit, GT(K) is the gray value of the pixel corresponding to pixel K in the target image corresponding to the input image, and Loss_end is the image loss value of the result image relative to the target image; the gray value of a pixel is the sum of its red, green, and blue gray levels.
PCT/CN2019/075966 2018-12-14 2019-02-22 Image processing method and image processing system WO2020118902A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811535783.8A 2018-12-14 2018-12-14 Image processing method and image processing system
CN201811535783.8 2018-12-14

Publications (1)

Publication Number Publication Date
WO2020118902A1 true WO2020118902A1 (zh) 2020-06-18

Family

ID=66010063

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/075966 WO2020118902A1 (zh) 2018-12-14 2019-02-22 Image processing method and image processing system

Country Status (2)

Country Link
CN (1) CN109618094A (zh)
WO (1) WO2020118902A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096037A (zh) * 2021-03-31 2021-07-09 北京交通大学 一种基于深度学习的轮对光条图像的修复方法
CN117853365A (zh) * 2024-03-04 2024-04-09 济宁职业技术学院 基于计算机图像处理的艺术成果展示方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111951171A (zh) * 2019-05-16 2020-11-17 武汉Tcl集团工业研究院有限公司 HDR image generation method and device, readable storage medium, and terminal equipment
CN110298810A (zh) * 2019-07-24 2019-10-01 深圳市华星光电技术有限公司 Image processing method and image processing system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106952239A (zh) * 2017-03-28 2017-07-14 厦门幻世网络科技有限公司 Image generation method and device
WO2017215767A1 (en) * 2016-06-17 2017-12-21 Huawei Technologies Co., Ltd. Exposure-related intensity transformation
CN108416744A (zh) * 2018-01-30 2018-08-17 百度在线网络技术(北京)有限公司 Image processing method, apparatus and device, and computer-readable storage medium
CN108492271A (zh) * 2018-03-26 2018-09-04 中国电子科技集团公司第三十八研究所 Automatic image enhancement system and method fusing multi-scale information
CN108513672A (zh) * 2017-07-27 2018-09-07 深圳市大疆创新科技有限公司 Method and device for enhancing image contrast, and storage medium
CN108805836A (zh) * 2018-05-31 2018-11-13 大连理工大学 Image correction method based on deep reciprocating HDR transformation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9607366B1 (en) * 2014-12-19 2017-03-28 Amazon Technologies, Inc. Contextual HDR determination
CN110475072B (zh) * 2017-11-13 2021-03-09 Oppo广东移动通信有限公司 Method, device, terminal, and storage medium for capturing images
CN108681991A (zh) * 2018-04-04 2018-10-19 上海交通大学 High dynamic range inverse tone mapping method and system based on generative adversarial networks

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017215767A1 (en) * 2016-06-17 2017-12-21 Huawei Technologies Co., Ltd. Exposure-related intensity transformation
CN106952239A (zh) * 2017-03-28 2017-07-14 厦门幻世网络科技有限公司 Image generation method and device
CN108513672A (zh) * 2017-07-27 2018-09-07 深圳市大疆创新科技有限公司 Method and device for enhancing image contrast, and storage medium
CN108416744A (zh) * 2018-01-30 2018-08-17 百度在线网络技术(北京)有限公司 Image processing method, apparatus and device, and computer-readable storage medium
CN108492271A (zh) * 2018-03-26 2018-09-04 中国电子科技集团公司第三十八研究所 Automatic image enhancement system and method fusing multi-scale information
CN108805836A (zh) * 2018-05-31 2018-11-13 大连理工大学 Image correction method based on deep reciprocating HDR transformation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
EILERTSEN, G.: "HDR image reconstruction from a single exposure using deep CNNs", ACM TRANSACTIONS ON GRAPHICS, vol. 36, no. 6, 20 October 2017 (2017-10-20), XP081296424 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096037A (zh) * 2021-03-31 2021-07-09 北京交通大学 Deep-learning-based method for repairing wheelset light-strip images
CN113096037B (zh) * 2021-03-31 2023-08-22 北京交通大学 Deep-learning-based method for repairing wheelset light-strip images
CN117853365A (zh) * 2024-03-04 2024-04-09 济宁职业技术学院 Art achievement display method based on computer image processing
CN117853365B (zh) * 2024-03-04 2024-05-17 济宁职业技术学院 Art achievement display method based on computer image processing

Also Published As

Publication number Publication date
CN109618094A (zh) 2019-04-12

Similar Documents

Publication Publication Date Title
WO2020118902A1 (zh) Image processing method and image processing system
WO2020082593A1 (zh) Method and device for enhancing image contrast
CN110675328B (zh) Low-illumination image enhancement method and device based on conditional generative adversarial network
US20210233210A1 Method and system of real-time super-resolution image processing
CN108022223B (zh) Tone mapping method based on block-wise processing and fusion of logarithmic mapping functions
US9532023B2 Color reproduction of display camera system
WO2017049703A1 (zh) Image contrast enhancement method
CN104301636B (zh) Low-complexity, efficient method for synthesizing high-dynamic-range digital images
Zamir et al. Learning digital camera pipeline for extreme low-light imaging
CN110211056A (zh) Adaptive infrared image de-striping algorithm based on local median histogram
CN111429433A (zh) Multi-exposure image fusion method based on attention generative adversarial network
CN115223004A (zh) Image enhancement method based on improved multi-scale fusion generative adversarial network
WO2017185957A1 (zh) Image processing method, image processing device, and display device
CN110111269A (zh) Low-illumination imaging algorithm and device based on multi-scale context aggregation network
JP2016197853A (ja) Evaluation device, evaluation method, and camera system
CN106023108A (zh) Image dehazing algorithm based on boundary constraints and contextual regularization
CN114998141A (zh) High-dynamic-range imaging method for space environments based on a multi-branch network
Lv et al. Low-light image enhancement via deep Retinex decomposition and bilateral learning
CN107169942B (zh) Underwater image enhancement method based on the fish retina mechanism
WO2020107646A1 (zh) Image processing method
TWI604413B (zh) Image processing method and image processing device
US20230146016A1 Method and apparatus for extreme-light image enhancement
CN116563157A (zh) Deep-learning-based low-illumination image enhancement method for space satellites
CN110415187B (zh) Image processing method and image processing system
CN107451967B (zh) Single-image dehazing method based on deep learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19894905

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19894905

Country of ref document: EP

Kind code of ref document: A1