WO2020215180A1 - Image processing method, device, and electronic device

Image processing method, device, and electronic device

Info

Publication number
WO2020215180A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
color
processing
type
parameters
Prior art date
Application number
PCT/CN2019/083693
Other languages
English (en)
French (fr)
Inventor
Li Meng (李蒙)
Hu Hui (胡慧)
Chen Hai (陈海)
Zheng Chenglin (郑成林)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to PCT/CN2019/083693 priority Critical patent/WO2020215180A1/zh
Priority to CN201980079484.4A priority patent/CN113168673A/zh
Publication of WO2020215180A1 publication Critical patent/WO2020215180A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control

Definitions

  • This application relates to the field of image processing, and more specifically, to image processing methods, devices, and electronic equipment.
  • ISP: image signal processing
  • the ISP processing flow is shown in Figure 1.
  • light from the natural scene 101 passes through a lens 102 to obtain a Bayer image, which is converted by photoelectric conversion 104 into an analog electrical signal 105; after denoising and analog-to-digital processing 106, a digital image signal (that is, the raw image) 107 is obtained, which then enters the digital signal processing chip 100.
  • the steps in the digital signal processing chip 100 are the core steps of ISP processing.
  • the digital signal processing chip 100 generally includes modules such as black level compensation (BLC) 108, lens shading correction 109, bad pixel correction (BPC) 110, demosaic 111, Bayer domain noise reduction (denoise) 112, auto white balance (AWB) 113, Ygamma 114, auto exposure (AE) 115, auto focus (AF) (not shown in Figure 1), color correction (CC) 116, gamma correction 117, color gamut conversion 118, color denoising/detail enhancement 119, color enhancement (CE) 120, formatter 121, and input/output (I/O) control 122.
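  • the serial structure described above can be sketched in Python (an illustration only, not part of the patent; the stage implementations and constants below are arbitrary placeholders). The point is that each stage consumes the previous stage's output, so any error propagates downstream:

```python
import numpy as np

# Placeholder stage implementations with made-up constants (illustrative only):
def blc(img):
    return img - 16.0                          # black level compensation: assumed offset

def lsc(img):
    return img * 1.02                          # lens shading correction: uniform placeholder gain

def awb(img):
    return img * np.array([1.8, 1.0, 1.5])     # auto white balance: assumed per-channel gains

def cc(img):
    return img @ np.eye(3)                     # color correction: identity 3x3 matrix placeholder

def isp_pipeline(raw):
    out = raw
    for stage in (blc, lsc, awb, cc):          # strictly serial: output feeds next input
        out = stage(out)
    return out
```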
  • BLC: black level compensation
  • BPC: bad pixel correction
  • demosaic: demosaicing
  • denoise: Bayer domain noise reduction
  • AWB: auto white balance
  • AE: auto exposure
  • AF: auto focus
  • CE: color enhancement
  • the color-related modules in ISP processing mainly include AWB 113, CC 116, CE 120, and other modules.
  • AWB and CC modules are global color processing modules
  • CE is a local color processing module.
  • the color-related modules in the ISP processing can be realized by a neural network model; however, because ISP processing is serial, that is, the output of the previous module is used as the input of the next module, there is a problem of error accumulation.
  • the present application provides an image processing method and device, which can avoid the error accumulation problem in the serial ISP image color processing flow and improve the image color processing effect.
  • the present application provides an image processing method, which includes: acquiring an image to be processed; processing the image to be processed through the first branch of a pre-trained neural network model to obtain a first type of parameters, where the first type of parameters is used to perform global color processing on the image; processing the image to be processed through the second branch of the neural network model to obtain a second type of parameters, where the second type of parameters is used to perform local color processing on the image; and performing color processing on the image to be processed according to the first type of parameters and the second type of parameters to obtain a color-processed image.
  • the input of the neural network model is the image to be processed, and different branches output different types of parameters, which can avoid the problem of error accumulation in the parameter calculation process and obtain more accurate parameters. Further, processing the image to be processed according to the obtained parameters can improve the color correction effect of the image.
  • the image to be processed is a raw image.
  • the input of the neural network model is a raw image, which preserves the image information to the greatest extent, so that the first type of parameters and the second type of parameters obtained are more accurate, and the image color correction effect is also better.
  • the global color processing includes automatic white balance and/or color correction
  • the local color processing includes color rendering and/or color enhancement
  • the first type of parameters and the second type of parameters can correspond to the traditional ISP modules, so that when the image color correction effect is not ideal, the first type of parameters and the second type of parameters can be adjusted based on traditional ISP debugging experience, which solves the inherent problem that the subjective effect of a neural network cannot be adjusted.
  • the first branch and the second branch share a shared parameter layer of the neural network model.
  • the layers through which the first branch obtains the intermediate feature layer data and through which the second branch obtains the intermediate feature layer data can share structural parameters.
  • for example, the image to be processed is processed through the shared parameter layer of the pre-trained neural network model to obtain the intermediate feature layer data; the intermediate feature layer data is then processed through the part of the first branch other than the shared parameter layer to obtain the first type of parameters, and through the part of the second branch other than the shared parameter layer to obtain the second type of parameters.
  • in the above technical solution, the second branch of the neural network model can directly use the shared parameter layer to obtain the intermediate feature layer data; that is, the first branch and the second branch can reuse part of the calculation process, which reduces computational complexity and the occupation of storage space.
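  • the shared-layer structure can be sketched with numpy (my illustration; the layer contents, shapes, and head computations are stand-ins, not the patent's actual network):

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_layer(x):
    # Stand-in for the shared convolution/pooling layers that produce the
    # "intermediate feature layer data"; computed once, reused by both branches.
    return x * 0.5

def global_head(feat):
    # First branch: reduce spatially and emit a global 3x3 color matrix.
    pooled = feat.mean(axis=(0, 1))            # global pooling -> (3,)
    return np.diag(pooled)                     # toy "first type" parameter matrix

def local_head(feat):
    # Second branch: one coefficient per pixel and channel ("second type").
    return np.tanh(feat)

x = rng.random((8, 8, 3))                      # stand-in for the image to be processed
feat = shared_layer(x)                         # intermediate feature layer data, computed once
M = global_head(feat)                          # (3, 3) global parameters
beta = local_head(feat)                        # (8, 8, 3) local coefficients, same size as input
```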
  • the first type of parameters is in matrix form; the performing color processing on the image to be processed according to the first type and second type of parameters includes: performing matrix multiplication on the image to be processed and the first type of parameters to obtain a first image; and performing local color processing on the first image according to the second type of parameters to obtain a second image.
  • the performing local color processing on the first image according to the second type of parameters includes: calculating the difference between the value of the color channel of the first image and the value of the brightness channel; adjusting the difference according to the second type of parameter; and adding the adjusted difference to the value of the brightness channel of the first image.
  • the image format of the first image is a color RGB format
  • the second type of parameters includes the color processing coefficient beta1 of the color channel R, the color processing coefficient beta2 of the color channel G, and the color processing coefficient beta3 of the color channel B; the performing local color processing on the first image according to the second type of parameters includes performing local color adjustment on the first image according to the formula
    R''' = Y'' + beta1 * (R'' - Y'')
    G''' = Y'' + beta2 * (G'' - Y'')
    B''' = Y'' + beta3 * (B'' - Y'')
    where Y'' is the value of the brightness channel of the first image; R''', G''', and B''' are respectively the value of the color channel R, the value of the color channel G, and the value of the color channel B of the second image; and R'', G'', and B'' are respectively the value of the color channel R, the value of the color channel G, and the value of the color channel B of the first image.
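  • the adjustment can be sketched as follows (illustrative only; the Rec. 601 luminance weights used for the brightness channel are an assumption, since the text does not fix a particular formula for the brightness value):

```python
import numpy as np

def local_color_adjust(img, beta):
    # img: HxWx3 RGB image after global correction; beta: HxWx3 coefficients.
    # Brightness channel Y'' (Rec. 601 weights, assumed for illustration):
    y = (img @ np.array([0.299, 0.587, 0.114]))[..., None]
    # C''' = Y'' + beta_c * (C'' - Y'') for each color channel C:
    return y + beta * (img - y)
```

beta = 1 leaves the image unchanged and beta = 0 collapses it to grayscale, illustrating how the coefficients scale local color saturation around the brightness channel.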
  • before the performing color processing on the image to be processed according to the first type of parameters and the second type of parameters, the method further includes: performing demosaicing and denoising processing on the image to be processed.
  • the present application provides an image processing device, which includes a module for executing the first aspect or any one of the implementation manners of the first aspect.
  • the present application provides an image processing device, including a memory and a processor, configured to execute the method described in the first aspect or any one of the implementation manners of the first aspect.
  • the present application provides a chip, which is connected to a memory and is used to read and execute a software program stored in the memory to implement the method described in the first aspect or any one of the implementation manners of the first aspect.
  • the present application provides an electronic device including a processor and a memory, configured to execute the method described in the first aspect or any one of the implementation manners of the first aspect.
  • the present application provides a computer-readable storage medium, including instructions, which when run on an electronic device, cause the electronic device to execute the method described in the first aspect or any one of the implementation manners of the first aspect.
  • the present application provides a computer program product that, when running on an electronic device, causes the electronic device to execute the method described in the first aspect or any one of the implementation manners of the first aspect.
  • FIG. 1 is a schematic flowchart of ISP processing.
  • Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • Fig. 3 is a specific example of the image processing method of the embodiment of the present application.
  • Fig. 4 is a network framework of a neural network model of an embodiment of the present application.
  • Fig. 5 is a network framework of a neural network model according to another embodiment of the present application.
  • Fig. 6 is a network framework of a neural network model according to another embodiment of the present application.
  • Fig. 7 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an image processing device provided by another embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the technical solution of the present application can be applied to any scene that requires color processing of images, such as safe city, remote driving, human-computer interaction, and other scenes that require photographing, video recording, or image display.
  • Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • the method 200 can be executed by any device with image processing functions. As shown in FIG. 2, the method 200 may include at least part of the following content.
  • an image to be processed is acquired.
  • the image to be processed is processed by the first branch of the pre-trained neural network model to obtain a first type of parameter, and the first type of parameter is used to perform global color processing on the image.
  • the image to be processed is processed through the second branch of the neural network model to obtain a second type of parameter, and the second type of parameter is used to perform local color processing on the image.
  • color processing is performed on the image to be processed according to the first type parameters and the second type parameters to obtain a color processed image.
  • the inputs of the different branches of the neural network model are all the image to be processed, and with the same input, different types of parameters are output; that is, the first branch and the second branch of the neural network model have the same input, and their processing of the image to be processed is parallel. Compared with serial processing, this avoids the problem of error accumulation in the parameter calculation process and yields more accurate parameters. Further, processing the image to be processed according to the obtained parameters can improve the color correction effect of the image. Moreover, the output of the neural network model is the color processing parameters.
  • the above parameters can correspond to the parameters of the traditional ISP processing modules. When the image color processing effect is not ideal, the parameters can be fine-tuned based on traditional ISP parameter debugging experience to further improve the image color correction effect, which also addresses the much-criticized problem that neural network parameters cannot be debugged.
  • the image to be processed in the embodiment of the present application may be obtained image data, image signal, etc.
  • the image to be processed may be acquired through image acquisition equipment (for example, a lens and a sensor), or may be received from other equipment, which is not specifically limited in the embodiment of the present application.
  • the image to be processed may be a raw image. A raw image is the original data obtained when a complementary metal oxide semiconductor (CMOS) or charge coupled device (CCD) image sensor converts the captured light signal into a digital signal; it is lossless and therefore contains the original information of the scene.
  • the input of the neural network model is a raw image, which preserves the image information to the greatest extent.
  • the first type of parameters and the second type of parameters obtained in this way can better reflect the information of the real scene, and the image color correction effect is also better.
  • the image to be processed may also be an image after other image processing processes besides color processing. Other image processing procedures include any one or a combination of black level correction, lens shading correction, dead pixel correction, demosaicing, Bayer domain noise reduction, auto exposure, auto focus, etc.
  • the pre-trained neural network model of the embodiment of the application can be stored on an image processing device, for example, a mobile phone terminal, a tablet computer, a notebook computer, an augmented reality (AR)/virtual reality (VR) device, a vehicle-mounted terminal, etc.
  • the pre-trained neural network model of the embodiment of the present application can also be stored in a server or cloud.
  • the pre-trained neural network model can be a corresponding target model/rule generated based on different training data for different goals (or different tasks), and the corresponding target model/rule can be used to achieve the above goals or complete the above tasks, providing the user with the desired result.
  • the first type of parameters and the second type of parameters for image color processing may be output.
  • the first branch of the neural network model corresponds to generating the first type of parameters (that is, the parameters used to perform global color processing on the image)
  • the second branch of the neural network model corresponds to generating the second type of parameters (that is, the parameters used to perform local color processing on the image).
  • the first branch and the second branch can be two independent neural network models.
  • the neural network model of the embodiment of the present application may be a set consisting of a first neural network model corresponding to the first branch and a second neural network model corresponding to the second branch, where the inputs of both neural network models are the image to be processed; alternatively, the first branch and the second branch may be two parts of the same neural network model, which is not specifically limited in the embodiment of the present application.
  • the first branch and the second branch may multiplex or share part of the processing procedure.
  • the first branch and the second branch share a shared parameter layer of the neural network model.
  • the layers through which the first branch obtains the intermediate feature layer data and through which the second branch obtains the intermediate feature layer data can share structural parameters.
  • for example, the image to be processed is processed through the shared parameter layer to obtain the intermediate feature layer data; the intermediate feature layer data is then processed through the part of the first branch other than the shared parameter layer to obtain the first type of parameters, and through the part of the second branch other than the shared parameter layer to obtain the second type of parameters.
  • the second branch of the neural network model can directly use the shared parameter layer to obtain the intermediate feature layer data; that is, the first branch and the second branch can reuse part of the calculation process, which reduces computational complexity and the occupation of storage space.
  • the first type of parameters is used to perform global color processing on the image, where the global color processing includes at least one global color processing.
  • the first type of parameters are used to perform automatic white balance processing and/or color correction processing on the image.
  • Global color processing is to process the entire image.
  • the first type of parameters may include M parameters, the M parameters corresponding to N types of global color processing, and both M and N are integers greater than or equal to 1.
  • the relationship between the M parameters and the N types of global color processing may be one-to-one, one-to-many, or many-to-one, which is not specifically limited in the embodiment of the present application.
  • the first type of parameters may include a first parameter corresponding to the automatic white balance processing and/or a second parameter corresponding to the color correction processing; the first type of parameters may also include a first parameter corresponding to the automatic white balance processing and a third parameter corresponding to the color correction processing; or the first type of parameters may include a fourth parameter and a fifth parameter corresponding to the automatic white balance processing, and a sixth parameter corresponding to the color correction processing.
  • the first type of parameters may be in matrix form. Taking the first type of parameters for performing automatic white balance processing and color correction processing on an image as an example, the first type of parameters includes a matrix used for automatic white balance processing and a matrix used for color correction processing.
  • when the first type of parameters is in the form of a matrix, color processing performed on the image to be processed according to the first type of parameters may be a matrix multiplication of the first type of parameters and the image to be processed.
  • the automatic white balance processing of the image to be processed can be performed according to the following formula:
    R' = a * R, G' = b * G, B' = c * B
  • a, b, and c can be determined by the neural network model.
  • R', G', and B' are respectively the value of the color channel R, the value of the color channel G, and the value of the color channel B of the image after automatic white balance processing.
  • R, G, and B are respectively the value of the color channel R, the value of the color channel G, and the value of the color channel B of the image before automatic white balance processing.
  • in general, R, G, and B represent the value of the color channel R, the value of the color channel G, and the value of the color channel B of the image before the corresponding color processing; this will not be repeated below.
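  • applying such diagonal gains can be illustrated in one line (the gain values in any real use would come from the neural network model; the function below is only a sketch):

```python
import numpy as np

def apply_awb(img, a, b, c):
    # Per-channel white-balance gains: R' = a*R, G' = b*G, B' = c*B.
    # img is an HxWx3 RGB array; broadcasting applies one gain per channel.
    return img * np.array([a, b, c])
```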
  • the color correction processing of the image to be processed can be performed according to the following formula, where M is a 3x3 color correction matrix:
    (R', G', B')^T = M * (R, G, B)^T
  • R', G', and B' are the values of the color channel R, the color channel G, and the color channel B of the image after color correction processing.
  • R, G, and B are the corresponding values before color correction processing.
  • the embodiments of the present application may also use the following quadratic terms, cubic terms, and square root terms to perform color correction processing:
  • (R, G, B, R^2, G^2, B^2, RG, GB, RB)^T
  • R, G, and B are the value of the color channel R, the value of the color channel G, and the value of the color channel B of the image before the color correction processing, and T represents the transposition.
  • the matrix used for color correction will then also have a different size. Take the color correction matrix with quadratic terms as an example.
  • the color correction processing of the image to be processed can be performed according to the following formula, where the color correction matrix M can be a 3*10 matrix:
    (R', G', B')^T = M * v
  • where v is the expanded term vector given above.
  • R', G', and B' are the value of the color channel R, the value of the color channel G, and the value of the color channel B of the image after the color correction processing.
  • R, G, and B are respectively the value of the color channel R, the value of the color channel G, and the value of the color channel B of the image before the color correction processing.
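  • this expanded color correction can be sketched as follows (illustrative; the appended constant 1 is my assumption, made only to reconcile the nine listed terms with the stated 3*10 matrix size):

```python
import numpy as np

def poly_expand(rgb):
    # Quadratic expansion of one RGB pixel per the term vector above,
    # plus an assumed constant term so a 3x10 matrix can be applied.
    r, g, b = rgb
    return np.array([r, g, b, r*r, g*g, b*b, r*g, g*b, r*b, 1.0])

def color_correct(rgb, M):
    # M: 3x10 color correction matrix; returns the corrected (R', G', B').
    return M @ poly_expand(rgb)
```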
  • the at least one global color process described above may also only correspond to one matrix, which is not specifically limited in the embodiment of the present application.
  • the second type of parameters is used to perform local color processing on the image, where the local color processing includes at least one type of local color processing. Local color processing processes a part or parts of an image.
  • the second type of parameters are used to perform color enhancement or color rendering processing on parts of the image.
  • the relationship between the second type of parameters and the at least one partial color processing may be one-to-one correspondence, one-to-many or many-to-one, which is not specifically limited in the embodiment of the present application.
  • the second type of parameters may be filter function parameters, color adjustment coefficients, and so on.
  • the embodiment of the present application does not specifically limit the sequence of performing global color processing and local color processing.
  • taking the case where the first type of parameters is in matrix form as an example, the image to be processed is matrix-multiplied by the first type of parameters to obtain a first image; the first image is then subjected to local color processing according to the second type of parameters to obtain a second image.
  • for example, the second type of parameter can be multiplied by the difference, the second type of parameter and the difference can be input into a pre-configured function, or the second type of parameter can be added to the difference, and so on.
  • the embodiment of the present application does not specifically limit the image format of the first image.
  • it may be a color RGB format, a YUV format, and the like.
  • the second type of parameters may include color processing coefficients of color channel R, color processing coefficients of color channel G, and color processing coefficients of color channel B.
  • the first image can be locally color-adjusted according to the following formula:
    R''' = Y'' + beta1 * (R'' - Y'')
    G''' = Y'' + beta2 * (G'' - Y'')
    B''' = Y'' + beta3 * (B'' - Y'')
  • Y'' is the value of the brightness channel of the first image.
  • R''', G''', and B''' are the value of the color channel R, the value of the color channel G, and the value of the color channel B of the second image.
  • R'', G'', and B'' are the value of the color channel R, the value of the color channel G, and the value of the color channel B of the first image.
  • beta1, beta2, and beta3 are respectively the color processing coefficient of the color channel R, the color processing coefficient of the color channel G, and the color processing coefficient of the color channel B.
  • some basic processing such as denoising and demosaicing may be performed on the image to be processed.
  • Fig. 3 is a specific example of the image processing method of the embodiment of the present application. It should be understood that FIG. 3 is only exemplary, which is only used to help those skilled in the art understand the embodiments of the present application, rather than limiting the embodiments of the present application to the specific scenarios illustrated.
  • the raw image is processed by demosaicing and denoising to obtain a linear RGB image.
  • the raw image enters the pre-trained neural network model, and the neural network model is processed to obtain a global color correction matrix M and a local color processing coefficient beta, where beta can be divided into 3 channels R, G, and B, and the size is the same as the original image.
  • the linear RGB image is processed by the global color matrix M to obtain the R"G"B" image.
  • the R"G"B" image is the result of global color correction (for example, automatic white balance and/or color correction); after local color processing (for example, color rendering and/or color enhancement), the R"'G"'B"' image is obtained.
  • the above image color processing method attends to global color processing and local color processing at the same time, and uses the same neural network model to complete all color processing from the sensor's raw image to the final image. Since the adjustment parameters used in each color processing step are obtained from the raw image, the problem of error accumulation can be avoided; moreover, because the raw image retains more image information, the obtained adjustment parameters are more accurate.
  • Figures 4 to 6 show the three structural forms of the neural network model of the embodiments of the present application. It should be understood that Figures 4 to 6 are only exemplary, and are only used to help those skilled in the art understand the embodiments of the present application. It is not intended to limit the embodiments of the present application to the specific scenarios illustrated.
  • the neural network model of the embodiment of the present application may also take other structural forms, as long as it can implement the method of the embodiment of the present application.
  • the neural network model 400 in Figure 4 includes processing such as max pooling 401, convolution 402, deconvolution 403, connection 404, global pooling 405, full connection 406, and reshaping 407.
  • the role of convolution 402 in image processing is equivalent to a filter that extracts specific information from the input image matrix.
  • the convolution layer includes multiple convolution operators.
  • the convolution operator is also called the kernel.
  • the convolution operator can essentially be a weight matrix, which is usually predefined. During the convolution operation on the image, the weight matrix usually slides across the input image in the horizontal direction one pixel at a time (or two pixels at a time, three pixels at a time, etc.; the number of pixels stepped over depends on the stride value) to complete the work of extracting specific features from the image.
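  • the sliding-window operation can be sketched naively as follows (illustrative only; like most CNN libraries, it computes cross-correlation, i.e. the kernel is not flipped):

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    # Slide the kernel over the image, stepping `stride` pixels at a time.
    kh, kw = kernel.shape
    h = (image.shape[0] - kh) // stride + 1
    w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)   # weighted sum over the window
    return out
```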
  • Deconvolution 403 is also called transposed convolution, which is the inverse process of convolution 402.
  • convolution processing is often followed by pooling processing, which can be one layer of convolution followed by one layer of pooling, or multiple layers of convolution followed by one or more layers of pooling.
  • the pooling process may include average pooling using an average pooling operator and/or maximum pooling 401 using a maximum pooling operator to sample the input image to obtain a smaller size image.
  • the average pooling operator can calculate the pixel values in the image within a specific range to generate an average value as the result of average pooling.
  • the maximum pooling operator can take the pixel with the largest value within a specific range as the result of the maximum pooling.
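  • both pooling operators can be sketched with a single helper (illustrative only; non-overlapping windows):

```python
import numpy as np

def pool2d(image, size=2, op=np.max):
    # op=np.max gives max pooling; op=np.mean gives average pooling.
    h, w = image.shape[0] // size, image.shape[1] // size
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = op(image[i*size:(i+1)*size, j*size:(j+1)*size])
    return out
```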
  • the embodiment of the present application adopts the maximum pooling 401.
  • the first branch of the neural network model 400 starts from 4 layers of image data with a size of M*N, which are processed by convolution 402 and max pooling 401 to obtain 512 layers of M/16*N/16 image data (that is, the intermediate feature layer data), and then processed by global pooling 405, full connection 406, and reshaping 407 to finally obtain the first type of parameters.
  • the second branch likewise starts from the 4 layers of M*N image data and obtains the 512 layers of M/16*N/16 image data (that is, the intermediate feature layer data), which are then processed by deconvolution 403, convolution 402, and connection 404 to finally obtain the second type of parameters.
  • the inputs of the first branch and the second branch are both 4 layers of image data with a size of M*N.
  • that is, the first branch and the second branch of the neural network model 400 can multiplex the part that starts from the 4 layers of M*N image data and, after convolution 402 and max pooling 401, obtains the 512 layers of M/16*N/16 image data (that is, the intermediate feature layer data).
  • the neural network model 500 in Figure 5 includes processing such as max pooling 501, convolution 502, tiling 503, connection 504, global pooling 505, full connection 506, and reshaping 507.
  • the first branch of the neural network model 500 starts from 4 layers of image data with a size of M*N, which are processed by convolution 502, max pooling 501, and global pooling 505 to obtain 512 layers of 1*1 image data (that is, the intermediate feature layer data), and then processed by full connection 506 and reshaping 507 to finally obtain the first type of parameters.
  • in the second branch, the 512 layers of 1*1 image data are tiled 503 to obtain 512 layers of M*N image data.
  • the second branch also performs convolution 502 on the 4-layer M*N image data without changing the image size to obtain 512-layer M*N image data.
  • Part of the 512-layer M*N image data is connected 504 to obtain 1024-layer M*N image data, which is further processed by convolution 402 to finally obtain the second type of parameters.
  • the inputs of the first branch and the second branch are both 4 layers of image data with a size of M*N.
  • the first branch and the second branch of the neural network model 500 can be multiplexed starting from 4 layers of image data with a size of M*N, and processed through convolution 502, maximum pooling 501, and global pooling 505. 512 layer 1*1 image data (that is, the middle feature layer data) part.
  • The neural network model 600 in Figure 6 includes max pooling 601, convolution 602, global pooling 605, full connection (full connect) 606, reshaping (reshape) 607, and other processing.
  • The first branch of the neural network model 600 starts from 4 layers of image data of size M*N, which are processed by convolution 602 to obtain 32 layers of M*N image data (that is, intermediate feature layer data), and further processed by convolution 602, max pooling 601, global pooling 605, full connection 606, reshaping 607, and so on to finally obtain the first type of parameters.
  • The second branch starts from 4 layers of image data of size M*N, which are processed by convolution 602 to obtain 32 layers of M*N image data (that is, intermediate feature layer data), and further processed by convolution 602 to obtain the second type of parameters.
  • In this way, the inputs of the first branch and the second branch are both 4 layers of image data of size M*N.
  • Optionally, the first branch and the second branch of the neural network model 600 can share the part that starts from the 4 layers of M*N image data and obtains, through convolution 602, the 32 layers of M*N image data (that is, the intermediate feature layer data).
  • Fig. 7 is a schematic structural diagram of an image processing device provided by an embodiment of the present application. As shown in FIG. 7, the apparatus 700 includes an acquisition module 710 and a processing module 720.
  • the acquiring module 710 is configured to acquire an image to be processed.
  • the processing module 720 is configured to process the image to be processed through the first branch of the pre-trained neural network model to obtain a first type of parameter, and the first type of parameter is used to perform global color processing on the image.
  • the processing module 720 is further configured to process the to-be-processed image through the second branch of the neural network model to obtain a second type of parameter, and the second type of parameter is used to perform local color processing on the image.
  • the processing module 720 is further configured to perform color processing on the image to be processed according to the first-type parameters and the second-type parameters to obtain a color-processed image.
  • the image to be processed is a raw image.
  • the global color processing includes automatic white balance and/or color correction
  • the local color processing includes color rendering and/or color enhancement
  • Optionally, the first branch and the second branch share a shared parameter layer of the neural network model.
  • It can be understood that, when both the first type of parameters and the second type of parameters require the intermediate feature layer data, the layers of the first branch that produce the intermediate feature layer data and the layers of the second branch that produce the intermediate feature layer data can share structural parameters. For example, the image to be processed is processed through the shared parameter layer of the pre-trained neural network model to obtain the intermediate feature layer data; the intermediate feature layer data is processed through the first branch (excluding the shared parameter layer part) of the pre-trained neural network model to obtain the first type of parameters; and the intermediate feature layer data is processed through the second branch (excluding the shared parameter layer part) of the neural network model to obtain the second type of parameters.
  • Optionally, the first type of parameters is in matrix form; the processing module 720 is specifically configured to perform matrix multiplication on the image to be processed and the first type of parameters to obtain a first image, and to perform local color processing on the first image according to the second type of parameters to obtain a second image.
  • Optionally, the processing module 720 is specifically configured to calculate the difference between the value of the color channel of the first image and the value of the brightness channel, adjust the difference according to the second type of parameters, and add the adjusted difference to the value of the brightness channel of the first image.
  • Optionally, the image format of the first image is a color RGB format, and the second type of parameters includes a color processing coefficient beta1 of color channel R, a color processing coefficient beta2 of color channel G, and a color processing coefficient beta3 of color channel B; the processing module 720 is specifically configured to perform local color adjustment on the first image according to the formula R''' = Y'' + beta1*(R'' - Y''), G''' = Y'' + beta2*(G'' - Y''), B''' = Y'' + beta3*(B'' - Y''), where Y'' is the value of the brightness channel of the first image; R''', G''', and B''' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the second image; and R'', G'', and B'' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the first image.
  • the acquisition module 710 may be implemented by a transceiver or a processor.
  • the processing module 720 may be implemented by a processor. The specific functions and beneficial effects of the obtaining module 710 and the processing module 720 can be referred to the method shown in FIG. 2, which will not be repeated here.
  • FIG. 8 is a schematic structural diagram of an image processing device provided by another embodiment of the present application. As shown in FIG. 8, the apparatus 800 may include a processor 820 and a memory 830.
  • Only one memory and one processor are shown in Figure 8. In an actual image processing apparatus product, there may be one or more processors and one or more memories.
  • The memory may also be referred to as a storage medium, a storage device, or the like.
  • The memory may be provided independently of the processor, or may be integrated with the processor, which is not limited in the embodiments of the present application.
  • the processor 820 and the memory 830 communicate with each other through internal connection paths, and transfer control and/or data signals.
  • The processor 820 is configured to acquire an image to be processed; to process the image to be processed through a first branch of a pre-trained neural network model to obtain a first type of parameters, where the first type of parameters is used to perform global color processing on an image; to process the image to be processed through a second branch of the neural network model to obtain a second type of parameters, where the second type of parameters is used to perform local color processing on an image; and to perform color processing on the image to be processed according to the first type of parameters and the second type of parameters to obtain a color-processed image.
  • the memory 830 described in the embodiments of the present application is used to store computer instructions and parameters required by the processor to run.
  • the embodiment of the present application also provides an electronic device, which may be a terminal device.
  • the device can be used to execute the functions/steps in the above method embodiments.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device 900 includes a processor 910 and a transceiver 920.
  • the electronic device 900 may further include a memory 930.
  • The processor 910, the transceiver 920, and the memory 930 can communicate with each other through internal connection paths to transfer control and/or data signals.
  • The memory 930 is configured to store a computer program, and the processor 910 is configured to call and run the computer program from the memory 930.
  • the electronic device 900 may further include an antenna 940 for transmitting the wireless signal output by the transceiver 920.
  • The above-mentioned processor 910 and memory 930 may be combined into one processing device, although more commonly they are components independent of each other.
  • The processor 910 is configured to execute the program code stored in the memory 930 to implement the above-mentioned functions.
  • In specific implementations, the memory 930 may also be integrated in the processor 910, or be independent of the processor 910.
  • The processor 910 may correspond to the processor 820 in the apparatus 800 in FIG. 8.
  • In addition, to make the functions of the electronic device 900 more complete, the electronic device 900 may also include one or more of an input unit 960, a display unit 970, an audio circuit 980, a camera 990, and a sensor 901.
  • The audio circuit may also include a speaker 982, a microphone 984, and so on.
  • The display unit 970 may include a display screen.
  • the aforementioned electronic device 900 may further include a power supply 950 for providing power to various devices or circuits in the terminal device.
  • the electronic device 900 shown in FIG. 9 can implement each process of the method embodiments shown in FIGS. 2 to 6.
  • the operations and/or functions of each module in the electronic device 900 are respectively for implementing the corresponding processes in the foregoing method embodiments.
  • the processor described in each embodiment of the present application may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the above method can be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • The processor described in each embodiment of the present application may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • The software module may be located in a storage medium mature in the art, such as a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor reads the instructions in the memory and completes the steps of the above method in combination with its hardware.
  • The sequence numbers of the processes do not imply an order of execution.
  • The execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • The division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • If the function is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of this application essentially, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

This application provides an image processing method, apparatus, and electronic device. In the technical solution of this application, an image to be processed is acquired; the image to be processed is processed through a first branch of a pre-trained neural network model to obtain a first type of parameters, the first type of parameters being used to perform global color processing on the image; the image to be processed is processed through a second branch of the neural network model to obtain a second type of parameters, the second type of parameters being used to perform local color processing on the image; and color processing is performed on the image to be processed according to the first type of parameters and the second type of parameters to obtain a color-processed image. In the above technical solution, the input of the neural network model is the image to be processed, and different branches output different types of parameters, which can avoid the problem of error accumulation during parameter calculation and yield more accurate parameters. Further, processing the image to be processed according to the obtained parameters can improve the image color correction effect.

Description

Image processing method, apparatus, and electronic device
Technical field
This application relates to the field of image processing, and more specifically, to an image processing method, apparatus, and electronic device.
Background
The main function of image signal processing (ISP) is to post-process the image signal output by a front-end image sensor. Only by relying on the ISP can images obtained under different optical conditions restore scene details well.
The ISP processing flow is shown in Figure 1. A natural scene 101 passes through a lens 102 to obtain a Bayer image, an analog electrical signal 105 is then obtained through photoelectric conversion 104, and a digital image signal (that is, a raw image) 107 is further obtained through denoising and analog-to-digital processing 106, which then enters the digital signal processing chip 100. The steps in the digital signal processing chip 100 are the core steps of ISP processing. The digital signal processing chip 100 generally includes modules such as black level compensation (BLC) 108, lens shading correction 109, bad pixel correction (BPC) 110, demosaic 111, Bayer-domain denoising 112, auto white balance (AWB) 113, Ygamma 114, auto exposure (AE) 115, auto focus (AF) (not shown in Figure 1), color correction (CC) 116, gamma correction 117, color gamut conversion 118, color denoising/detail enhancement 119, color enhancement (CE) 120, formatter 121, and input/output (I/O) control 122.
The color-related modules in ISP processing mainly include AWB 113, CC 116, and CE 120. The AWB and CC modules are global color processing modules, and CE is a local color processing module. The color-related modules in ISP processing can be implemented by a neural network model; however, because ISP processing is serial processing, that is, the output of the previous module serves as the input of the next module, there is a problem of error accumulation.
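The error accumulation in a serial pipeline can be illustrated with a toy numerical sketch. This example is not from the patent: each ISP stage is modeled as a simple gain whose estimate is slightly off, and the gain values and the 2% per-stage error are made-up assumptions.

```python
# Illustrative sketch (not from the patent): why serial ISP stages accumulate error.
# Each stage multiplies the signal by an estimated gain; a small relative
# estimation error per stage compounds through the whole chain.

def run_pipeline(signal, stage_gains, per_stage_error):
    """Apply gain stages serially; each stage's gain is off by a relative error."""
    for gain in stage_gains:
        signal = signal * gain * (1.0 + per_stage_error)
    return signal

true = run_pipeline(1.0, [1.2, 0.9, 1.1], 0.0)     # ideal result
approx = run_pipeline(1.0, [1.2, 0.9, 1.1], 0.02)  # 2% error at every stage

relative_error = abs(approx - true) / true         # about 6%, larger than any single stage's 2%
```

After three stages the compounded error is roughly 1.02^3 - 1, about 6.1%, which is why computing all parameters from the same raw input, as in the solution below, avoids this compounding.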
Summary
This application provides an image processing method and apparatus, which can avoid the problem of error accumulation in the serial ISP image color processing flow and improve the image color processing effect.
According to a first aspect, this application provides an image processing method, including: acquiring an image to be processed; processing the image to be processed through a first branch of a pre-trained neural network model to obtain a first type of parameters, where the first type of parameters is used to perform global color processing on an image; processing the image to be processed through a second branch of the neural network model to obtain a second type of parameters, where the second type of parameters is used to perform local color processing on an image; and performing color processing on the image to be processed according to the first type of parameters and the second type of parameters to obtain a color-processed image.
In the above technical solution, the input of the neural network model is the image to be processed, and different branches output different types of parameters, which can avoid the problem of error accumulation during parameter calculation and yield more accurate parameters. Further, processing the image to be processed according to the obtained parameters can improve the image color correction effect.
In a possible implementation, the image to be processed is a raw image.
In the above technical solution, the input of the neural network model is a raw image, which preserves the information of the image to the greatest extent, so the obtained first type of parameters and second type of parameters are more accurate, and the image color correction effect is accordingly better.
In a possible implementation, the global color processing includes automatic white balance and/or color correction, and the local color processing includes color rendering and/or color enhancement.
In the above technical solution, the first type of parameters and the second type of parameters can correspond to traditional ISP modules, so that when the image color correction effect is unsatisfactory, the first type of parameters and the second type of parameters can be adjusted based on traditional ISP tuning experience, which solves the inherent problem that the subjective effect of a neural network cannot be adjusted.
In a possible implementation, the first branch and the second branch share a shared parameter layer of the neural network model.
Understandably, when both the first type of parameters and the second type of parameters require intermediate feature layer data, the layers of the first branch that produce the intermediate feature layer data and the layers of the second branch that produce the intermediate feature layer data can share structural parameters. For example, the image to be processed is processed through the shared parameter layer of the pre-trained neural network model to obtain the intermediate feature layer data; the intermediate feature layer data is processed through the first branch (the part other than the shared parameter layer) of the pre-trained neural network model to obtain the first type of parameters; and the intermediate feature layer data is processed through the second branch (the part other than the shared parameter layer) of the neural network model to obtain the second type of parameters.
In the above technical solution, the second branch of the neural network model can directly use the shared parameter layer to obtain the intermediate feature layer data; that is, the first branch and the second branch can reuse part of the calculation process, which can reduce the computational complexity and the storage space occupied.
In a possible implementation, the first type of parameters is in matrix form, and performing color processing on the image to be processed according to the first type of parameters and the second type of parameters includes: performing matrix multiplication on the image to be processed and the first type of parameters to obtain a first image; and performing local color processing on the first image according to the second type of parameters to obtain a second image.
In a possible implementation, performing local color processing on the first image according to the second type of parameters includes: calculating the difference between the value of a color channel of the first image and the value of the brightness channel; adjusting the difference according to the second type of parameters; and adding the adjusted difference to the value of the brightness channel of the first image.
In a possible implementation, the image format of the first image is a color RGB format, and the second type of parameters includes a color processing coefficient beta1 of color channel R, a color processing coefficient beta2 of color channel G, and a color processing coefficient beta3 of color channel B; performing local color processing on the first image according to the second type of parameters includes performing local color adjustment on the first image according to the formula
R''' = Y'' + beta1*(R'' - Y'')
G''' = Y'' + beta2*(G'' - Y'')
B''' = Y'' + beta3*(B'' - Y'')
where Y'' is the value of the brightness channel of the first image; R''', G''', and B''' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the second image; and R'', G'', and B'' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the first image.
In a possible implementation, before the color processing is performed on the image to be processed according to the first type of parameters and the second type of parameters, the method further includes: performing demosaic processing and denoising processing on the image to be processed.
According to a second aspect, this application provides an image processing apparatus, including modules configured to execute the first aspect or any implementation of the first aspect.
According to a third aspect, this application provides an image processing apparatus, including a memory and a processor, configured to execute the method described in the first aspect or any implementation of the first aspect.
According to a fourth aspect, this application provides a chip, where the chip is connected to a memory and is configured to read and execute a software program stored in the memory to implement the method described in the first aspect or any implementation of the first aspect.
According to a fifth aspect, this application provides an electronic device, including a processor and a memory, configured to execute the method described in the first aspect or any implementation of the first aspect.
According to a sixth aspect, this application provides a computer-readable storage medium including instructions that, when run on an electronic device, cause the electronic device to execute the method described in the first aspect or any implementation of the first aspect.
According to a seventh aspect, this application provides a computer program product that, when run on an electronic device, causes the electronic device to execute the method described in the first aspect or any implementation of the first aspect.
Brief description of the drawings
Figure 1 is a schematic flowchart of ISP processing.
Figure 2 is a schematic flowchart of an image processing method according to an embodiment of this application.
Figure 3 is a specific example of an image processing method according to an embodiment of this application.
Figure 4 is a network framework of a neural network model according to an embodiment of this application.
Figure 5 is a network framework of a neural network model according to another embodiment of this application.
Figure 6 is a network framework of a neural network model according to another embodiment of this application.
Figure 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of this application.
Figure 8 is a schematic structural diagram of an image processing apparatus according to another embodiment of this application.
Figure 9 is a schematic structural diagram of an electronic device according to an embodiment of this application.
Detailed description of embodiments
The technical solutions in this application are described below with reference to the accompanying drawings.
The technical solutions of this application can be applied to any scenario that requires color processing of images, such as safe city, remote driving, human-computer interaction, and other scenarios that require photographing, video recording, or image display.
It should be understood that the terms "system" and "network" are often used interchangeably herein. The term "and/or" in this application merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate the following three cases: A exists alone, both A and B exist, and B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
Figure 2 is a schematic flowchart of an image processing method according to an embodiment of this application. The method 200 may be executed by any apparatus having an image processing function. As shown in Figure 2, the method 200 may include at least part of the following content.
In 210, an image to be processed is acquired.
In 220, the image to be processed is processed through a first branch of a pre-trained neural network model to obtain a first type of parameters, where the first type of parameters is used to perform global color processing on an image.
In 230, the image to be processed is processed through a second branch of the neural network model to obtain a second type of parameters, where the second type of parameters is used to perform local color processing on an image.
In 240, color processing is performed on the image to be processed according to the first type of parameters and the second type of parameters to obtain a color-processed image.
In the method 200, the inputs of the different branches of the neural network model are all the image to be processed, and different types of parameters are output for the same input; that is, the first branch and the second branch of the neural network model have the same input, so the first branch and the second branch process the image to be processed in parallel. Compared with a serial process, this can avoid the problem of error accumulation during parameter calculation and yield more accurate parameters. Further, processing the image to be processed according to the obtained parameters can improve the image color correction effect. Moreover, the output of the neural network model is color processing parameters, and these parameters can correspond to the parameters of traditional ISP processing modules; when the image color processing effect is unsatisfactory, the parameters can be fine-tuned based on traditional ISP parameter tuning experience to further improve the image color correction effect, which solves the most criticized problem of neural networks, namely that their parameters cannot be tuned.
The image to be processed in the embodiments of this application may be acquired image data, an image signal, or the like. In practical applications, the image to be processed may be acquired through an image acquisition device (for example, a lens and a sensor), or may be received from another device, which is not specifically limited in the embodiments of this application.
The image to be processed may be a raw image. Since a raw image is the raw data obtained by a complementary metal oxide semiconductor (CMOS) or charge coupled device (CCD) image sensor converting a captured light signal into a digital signal, it is lossless and therefore contains the original information of the object. When the input of the neural network model is a raw image, the information of the image is preserved to the greatest extent, so the obtained first type of parameters and second type of parameters can reflect the information of the real scene, and the image color correction effect is accordingly better. Of course, the image to be processed may also be an image that has undergone image processing other than color processing. Such other image processing includes any one or any combination of black level compensation, lens shading correction, bad pixel correction, demosaicing, Bayer-domain denoising, automatic exposure, automatic focus, and the like.
The pre-trained neural network model of the embodiments of this application may be stored on an image processing apparatus, for example, on a mobile phone terminal, a tablet computer, a notebook computer, an augmented reality (AR)/virtual reality (VR) device, or an in-vehicle terminal. The pre-trained neural network model of the embodiments of this application may also be stored on a server or in the cloud. The pre-trained neural network model may be a corresponding target model/rule generated for different targets (or different tasks) based on different training data; the corresponding target model/rule can be used to achieve the above target or complete the above task, thereby providing the user with a desired result. For example, in the embodiments of this application, it may output the first type of parameters and the second type of parameters used for image color processing.
The different branches of the neural network model can be understood as different processing procedures of the neural network model. In the embodiments of this application, the first branch of the neural network model corresponds to generating the first type of parameters (that is, the parameters used to perform global color processing on the image), and the second branch corresponds to generating the second type of parameters (that is, the parameters used to perform local color processing on the image).
The first branch and the second branch may be two independent neural network models. That is, the neural network model of the embodiments of this application is a set composed of a first neural network model corresponding to the first branch and a second neural network model corresponding to the second branch, where the inputs of the first neural network model and the second neural network model are both the image to be processed. The first branch and the second branch may also be two parts of the same neural network model, which is not specifically limited in the embodiments of this application.
Optionally, the first branch and the second branch may reuse or share part of the processing procedure. For example, the first branch and the second branch share a shared parameter layer of the neural network model. Understandably, when both the first type of parameters and the second type of parameters require intermediate feature layer data, the layers of the first branch that produce the intermediate feature layer data and the layers of the second branch that produce the intermediate feature layer data can share structural parameters. For example, the image to be processed is processed through the shared parameter layer of the pre-trained neural network model to obtain the intermediate feature layer data; the intermediate feature layer data is processed through the first branch (the part other than the shared parameter layer) of the pre-trained neural network model to obtain the first type of parameters; and the intermediate feature layer data is processed through the second branch (the part other than the shared parameter layer) of the neural network model to obtain the second type of parameters. In this way, the second branch of the neural network model can directly use the intermediate feature layer data obtained through the shared parameter layer; that is, the first branch and the second branch can reuse part of the calculation process, which can reduce the computational complexity and the storage space occupied.
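The computation reuse enabled by a shared parameter layer can be sketched with stand-in functions. This is only an illustration of the control flow, not the patent's actual network: all three functions below are made-up placeholders whose arithmetic has no meaning beyond showing that the shared step runs once and feeds both branches.

```python
# Illustrative sketch (stand-in functions, not the patent's network): a shared
# feature extractor computed once and reused by both parameter branches.

calls = {"shared": 0}

def shared_layer(image):
    """Stand-in for the shared parameter layer producing intermediate feature data."""
    calls["shared"] += 1
    return [2 * v for v in image]          # dummy "feature extraction"

def branch_global(features):
    return sum(features)                    # dummy "first type of parameters"

def branch_local(features):
    return [v + 1 for v in features]        # dummy "second type of parameters"

image = [1, 2, 3]
features = shared_layer(image)              # computed once...
params_global = branch_global(features)     # ...then reused by the first branch
params_local = branch_local(features)       # ...and by the second branch
```

Because `shared_layer` runs once instead of once per branch, the shared portion of the computation and its stored activations are not duplicated, which is the complexity and storage saving described above.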
The first type of parameters is used to perform global color processing on the image, where the global color processing includes at least one kind of global color processing. For example, the first type of parameters is used to perform automatic white balance processing and/or color correction processing on the image. Global color processing processes the entire image. Optionally, the first type of parameters may include M parameters corresponding to N kinds of global color processing, where M and N are both integers greater than or equal to 1. The relationship between the M parameters and the N kinds of global color processing may be one-to-one, one-to-many, or many-to-one, which is not specifically limited in the embodiments of this application. Taking the case where the first type of parameters is used to perform automatic white balance processing and/or color correction processing on the image as an example, the first type of parameters may include a first parameter corresponding to automatic white balance processing and/or a second parameter corresponding to color correction processing; the first type of parameters may also include a first parameter corresponding to automatic white balance processing and a third parameter corresponding to color correction processing; the first type of parameters may also include a fourth parameter and a fifth parameter corresponding to automatic white balance processing, and a sixth parameter corresponding to color correction processing, and so on. When the at least one kind of global color processing corresponds to multiple parameters, the embodiments of this application do not limit the execution order of the at least one kind of global color processing. Optionally, the first type of parameters may be in matrix form. Taking the case where the first type of parameters is used to perform automatic white balance processing and color correction processing on the image as an example, the first type of parameters includes a matrix for automatic white balance processing and a matrix for color correction processing.
When the first type of parameters is in matrix form, performing color processing on the image to be processed according to the first type of parameters may be performing matrix multiplication of the first type of parameters and the image to be processed.
For example, automatic white balance processing may be performed on the image to be processed according to the following formula:
(R', G', B')^T = diag(a, b, c) * (R, G, B)^T, that is, R' = a*R, G' = b*G, B' = c*B
where a, b, and c may be determined by the neural network model; R', G', and B' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the image after automatic white balance processing; and R, G, and B are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the image before automatic white balance processing. In the embodiments of this application, R', G', and B' are uniformly used to denote the value of color channel R, the value of color channel G, and the value of color channel B of an image after one or more kinds of global color processing, and R, G, and B denote the value of color channel R, the value of color channel G, and the value of color channel B of the image before the one or more kinds of color processing; this will not be explained again below.
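The per-channel gain form of automatic white balance can be shown on a tiny image. The gain values and pixel values in this sketch are assumptions chosen so a grey patch becomes neutral; in the patent, a, b, and c would come from the neural network model.

```python
# Illustrative sketch (assumed gain values, not from the patent): applying
# per-channel white-balance gains a, b, c, i.e. the diagonal matrix
# multiplication R' = a*R, G' = b*G, B' = c*B, per pixel of an RGB image.

def apply_awb(image, a, b, c):
    """Multiply each pixel's (R, G, B) by the diagonal gains (a, b, c)."""
    return [[(a * r, b * g, c * bl) for (r, g, bl) in row] for row in image]

# One 1x2 test image with a bluish cast; gains chosen so grey becomes neutral.
image = [[(0.2, 0.4, 0.8), (0.1, 0.2, 0.4)]]
balanced = apply_awb(image, a=2.0, b=1.0, c=0.5)
# balanced[0][0] is (0.4, 0.4, 0.4): the grey patch is now neutral
```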
For example, color correction processing may be performed on the image to be processed according to the following formula:
(R', G', B')^T = M * (R, G, B)^T, where M is a 3*3 color correction matrix
where R', G', and B' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the image after color correction processing, and R, G, and B are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the image before color correction processing.
The embodiments of this application may also perform color correction processing using quadratic terms, cubic terms, square-root terms, and the like, as follows:
ρ_{2,3} = (R, G, B, R², G², B², RG, GB, RB)^T
Figure PCTCN2019083693-appb-000004
Figure PCTCN2019083693-appb-000005
where R, G, and B are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the image before color correction processing, and T denotes transposition. Corresponding to the above formats, the matrix used for color correction also takes different formats. Taking a color correction matrix with quadratic terms as an example, color correction processing may be performed on the image to be processed according to the following formula, in which case the matrix M used for color correction may be a 3*10 matrix:
Figure PCTCN2019083693-appb-000006
where R', G', and B' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the image after color correction processing, and R, G, and B are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the image before color correction processing.
Understandably, the above at least one kind of global color processing may also correspond to only one matrix, which is not specifically limited in the embodiments of this application.
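The linear 3*3 color correction case can be sketched per pixel. The matrix entries below are assumed example values, not values from the patent; they follow the common convention that each row of a color correction matrix sums to 1 so that neutral grey is preserved.

```python
# Illustrative sketch (assumed matrix values, not from the patent): applying a
# 3x3 color correction matrix M to one RGB pixel, (R', G', B')^T = M (R, G, B)^T.

def apply_ccm(pixel, m):
    """Multiply a 3x3 matrix by an (R, G, B) column vector."""
    return tuple(sum(m[i][j] * pixel[j] for j in range(3)) for i in range(3))

ccm = [
    [1.5, -0.3, -0.2],   # each row sums to 1.0 (assumed example values)
    [-0.2, 1.6, -0.4],
    [-0.1, -0.5, 1.6],
]
grey = (0.5, 0.5, 0.5)
corrected = apply_ccm(grey, ccm)   # grey stays grey because the rows sum to 1
```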
The second type of parameters is used to perform local color processing on the image, where the local color processing includes at least one kind of local color processing. Local color processing processes a local part or portion of the image. For example, the second type of parameters is used to perform color enhancement or color rendering processing on a local part of the image. Likewise, the relationship between the second type of parameters and the at least one kind of local color processing may be one-to-one, one-to-many, or many-to-one, which is not specifically limited in the embodiments of this application. When the at least one kind of local color processing corresponds to multiple parameters, the embodiments of this application do not limit the execution order of the at least one kind of local color processing. Optionally, the second type of parameters may be filter function parameters, color adjustment coefficients, or the like.
The embodiments of this application do not specifically limit the order in which the global color processing and the local color processing are executed. Taking performing global color processing first, with the first type of parameters in matrix form, as an example: matrix multiplication is performed on the image to be processed and the first type of parameters to obtain a first image; and local color processing is performed on the first image according to the second type of parameters to obtain a second image.
There are many ways to perform local color processing on the image to be processed according to the second type of parameters, which are not specifically limited in the embodiments of this application. As an example, the difference between the value of a color channel of the first image and the value of the brightness channel is calculated, the difference is adjusted according to the second type of parameters, and the adjusted difference is added to the value of the brightness channel of the first image to obtain the value of the color channel of the second image.
There are many ways to adjust the difference according to the second type of parameters, which are not specifically limited in the embodiments of this application. For example, the second type of parameters may be multiplied by the difference, the second type of parameters and the difference may be input to a preconfigured function, the second type of parameters may be added to the difference, and so on.
The embodiments of this application do not specifically limit the image format of the first image; for example, it may be a color RGB format, a YUV format, or the like. Taking the first image in color RGB format as an example, the second type of parameters may include a color processing coefficient of color channel R, a color processing coefficient of color channel G, and a color processing coefficient of color channel B, and local color adjustment may be performed on the first image according to the following formula:
R''' = Y'' + beta1*(R'' - Y'')
G''' = Y'' + beta2*(G'' - Y'')
B''' = Y'' + beta3*(B'' - Y'')
where Y'' is the value of the brightness channel of the first image; R''', G''', and B''' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the second image; R'', G'', and B'' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the first image; and beta1, beta2, and beta3 are respectively the color processing coefficient of color channel R, the color processing coefficient of color channel G, and the color processing coefficient of color channel B.
Y'' is generally obtained through Y'' = a*R'' + b*G'' + c*B'', where R'', G'', and B'' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the image before the local color processing (that is, the first image).
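The local color step can be sketched on a single pixel. The luma weights (a, b, c) = (0.299, 0.587, 0.114) are a common choice and only an assumption here, since the patent leaves a, b, and c unspecified; the beta values are likewise made up.

```python
# Illustrative sketch (assumed luma weights and betas, not from the patent):
# the local color step R''' = Y'' + beta1*(R'' - Y''), applied per channel.

def enhance_pixel(rgb, betas, luma=(0.299, 0.587, 0.114)):
    """Scale each channel's distance from the brightness value Y'' by its beta."""
    y = sum(w * v for w, v in zip(luma, rgb))            # Y'' = a*R + b*G + c*B
    return tuple(y + beta * (v - y) for beta, v in zip(betas, rgb))

pixel = (0.8, 0.4, 0.2)
same = enhance_pixel(pixel, (1.0, 1.0, 1.0))      # betas of 1 leave the pixel unchanged
punchier = enhance_pixel(pixel, (1.5, 1.5, 1.5))  # betas > 1 push channels away from Y''
```

With all betas equal to 1 the formula returns the input unchanged, which makes the identity a convenient sanity check when tuning these coefficients.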
In some embodiments, before the color processing is performed on the image to be processed according to the first type of parameters and the second type of parameters, some basic processing, such as denoising and demosaicing, may also be performed on the image to be processed.
Figure 3 is a specific example of an image processing method according to an embodiment of this application. It should be understood that Figure 3 is merely exemplary and is only intended to help those skilled in the art understand the embodiments of this application, rather than to limit the embodiments of this application to the illustrated specific scenario. As shown in Figure 3, a raw image is processed by demosaicing, denoising, and the like to obtain a linear RGB image; at the same time, the raw image enters a pre-trained neural network model, and after processing by the neural network model, a global color correction matrix M and a local color processing coefficient beta are obtained, where beta can be divided into three channels R, G, and B and has the same size as the raw image. The linear RGB image is processed by the global color matrix M to obtain an R''G''B'' image, which is the result of global color correction (for example, automatic white balance and/or color correction); local color processing is then performed to obtain an R'''G'''B''' image after local color processing (for example, color rendering and/or color enhancement).
The above image color processing method attends to both global color processing and local color processing, and uses the same neural network model to complete all color processing from the sensor's raw image to the final image. Since the adjustment parameters used by each color processing are all obtained from the raw image, the problem of error accumulation can be avoided in the first place; moreover, since the raw image preserves the information of the image to the greatest extent, the obtained adjustment parameters are more accurate.
Figures 4 to 6 show three structural forms of the neural network model of the embodiments of this application. It should be understood that Figures 4 to 6 are merely exemplary and are only intended to help those skilled in the art understand the embodiments of this application, rather than to limit the embodiments of this application to the illustrated specific scenarios. The neural network model of the embodiments of this application may also take other structural forms, as long as the method of the embodiments of this application can be implemented.
The neural network model 400 in Figure 4 includes max pooling 401, convolution 402, deconvolution 403, connection (connect) 404, global pooling 405, full connection (full connect) 406, reshaping (reshape) 407, and other processing.
Convolution 402 in image processing acts as a filter that extracts specific information from the input image matrix. A convolution layer includes multiple convolution operators; a convolution operator, also called a kernel, is essentially a weight matrix, which is usually predefined. During the convolution operation on an image, the weight matrix is usually applied to the input image along the horizontal direction one pixel at a time (or two pixels at a time, or three pixels at a time, and so on, where the number of pixels skipped depends on the value of the stride), thereby extracting specific features from the image. Deconvolution 403, also called transposed convolution, is the inverse process of convolution 402.
After convolution 402, pooling often needs to be introduced periodically: one convolution layer may be followed by one pooling layer, or multiple convolution layers may be followed by one or more pooling layers. In image processing, the sole purpose of a pooling layer is to reduce the spatial size of the image. Pooling may include average pooling using an average pooling operator and/or max pooling 401 using a max pooling operator, for sampling the input image to obtain an image of smaller size. The average pooling operator computes the average of the pixel values within a specific range of the image as the average pooling result. The max pooling operator takes the pixel with the largest value within a specific range as the max pooling result. The embodiments of this application adopt max pooling 401.
The first branch of the neural network model 400 starts from 4 layers of image data of size M*N, which are processed by convolution 402 and max pooling 401 to obtain 512 layers of M/16*N/16 image data (that is, intermediate feature layer data), and further processed by global pooling 405, full connection 406, reshaping 407, and so on to finally obtain the first type of parameters. The second branch starts from 4 layers of image data of size M*N, which are processed by convolution 402 and max pooling 401 to obtain 512 layers of M/16*N/16 image data (that is, intermediate feature layer data), and further processed by deconvolution 403, convolution 402, and connection 404 to finally obtain the second type of parameters. In this way, the inputs of the first branch and the second branch are both 4 layers of image data of size M*N.
Optionally, the first branch and the second branch of the neural network model 400 can share the part that starts from the 4 layers of M*N image data and obtains, through convolution 402 and max pooling 401, the 512 layers of M/16*N/16 image data (that is, the intermediate feature layer data).
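The 16x spatial reduction from M*N to M/16*N/16 is what four successive stride-2 max-pooling steps produce. The sketch below illustrates only that downsampling arithmetic on a made-up 32x32 grid; it is not the patent's network and ignores the convolution layers and channel counts.

```python
# Illustrative sketch (made-up input, not the patent's network): four
# successive 2x2 max-pooling steps with stride 2 reduce each spatial
# dimension by a factor of 16, e.g. M*N down to M/16 * N/16.

def max_pool_2x2(grid):
    """2x2 max pooling with stride 2 on a 2D list (even dimensions assumed)."""
    return [
        [max(grid[i][j], grid[i][j + 1], grid[i + 1][j], grid[i + 1][j + 1])
         for j in range(0, len(grid[0]), 2)]
        for i in range(0, len(grid), 2)
    ]

grid = [[r * 32 + c for c in range(32)] for r in range(32)]  # 32x32 input
for _ in range(4):                                           # four halvings
    grid = max_pool_2x2(grid)

size = (len(grid), len(grid[0]))   # (2, 2), i.e. 32/16 by 32/16
```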
The neural network model 500 in Figure 5 includes max pooling 501, convolution 502, tiling 503, connection (connect) 504, global pooling 505, full connection (full connect) 506, reshaping (reshape) 507, and other processing. For the function of each kind of processing, refer to the related description of Figure 4; details are not repeated here.
The first branch of the neural network model 500 starts from 4 layers of image data of size M*N, which are processed by convolution 502, max pooling 501, and global pooling 505 to obtain 512 layers of 1*1 image data (that is, intermediate feature layer data), and further processed by full connection 506, reshaping 507, and so on to finally obtain the first type of parameters. The second branch starts from 4 layers of image data of size M*N, which are processed by convolution 502, max pooling 501, and global pooling 505 to obtain 512 layers of 1*1 image data (that is, intermediate feature layer data), and further processed by tiling 503 to obtain 512 layers of M*N image data; at the same time, the second branch also performs, on the 4 layers of M*N image data, convolution 502 that does not change the image size to obtain 512 layers of M*N image data. The two parts of 512-layer M*N image data are connected 504 to obtain 1024 layers of M*N image data, which are further processed by convolution 502 to finally obtain the second type of parameters. In this way, the inputs of the first branch and the second branch are both 4 layers of image data of size M*N.
Optionally, the first branch and the second branch of the neural network model 500 can share the part that starts from the 4 layers of M*N image data and obtains, through convolution 502, max pooling 501, and global pooling 505, the 512 layers of 1*1 image data (that is, the intermediate feature layer data).
The neural network model 600 in Figure 6 includes max pooling 601, convolution 602, global pooling 605, full connection (full connect) 606, reshaping (reshape) 607, and other processing. For the function of each kind of processing, refer to the related description of Figure 4; details are not repeated here.
The first branch of the neural network model 600 starts from 4 layers of image data of size M*N, which are processed by convolution 602 to obtain 32 layers of M*N image data (that is, intermediate feature layer data), and further processed by convolution 602, max pooling 601, global pooling 605, full connection 606, reshaping 607, and so on to finally obtain the first type of parameters. The second branch starts from 4 layers of image data of size M*N, which are processed by convolution 602 to obtain 32 layers of M*N image data (that is, intermediate feature layer data), and further processed by convolution 602 to obtain the second type of parameters. In this way, the inputs of the first branch and the second branch are both 4 layers of image data of size M*N.
Optionally, the first branch and the second branch of the neural network model 600 can share the part that starts from the 4 layers of M*N image data and obtains, through convolution 602, the 32 layers of M*N image data (that is, the intermediate feature layer data).
The apparatus or device embodiments of this application are described below with reference to Figures 7 to 9.
Figure 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of this application. As shown in Figure 7, the apparatus 700 includes an acquisition module 710 and a processing module 720.
The acquisition module 710 is configured to acquire an image to be processed.
The processing module 720 is configured to process the image to be processed through a first branch of a pre-trained neural network model to obtain a first type of parameters, where the first type of parameters is used to perform global color processing on an image.
The processing module 720 is further configured to process the image to be processed through a second branch of the neural network model to obtain a second type of parameters, where the second type of parameters is used to perform local color processing on an image.
The processing module 720 is further configured to perform color processing on the image to be processed according to the first type of parameters and the second type of parameters to obtain a color-processed image.
Optionally, the image to be processed is a raw image.
Optionally, the global color processing includes automatic white balance and/or color correction, and the local color processing includes color rendering and/or color enhancement.
Optionally, the first branch and the second branch share a shared parameter layer of the neural network model.
Understandably, when both the first type of parameters and the second type of parameters require intermediate feature layer data, the layers of the first branch that produce the intermediate feature layer data and the layers of the second branch that produce the intermediate feature layer data can share structural parameters. For example, the image to be processed is processed through the shared parameter layer of the pre-trained neural network model to obtain the intermediate feature layer data; the intermediate feature layer data is processed through the first branch (excluding the shared parameter layer part) of the pre-trained neural network model to obtain the first type of parameters; and the intermediate feature layer data is processed through the second branch (excluding the shared parameter layer part) of the neural network model to obtain the second type of parameters.
Optionally, the first type of parameters is in matrix form; the processing module 720 is specifically configured to perform matrix multiplication on the image to be processed and the first type of parameters to obtain a first image, and to perform local color processing on the first image according to the second type of parameters to obtain a second image.
Optionally, the processing module 720 is specifically configured to calculate the difference between the value of a color channel of the first image and the value of the brightness channel, adjust the difference according to the second type of parameters, and add the adjusted difference to the value of the brightness channel of the first image.
Optionally, the image format of the first image is a color RGB format, and the second type of parameters includes a color processing coefficient beta1 of color channel R, a color processing coefficient beta2 of color channel G, and a color processing coefficient beta3 of color channel B; the processing module 720 is specifically configured to perform local color adjustment on the first image according to the formula
R''' = Y'' + beta1*(R'' - Y'')
G''' = Y'' + beta2*(G'' - Y'')
B''' = Y'' + beta3*(B'' - Y'')
where Y'' is the value of the brightness channel of the first image; R''', G''', and B''' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the second image; and R'', G'', and B'' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the first image.
The acquisition module 710 may be implemented by a transceiver or a processor. The processing module 720 may be implemented by a processor. For the specific functions and beneficial effects of the acquisition module 710 and the processing module 720, refer to the method shown in Figure 2; details are not repeated here.
Figure 8 is a schematic structural diagram of an image processing apparatus according to another embodiment of this application. As shown in Figure 8, the apparatus 800 may include a processor 820 and a memory 830.
Only one memory and one processor are shown in Figure 8. In an actual image processing apparatus product, there may be one or more processors and one or more memories. The memory may also be referred to as a storage medium, a storage device, or the like. The memory may be provided independently of the processor or be integrated with the processor, which is not limited in the embodiments of this application.
The processor 820 and the memory 830 communicate with each other through internal connection paths to transfer control and/or data signals.
Specifically, the processor 820 is configured to acquire an image to be processed; to process the image to be processed through a first branch of a pre-trained neural network model to obtain a first type of parameters, where the first type of parameters is used to perform global color processing on an image; to process the image to be processed through a second branch of the neural network model to obtain a second type of parameters, where the second type of parameters is used to perform local color processing on an image; and to perform color processing on the image to be processed according to the first type of parameters and the second type of parameters to obtain a color-processed image.
The memory 830 described in the embodiments of this application is configured to store computer instructions and parameters required for the operation of the processor.
For the specific working process and beneficial effects of the apparatus 800, refer to the description in the embodiment shown in Figure 2; details are not repeated here.
An embodiment of this application further provides an electronic device, which may be a terminal device. The device can be used to execute the functions/steps in the above method embodiments.
Figure 9 is a schematic structural diagram of an electronic device according to an embodiment of this application. As shown in Figure 9, the electronic device 900 includes a processor 910 and a transceiver 920. Optionally, the electronic device 900 may further include a memory 930. The processor 910, the transceiver 920, and the memory 930 can communicate with each other through internal connection paths to transfer control and/or data signals; the memory 930 is configured to store a computer program, and the processor 910 is configured to call and run the computer program from the memory 930.
Optionally, the electronic device 900 may further include an antenna 940, configured to send out a wireless signal output by the transceiver 920.
The above processor 910 and memory 930 may be combined into one processing device, although more commonly they are components independent of each other; the processor 910 is configured to execute the program code stored in the memory 930 to implement the above functions. In specific implementations, the memory 930 may also be integrated in the processor 910 or be independent of the processor 910. The processor 910 may correspond to the processor 820 in the apparatus 800 in Figure 8.
In addition, to make the functions of the electronic device 900 more complete, the electronic device 900 may further include one or more of an input unit 960, a display unit 970, an audio circuit 980, a camera 990, and a sensor 901; the audio circuit may further include a speaker 982, a microphone 984, and so on. The display unit 970 may include a display screen.
Optionally, the above electronic device 900 may further include a power supply 950, configured to supply power to various components or circuits in the terminal device.
It should be understood that the electronic device 900 shown in Figure 9 can implement the processes of the method embodiments shown in Figures 2 to 6. The operations and/or functions of the modules in the electronic device 900 are each intended to implement the corresponding flows in the above method embodiments. For details, refer to the description in the above method embodiments; to avoid repetition, a detailed description is appropriately omitted here.
The processor described in the embodiments of this application may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above methods can be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The processor described in the embodiments of this application may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logical block diagrams disclosed in the embodiments of this application can be implemented or executed. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of this application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the instructions in the memory and completes the steps of the above methods in combination with its hardware.
In the various embodiments of this application, the sequence numbers of the processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
In the above embodiments, implementation may be wholly or partly by software, hardware, firmware, or any combination thereof. When software is used for implementation, implementation may be wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of this application.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the above method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application essentially, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (17)

  1. An image processing method, characterized by comprising:
    acquiring an image to be processed;
    processing the image to be processed through a first branch of a pre-trained neural network model to obtain a first type of parameters, wherein the first type of parameters is used to perform global color processing on an image;
    processing the image to be processed through a second branch of the neural network model to obtain a second type of parameters, wherein the second type of parameters is used to perform local color processing on an image; and
    performing color processing on the image to be processed according to the first type of parameters and the second type of parameters to obtain a color-processed image.
  2. The method according to claim 1, characterized in that the image to be processed is a raw image.
  3. The method according to claim 1 or 2, characterized in that the global color processing comprises automatic white balance and/or color correction, and the local color processing comprises color rendering and/or color enhancement.
  4. The method according to any one of claims 1 to 3, characterized in that the first branch and the second branch share a shared parameter layer of the neural network model.
  5. The method according to any one of claims 1 to 4, characterized in that the first type of parameters is in matrix form;
    the performing color processing on the image to be processed according to the first type of parameters and the second type of parameters comprises:
    performing matrix multiplication on the image to be processed and the first type of parameters to obtain a first image; and
    performing local color processing on the first image according to the second type of parameters to obtain a second image.
  6. The method according to claim 5, characterized in that the performing local color processing on the first image according to the second type of parameters comprises:
    calculating the difference between the value of a color channel of the first image and the value of the brightness channel;
    adjusting the difference according to the second type of parameters; and
    adding the adjusted difference to the value of the brightness channel of the first image.
  7. The method according to claim 6, characterized in that the image format of the first image is a color RGB format, and the second type of parameters comprises a color processing coefficient beta1 of color channel R, a color processing coefficient beta2 of color channel G, and a color processing coefficient beta3 of color channel B;
    the performing local color processing on the first image according to the second type of parameters comprises:
    performing local color adjustment on the first image according to the formula
    R''' = Y'' + beta1*(R'' - Y'')
    G''' = Y'' + beta2*(G'' - Y'')
    B''' = Y'' + beta3*(B'' - Y'')
    wherein Y'' is the value of the brightness channel of the first image; R''', G''', and B''' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the second image; and R'', G'', and B'' are respectively the value of color channel R, the value of color channel G, and the value of color channel B of the first image.
  8. An image processing apparatus, comprising:
    an obtaining module, configured to obtain a to-be-processed image; and
    a processing module, configured to process the to-be-processed image through a first branch of a pre-trained neural network model to obtain first-type parameters, wherein the first-type parameters are used for global color processing of an image;
    wherein the processing module is further configured to process the to-be-processed image through a second branch of the neural network model to obtain second-type parameters, wherein the second-type parameters are used for local color processing of an image; and
    the processing module is further configured to perform color processing on the to-be-processed image according to the first-type parameters and the second-type parameters.
  9. The apparatus according to claim 8, wherein the to-be-processed image is a raw image.
  10. The apparatus according to claim 8 or 9, wherein the global color processing comprises automatic white balance and/or color correction, and the local color processing comprises color rendering and/or color enhancement.
  11. The apparatus according to any one of claims 8 to 10, wherein the first branch and the second branch share a shared parameter layer of the neural network model.
  12. The apparatus according to any one of claims 8 to 11, wherein the first-type parameters are in matrix form; and
    the processing module is specifically configured to perform matrix multiplication on the to-be-processed image and the first-type parameters to obtain a first image, and to perform local color processing on the first image according to the second-type parameters to obtain a second image.
  13. The apparatus according to claim 12, wherein
    the processing module is specifically configured to calculate differences between the values of the color channels of the first image and the value of the luminance channel, adjust the differences according to the second-type parameters, and add the adjusted differences to the value of the luminance channel of the first image.
  14. The apparatus according to claim 13, wherein the image format of the first image is the color RGB format, and the second-type parameters comprise a color processing coefficient beta1 of color channel R, a color processing coefficient beta2 of color channel G, and a color processing coefficient beta3 of color channel B; and
    the processing module is specifically configured to perform local color adjustment on the first image according to the formulas
    R''' = Y'' + beta1 × (R'' − Y'')
    G''' = Y'' + beta2 × (G'' − Y'')
    B''' = Y'' + beta3 × (B'' − Y'')
    wherein Y'' is the value of the luminance channel of the first image; R''', G''', and B''' are respectively the values of color channels R, G, and B of the second image; and R'', G'', and B'' are respectively the values of color channels R, G, and B of the first image.
  15. A chip, wherein the chip is connected to a memory and is configured to read and execute a software program stored in the memory, to implement the method according to any one of claims 1 to 7.
  16. An electronic device, comprising a processor and a memory, configured to perform the method according to any one of claims 1 to 7.
  17. A computer-readable storage medium, comprising instructions which, when run on an electronic device, cause the electronic device to perform the method according to any one of claims 1 to 7.
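The color-processing steps of claims 5 to 7 can be sketched in a few lines, assuming a pre-trained network has already produced the two parameter sets: a 3×3 global color matrix (the first-type parameters) and per-channel coefficients beta1 to beta3 (the second-type parameters). This is an illustrative sketch only; the BT.601 luminance weights are an assumption, since the claims do not fix a luminance formula.

```python
import numpy as np

def local_color_adjust(img, betas):
    """Claims 6-7: take the difference between each color channel and the
    luminance channel, scale it by that channel's coefficient, and add the
    scaled difference back to the luminance value."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # Y'' (assumed BT.601 weights)
    out = np.empty_like(img)
    for c, beta in enumerate(betas):        # beta may be a scalar, or an
        diff = img[..., c] - y              # H x W map of local gains
        out[..., c] = y + beta * diff       # e.g. R''' = Y'' + beta1*(R''-Y'')
    return out

def color_process(img, color_matrix, betas):
    """Claim 5: global color processing as a per-pixel 3x3 matrix multiply
    (white balance / color correction), then local color processing."""
    first_image = img @ color_matrix.T        # first-type parameters (global)
    return local_color_adjust(first_image, betas)  # second-type parameters
```

With all coefficients equal to 1 the local step is an identity; with coefficients of 0 every channel collapses to the luminance Y'', yielding a grayscale image; values above 1 increase saturation, which is consistent with the color-enhancement role the claims assign to the second-type parameters.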
PCT/CN2019/083693 2019-04-22 2019-04-22 Image processing method, apparatus, and electronic device WO2020215180A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/083693 WO2020215180A1 (zh) 2019-04-22 2019-04-22 Image processing method, apparatus, and electronic device
CN201980079484.4A CN113168673A (zh) 2019-04-22 2019-04-22 Image processing method, apparatus, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/083693 WO2020215180A1 (zh) 2019-04-22 2019-04-22 Image processing method, apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2020215180A1 true WO2020215180A1 (zh) 2020-10-29

Family

ID=72940548

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/083693 WO2020215180A1 (zh) 2019-04-22 2019-04-22 Image processing method, apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN113168673A (zh)
WO (1) WO2020215180A1 (zh)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116917934A (zh) * 2021-08-31 2023-10-20 华为技术有限公司 Image processing method and apparatus, and vehicle
CN115190226B (zh) * 2022-05-31 2024-04-16 华为技术有限公司 Parameter adjustment method, neural network model training method, and related apparatus
CN116721038A (zh) * 2023-08-07 2023-09-08 荣耀终端有限公司 Color correction method, electronic device, and storage medium

Citations (7)

Publication number Priority date Publication date Assignee Title
US20150063685A1 (en) * 2013-08-30 2015-03-05 National Central University Image distortion correction method and image distortion correction device using the same
CN106412547A (zh) * 2016-08-29 2017-02-15 厦门美图之家科技有限公司 Convolutional neural network-based image white balance method, apparatus, and computing device
CN106934426A (zh) * 2015-12-29 2017-07-07 三星电子株式会社 Method and device for a neural network based on image signal processing
CN107145902A (zh) * 2017-04-27 2017-09-08 厦门美图之家科技有限公司 Convolutional neural network-based image processing method, apparatus, and mobile terminal
CN107578390A (zh) * 2017-09-14 2018-01-12 长沙全度影像科技有限公司 Method and apparatus for image white balance correction using a neural network
CN108364267A (zh) * 2018-02-13 2018-08-03 北京旷视科技有限公司 Image processing method, apparatus, and device
US20190045163A1 (en) * 2018-10-02 2019-02-07 Intel Corporation Method and system of deep learning-based automatic white balancing

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP2004048740A (ja) * 2002-06-25 2004-02-12 Texas Instruments Inc Automatic white balancing through luminance-score automatic exposure using neural network mapping
US8660355B2 (en) * 2010-03-19 2014-02-25 Digimarc Corporation Methods and systems for determining image processing operations relevant to particular imagery
JP6538176B2 (ja) * 2015-03-18 2019-07-03 ホアウェイ・テクノロジーズ・カンパニー・リミテッド Image processing apparatus and method for color balancing


Cited By (3)

Publication number Priority date Publication date Assignee Title
WO2022194345A1 (en) * 2021-03-16 2022-09-22 Huawei Technologies Co., Ltd. Modular and learnable image signal processor
CN113658043A (zh) * 2021-07-28 2021-11-16 上海智砹芯半导体科技有限公司 Image processing method and apparatus, electronic device, and readable storage medium
WO2023005115A1 (zh) * 2021-07-28 2023-02-02 爱芯元智半导体(上海)有限公司 Image processing method, image processing apparatus, electronic device, and readable storage medium

Also Published As

Publication number Publication date
CN113168673A (zh) 2021-07-23

Similar Documents

Publication Publication Date Title
WO2021051996A1 (zh) Image processing method and apparatus
WO2020215180A1 (zh) Image processing method, apparatus, and electronic device
US10916036B2 (en) Method and system of generating multi-exposure camera statistics for image processing
WO2021057474A1 (zh) Subject focusing method and apparatus, electronic device, and storage medium
EP3308534A1 (en) Color filter array scaler
US20110205389A1 (en) Methods and Systems for Automatic White Balance
US20140078247A1 (en) Image adjuster and image adjusting method and program
WO2023010754A1 (zh) Image processing method and apparatus, terminal device, and storage medium
US10600170B2 (en) Method and device for producing a digital image
WO2020011112A1 (zh) Image processing method and system, readable storage medium, and terminal
US9389678B2 (en) Virtual image signal processor
WO2019104047A1 (en) Global tone mapping
US20150365612A1 (en) Image capture apparatus and image compensating method thereof
WO2024027287A9 (zh) Image processing system and method, computer-readable medium, and electronic device
WO2023010750A1 (zh) Image color mapping method and apparatus, terminal device, and storage medium
US20140168452A1 (en) Photographing apparatus, method of controlling the same, and non-transitory computer-readable storage medium for executing the method
CN104469191A (zh) Image noise reduction method and apparatus
US20240129446A1 (en) White Balance Processing Method and Electronic Device
US9654756B1 (en) Method and apparatus for interpolating pixel colors from color and panchromatic channels to color channels
CN110807735A (zh) Image processing method and apparatus, terminal device, and computer-readable storage medium
CN115802183B (zh) Image processing method and related device
WO2023010751A1 (zh) Information compensation method, apparatus, and device for highlight regions of an image, and storage medium
CN114331916A (zh) Image processing method and electronic device
CN110572585A (zh) Image processing method and apparatus, storage medium, and electronic device
WO2021179142A1 (zh) Image processing method and related apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19926604

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19926604

Country of ref document: EP

Kind code of ref document: A1