CN111223061A - Image correction method, correction device, terminal device and readable storage medium - Google Patents

Info

Publication number
CN111223061A
Authority
CN
China
Prior art keywords
image
hdr
hdr image
feature map
cnn
Prior art date
Legal status
Pending
Application number
CN202010013217.1A
Other languages
Chinese (zh)
Inventor
何慕威
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010013217.1A priority Critical patent/CN111223061A/en
Publication of CN111223061A publication Critical patent/CN111223061A/en
Pending legal-status Critical Current

Classifications

    • G06T 5/80 Geometric correction (under G06T 5/00 Image enhancement or restoration)
    • G06N 3/045 Combinations of networks (under G06N 3/04 Neural network architecture)
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20208 High dynamic range [HDR] image processing (under G06T 2207/20172 Image enhancement details)


Abstract

The application provides an image correction method, a correction device, a terminal device and a readable storage medium. The method comprises the following steps: obtaining a High Dynamic Range (HDR) image containing artifacts, and a multi-frame Low Dynamic Range (LDR) image for synthesizing the HDR image; selecting one frame of image from the multi-frame LDR image as a reference image; inputting the HDR image and the reference image into a trained first Convolutional Neural Network (CNN) to obtain a feature map output by the first CNN, wherein the feature map is used for removing artifacts in the HDR image; and removing the artifact in the HDR image according to the feature map to obtain a modified HDR image. According to the method and the device, when the HDR image with the artifact is acquired, the artifact in the HDR image can be eliminated to a certain extent.

Description

Image correction method, correction device, terminal device and readable storage medium
Technical Field
The present application belongs to the technical field of terminal devices, and in particular, to an image correction method, an image correction apparatus, a terminal device, and a computer-readable storage medium.
Background
At present, in order to obtain a High Dynamic Range (HDR) image, it is necessary to capture multiple frames of Low Dynamic Range (LDR) images with different exposure durations in time sequence, and then fuse these LDR frames to obtain the HDR image.
In the conventional HDR image acquisition method, multiple frames of different LDR images need to be acquired in time sequence; when the subject moves rapidly, artifacts may therefore appear in the resulting HDR image.
Therefore, how to remove the artifacts in the HDR image is a technical problem to be solved urgently at present.
Disclosure of Invention
In view of this, embodiments of the present application provide an image correction method, an image correction apparatus, a terminal device, and a computer-readable storage medium. When an HDR image with artifacts is acquired, the artifacts in the HDR image can be eliminated to some extent.
A first aspect of an embodiment of the present application provides an image correction method, including:
acquiring a High Dynamic Range (HDR) image containing an artifact and a multi-frame Low Dynamic Range (LDR) image for synthesizing the HDR image;
selecting a frame of image from the multi-frame LDR image as a reference image;
inputting the HDR image and the reference image into a trained first Convolutional Neural Network (CNN) to obtain a feature map output by the first CNN, wherein the feature map is used for removing artifacts in the HDR image;
and removing the artifact in the HDR image according to the feature map to obtain a modified HDR image.
A second aspect of an embodiment of the present application provides an image correction apparatus, including:
the image acquisition module is used for acquiring a High Dynamic Range (HDR) image containing an artifact and a multi-frame Low Dynamic Range (LDR) image for synthesizing the HDR image;
a selecting module, configured to select one frame of image from the multiple frames of LDR images as a reference image;
a feature obtaining module, configured to input the HDR image and the reference image into a trained first convolutional neural network CNN to obtain a feature map output by the first CNN, where the feature map is used to remove artifacts in the HDR image;
and an artifact removing module, configured to remove the artifact in the HDR image according to the feature map to obtain a modified HDR image.
A third aspect of embodiments of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the image correction method according to the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program that, when executed by a processor, implements the steps of the image correction method according to the first aspect.
A fifth aspect of embodiments of the present application provides a computer program product comprising a computer program that, when executed by one or more processors, performs the steps of the image correction method as described in the first aspect above.
In view of the above, the present application provides an image correction method. After the HDR image containing the artifact is obtained, an LDR image is first selected from the LDR images as a reference image; secondly, the HDR image and the reference image are input into a first CNN to obtain a feature map for removing artifacts in the HDR image; finally, the feature map is used to remove the artifacts in the HDR image.
Since the HDR image contains artifact information while the reference image does not, the information required to correct an artifact in the HDR image into a normal image can be determined from the two images together. Moreover, because convolutional neural network models have a strong ability to learn image information, the applicant determined the feature map required for artifact removal with a convolutional neural network model, so as to obtain a corrected HDR image with a good effect.
Based on the above analysis, it can be seen that the technical solution defined in the present application can eliminate the artifact in the HDR image to a certain extent when the obtained HDR image has the artifact.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application.
Fig. 1 is a schematic flowchart of an image correction method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another image correction method according to the second embodiment of the present application;
fig. 3 is a schematic diagram of a process of training a first CNN according to a second embodiment of the present application;
fig. 4 is a schematic structural diagram of an image correction apparatus according to a third embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application;
Detailed Description
In the following description, for purposes of explanation and not limitation, specific technical details are set forth, such as particular examples, in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The method provided by the embodiment of the present application may be applied to a terminal device, and for example, the terminal device includes but is not limited to: smart phones, tablet computers, notebooks, desktop computers, cloud servers, and the like.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Example one
The following describes an image correction method provided in the first embodiment of the present application. The method is applied to a terminal device (for example, a smartphone). Referring to fig. 1, the method includes the following steps:
in step S101, a high dynamic range HDR image containing an artifact and a multi-frame low dynamic range LDR image for synthesizing the HDR image are acquired;
in the embodiment of the present application, the terminal device may obtain the artifact-containing HDR image, and the LDR images used to synthesize it, directly from user input.
Alternatively, the HDR image may be automatically generated after each LDR image is acquired, that is, the step S101 may include the following steps:
step S1011, acquiring multiple frames of LDR images, and determining, from the multiple frames of LDR images, whether the movement amplitude of the shooting subject is greater than a preset amplitude;
and step S1012, if it is greater than the preset amplitude, determining the image obtained by image fusion of the multi-frame LDR images as the HDR image containing the artifact.
That is, after acquiring multiple frames of LDR images, it is necessary to determine whether the movement amplitude of the shooting subject is large (for example, the movement amplitude may be detected using an optical-flow method from the prior art). A large amplitude indicates that the subject is moving fast, so the HDR image obtained after image fusion may contain artifacts; in that case, the image obtained by fusing the LDR frames is directly determined to be the HDR image containing the artifact.
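The amplitude check of steps S1011 and S1012 can be sketched as follows. The patent points to an optical-flow method; this illustrative Python sketch substitutes a simpler proxy, the mean absolute difference between consecutive frames, with a hypothetical threshold (the function name and threshold value are assumptions, not from the patent):

```python
import numpy as np

def subject_motion_is_large(ldr_frames, threshold=0.05):
    """Decide whether the shooting subject's movement amplitude exceeds
    a preset amplitude. The patent suggests an optical-flow method; this
    sketch substitutes a simpler proxy, the mean absolute difference
    between consecutive frames (pixel values assumed in [0, 1])."""
    amplitudes = [
        float(np.mean(np.abs(a - b)))
        for a, b in zip(ldr_frames, ldr_frames[1:])
    ]
    return max(amplitudes) > threshold

# A static burst shows no motion; a burst with a large change does.
static_burst = [np.full((4, 4), 0.5) for _ in range(3)]
moving_burst = [np.zeros((4, 4)), np.ones((4, 4)), np.ones((4, 4))]
```

When this returns True, the fused result is treated as the artifact-containing HDR image of step S1012.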
As is readily understood by those skilled in the art, when the terminal device detects no substantial movement of the shooting subject, the image obtained by fusing the multi-frame LDR images can be output and stored directly: the HDR image obtained in this case contains no artifacts, so the artifact-elimination solution of the present application is unnecessary.
In step S102, selecting a frame from the multiple frames of LDR images as a reference image;
in this step, any frame could in principle be selected from the multi-frame LDR images as the reference image. However, since a subsequent step determines, based on the reference image, the feature map required to correct the artifact into a normal image, the reference image should be a frame in which the shooting subject and its surrounding area have good image quality. Generally, this is the frame with a normal exposure duration among the multi-frame LDR images.
For example, when an existing smartphone captures LDR images for HDR synthesis, it often captures 3 frames: the first captured image is a short-exposure image, the last is a long-exposure image, and the middle frame has a normal exposure duration; the middle frame may then be selected as the reference image.
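The reference-frame choice described above can be sketched as picking the frame whose exposure duration is the median of the burst. The function and parameter names are illustrative assumptions, not from the patent:

```python
def select_reference_frame(ldr_frames, exposure_times):
    """Select the normally exposed frame (median exposure duration) as
    the reference image. Assumes each captured frame has a known
    exposure time; names are illustrative."""
    order = sorted(range(len(exposure_times)), key=lambda i: exposure_times[i])
    middle = order[len(order) // 2]  # index of the median exposure
    return middle, ldr_frames[middle]

# For the 3-frame burst described above (short, normal, long exposure),
# the middle frame is chosen.
idx, reference = select_reference_frame(
    ["short", "normal", "long"], [1 / 100, 1 / 30, 1 / 8]
)
```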
In step S103, inputting the HDR image and the reference image into a trained first convolutional neural network CNN to obtain a feature map output by the first CNN, where the feature map is used to remove artifacts in the HDR image;
in this step S103, the feature map removes artifacts from the HDR image by means of a preset algorithm. The present embodiment does not specifically limit this "preset algorithm". In the second embodiment, the preset algorithm is specifically a dot product operation (for the specific definition of the "dot product operation", refer to the second embodiment), but it should be understood by those skilled in the art that the preset algorithm may also be an operation other than the dot product operation. After the type of preset algorithm is determined, the first CNN (Convolutional Neural Network) model can be trained so that the feature map output by the first CNN model eliminates artifacts in the image based on that preset algorithm; the specific training procedure for the first CNN model may likewise be found in the second embodiment.
Since the first CNN needs to focus on learning the image information of the artifact region, the first CNN may be a convolutional neural network based on an attention mechanism; however, other types of CNN can also achieve the technical effect, and the present application does not limit the specific choice of the first CNN.
In step S104, removing artifacts in the HDR image according to the feature map to obtain a modified HDR image;
since the feature map obtained in step S103 eliminates artifacts based on the preset algorithm, after the feature map is obtained, the HDR image is processed with that preset algorithm to obtain an HDR image with the artifact eliminated.
In addition, due to limitations in the number and quality of sample images, the first CNN alone may not completely eliminate the artifact in the HDR image, or may distort other, non-artifact areas while eliminating it, so the modified HDR image obtained in step S104 may not look good. Therefore, in order to further obtain an image the user desires, the modified HDR image may be input into another CNN (i.e., a second CNN) after step S104 and corrected again to obtain a final HDR image, thereby further improving the user experience.
According to the technical scheme provided by the embodiment of the application, the feature map capable of eliminating the artifact is acquired based on the HDR image containing the artifact and the reference image not containing the artifact, so that the artifact in the HDR image can be eliminated to a certain extent when the acquired HDR image has the artifact.
Example two
An embodiment of the present application provides another image correction method, referring to fig. 2, the image correction method includes:
in step S201, a high dynamic range HDR image containing an artifact and a multi-frame low dynamic range LDR image for synthesizing the HDR image are acquired;
in step S202, a frame is selected from the multiple frames of LDR images as a reference image;
the specific implementation of the steps S201 to S202 is completely the same as the steps S101 to S102 in the first embodiment, and specific reference may be made to the description of the first embodiment, which is not repeated herein.
In step S203, inputting the HDR image and the reference image into a trained first convolutional neural network CNN, and obtaining a feature map output by the first CNN, where the number of channels of the feature map is the same as the number of channels of the HDR image, and the size of the feature map is the same as the size of the HDR image, and the feature map is specifically used for removing artifacts in the HDR image through a dot product operation;
unlike the first embodiment, the second embodiment of the present application specifically defines the "predetermined algorithm" as the "dot product operation" (see the following description for the definition of the dot product operation described in the second embodiment of the present application).
In the second embodiment of the present application, the number of channels and the size of the feature map output by the first CNN are respectively the same as those of the HDR image, for example, if the size of the HDR image is 1280 × 1024 and the number of channels is 3 (usually, R channel, G channel, and B channel, respectively), the feature map output by the first CNN should also be 3 channels and the size should also be 1280 × 1024.
The following discusses a specific definition of the dot product operation described in the present application:
each channel of the modified HDR image with the artifact removed is calculated through the following formula (1):

H^R(i,j) = C(i,j) * H(i,j),  i = 1…M, j = 1…N   (1)

where C(i,j) is the pixel value at position (i,j) of a given channel of the feature map output by the trained first CNN (for example, the R channel), H(i,j) is the pixel value of the HDR image at position (i,j) of the same channel, H^R(i,j) is the pixel value of the modified HDR image at position (i,j) of that channel, and the size of the HDR image is M × N. Applying formula (1) to the other channels yields the information of the other channels of the modified HDR image, such as its G-channel and B-channel information. Once the information of the modified HDR image in every channel is known, the modified HDR image is obtained.
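With the image and feature map stored as (M, N, channels) arrays, formula (1) reduces to a single element-wise multiplication applied channel by channel; a minimal NumPy sketch:

```python
import numpy as np

def apply_feature_map(hdr, feature_map):
    """Formula (1) applied channel by channel:
    H^R(i, j) = C(i, j) * H(i, j). With both arrays stored as
    (M, N, channels), this is one element-wise multiplication."""
    assert hdr.shape == feature_map.shape
    return hdr * feature_map

# One pixel, three channels: a feature-map value of 1 keeps the pixel,
# a value other than 1 rescales it (this is how artifact regions are
# corrected).
hdr = np.array([[[2.0, 4.0, 6.0]]])
cmap = np.array([[[1.0, 0.5, 1.0]]])
corrected = apply_feature_map(hdr, cmap)
```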
Based on the above specific definition of the dot product operation, the following discusses a method for training the trained first CNN according to the second embodiment of the present application, as shown in fig. 3:
step 1, as shown in fig. 3, acquire n groups of sample images (n ≥ 2), where each group of sample images includes: an HDR sample image with an artifact, the multiple frames of LDR sample images used to generate that HDR sample image, and the sample feature map required to correct the artifact-containing HDR sample image into an HDR sample image with the artifact removed through the dot product operation specifically defined above. Those skilled in the art will appreciate that different groups of sample images contain different image content.
Step 2, as shown in fig. 3, for each group of sample images, selecting one frame of LDR sample image from the group of sample images as a reference sample image, inputting the reference sample image and the group of HDR sample images with artifacts into a first CNN, and comparing an output feature map output by the first CNN with sample feature maps in the group of sample images, and similarly, for other groups of sample images, performing the same operation;
step 3, determining a loss function of the first CNN based on the comparison result;
and step 4, adjust the parameters in the first CNN, and then return to step 2 until the value of the loss function of the first CNN meets the requirement.
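Steps 2 and 3 compare the network's output feature map against the ground-truth sample feature map to form the loss. The patent does not name the loss function; the sketch below assumes a simple mean-absolute-error (L1) comparison purely for illustration:

```python
import numpy as np

def feature_map_loss(output_map, sample_map):
    """Compare the CNN's output feature map with the group's sample
    feature map (step 3). The patent does not specify the loss; mean
    absolute error is assumed here for illustration only."""
    return float(np.mean(np.abs(output_map - sample_map)))

# Step 4 would adjust the CNN parameters and repeat until this value
# "meets the requirement", i.e. drops below some target.
perfect = feature_map_loss(np.ones((4, 4, 3)), np.ones((4, 4, 3)))
far_off = feature_map_loss(np.ones((4, 4, 3)), np.zeros((4, 4, 3)))
```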
The trained first CNN described in step S203 can be obtained through steps 1 to 4 above. In addition, in step 2, the process of inputting the reference sample image and the artifact-containing HDR sample image into the first CNN may be as follows: for each group of sample images, map the reference sample image of the group from the LDR domain to the HDR domain (for example, by gamma correction), channel-concatenate the mapped reference sample image with the artifact-containing HDR sample image of the group, and input the result into the first CNN. Also in step 2, the output feature map may be obtained by normalizing the initial feature map output by the first CNN (for example, with a Sigmoid function).
That is, the present application does not specifically limit the manner in which data is input into the first CNN, nor the manner in which the first CNN outputs the feature map in step 2. However, as those skilled in the art will readily understand, if during training the reference sample image is first mapped to the HDR domain, then channel-concatenated with the artifact-containing HDR sample image before being input into the first CNN, and the initial feature map output by the first CNN is normalized to obtain the output feature map, then after training is completed the trained first CNN must be used in the same way. That is, the step S203 includes: mapping the reference image of step S202 from the LDR domain to the HDR domain to obtain a mapped reference image; channel-concatenating the HDR image obtained in step S201 with the mapped reference image and inputting the result into the trained first CNN to obtain the initial feature map output by the first CNN; and normalizing the initial feature map to obtain the feature map used to remove the artifact through the dot product operation.
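The input pipeline just described (LDR-to-HDR mapping, channel concatenation, Sigmoid normalization of the initial feature map) can be sketched as follows; the gamma value and the function names are illustrative assumptions:

```python
import numpy as np

def prepare_cnn_input(reference_ldr, hdr_with_artifact, gamma=2.2):
    """Map the reference image from the LDR domain to the HDR domain
    (gamma correction is one option the description mentions), then
    channel-concatenate it with the artifact-containing HDR image.
    Pixel values are assumed normalized to [0, 1]; the gamma value is
    an illustrative assumption."""
    mapped = np.power(reference_ldr, gamma)  # LDR -> HDR domain
    return np.concatenate([hdr_with_artifact, mapped], axis=-1)

def normalize_initial_feature_map(initial_map):
    """Sigmoid normalization of the CNN's initial feature map, yielding
    the output feature map used in the dot product."""
    return 1.0 / (1.0 + np.exp(-initial_map))

# Two 3-channel inputs concatenate into one 6-channel network input.
network_input = prepare_cnn_input(
    np.full((2, 2, 3), 0.5), np.zeros((2, 2, 3))
)
normalized = normalize_initial_feature_map(np.zeros((2, 2, 3)))
```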
In step S204, for each channel of the feature map, performing a dot multiplication operation on the channel and a channel corresponding to the HDR image based on the dot multiplication operation, and combining channels of the modified HDR image obtained based on the dot multiplication operation to obtain a modified HDR image from which the HDR image artifact is removed;
based on the definition of the dot product operation in step S203, the modified HDR image with the artifact removed can be obtained, and will not be described herein again.
As those skilled in the art will readily understand from the specific definition of the dot product operation above, in the feature map output by the trained first CNN, the pixel values of areas without artifacts should be 1 and the pixel values of artifact areas should differ from 1; this is what corrects the artifact-containing HDR image. However, due to limitations in the number and quality of sample images, the feature map output by the trained first CNN cannot guarantee that the pixel values of all non-artifact regions are exactly 1, nor that the artifact regions are corrected very accurately, so the modified HDR image obtained in step S204 may not look good. Therefore, in order to further obtain an image the user desires, the modified HDR image may be corrected again after step S204 by another, separately trained second CNN to obtain a final HDR image, further improving the user experience and producing an HDR image with a good effect.
Compared with the first embodiment, the second embodiment provides a more specific way of eliminating the artifact. Like the first embodiment, it can eliminate the artifact in the HDR image to a certain extent when the obtained HDR image contains an artifact.
EXAMPLE III
The third embodiment of the application provides an image correction device. For convenience of explanation, only the portions related to the present application are shown, and as shown in fig. 4, the image correction apparatus 400 includes:
an image obtaining module 401, configured to obtain a high dynamic range HDR image containing an artifact, and a multi-frame low dynamic range LDR image used for synthesizing the HDR image;
a selecting module 402, configured to select a frame of image from the multiple frames of LDR images as a reference image;
a feature obtaining module 403, configured to input the HDR image and the reference image into a trained first convolutional neural network CNN, to obtain a feature map output by the first CNN, where the feature map is used to remove artifacts in the HDR image;
and an artifact removing module 404, configured to remove an artifact in the HDR image according to the feature map to obtain a modified HDR image.
Optionally, the number of channels of the feature map is the same as the number of channels of the HDR image, and the size of the feature map is the same as the size of the HDR image, and the feature map is specifically used for removing artifacts in the HDR image through a dot product operation;
accordingly, the artifact removal module 404 includes:
a dot product unit, configured to perform, for each channel of the feature map, a dot product operation between that channel and the corresponding channel of the HDR image, where the dot product is calculated as:

H^R(i,j) = C(i,j) * H(i,j),  i = 1…M, j = 1…N

where C(i,j) is the pixel value of the feature map at position (i,j) of the channel, H(i,j) is the pixel value of the HDR image at position (i,j) of the channel, H^R(i,j) is the pixel value of the modified HDR image at position (i,j) of the channel, and the size of the HDR image is M × N;
and a synthesizing unit configured to synthesize each channel of the modified HDR image obtained based on the dot product operation, and obtain the modified HDR image from which the HDR image artifact is removed.
Optionally, the feature obtaining module 403 includes:
a mapping unit, configured to map the reference image from an LDR domain to an HDR domain to obtain a mapped reference image;
a cascade unit, configured to perform channel cascade on the HDR image and the mapped reference image, and input the HDR image and the mapped reference image into the trained first CNN to obtain an initial feature map output by the first CNN;
and a normalization unit, configured to perform normalization processing on the initial feature map to obtain the feature map for removing artifacts through the dot product operation.
Optionally, the first CNN is a convolutional neural network based on an attention mechanism.
Optionally, the image obtaining module 401 includes:
an LDR unit, configured to acquire multiple frames of LDR images and determine, from the multiple frames of LDR images, whether the movement amplitude of the shooting subject is greater than a preset amplitude;
and an artifact HDR unit, configured to determine, if the amplitude is greater than the preset amplitude, the image obtained by image fusion of the multi-frame LDR images as the HDR image containing the artifact.
Optionally, the image correction apparatus 400 further includes:
a re-correction module, configured to input the modified HDR image into a trained second CNN, so that the second CNN corrects the modified HDR image again to obtain a final HDR image.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, since the first method embodiment and the second method embodiment are based on the same concept, specific functions and technical effects thereof may be specifically referred to a corresponding method embodiment part, and details are not described herein again.
Example four
Fig. 5 is a schematic diagram of a terminal device according to a fourth embodiment of the present application. As shown in fig. 5, the terminal device 500 of this embodiment includes: a processor 501, a memory 502 and a computer program 503 stored in the memory 502 and executable on the processor 501. The steps in the various method embodiments described above are implemented when the processor 501 executes the computer program 503 described above. Alternatively, the processor 501 implements the functions of the modules/units in the device embodiments when executing the computer program 503.
Illustratively, the computer program 503 may be divided into one or more modules/units, which are stored in the memory 502 and executed by the processor 501 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 503 in the terminal device 500. For example, the computer program 503 may be divided into an image acquisition module, a selection module, a feature acquisition module, and an artifact removal module, and each module has the following specific functions:
acquiring a High Dynamic Range (HDR) image containing an artifact and a multi-frame Low Dynamic Range (LDR) image for synthesizing the HDR image;
selecting a frame of image from the multi-frame LDR image as a reference image;
inputting the HDR image and the reference image into a trained first Convolutional Neural Network (CNN) to obtain a feature map output by the first CNN, wherein the feature map is used for removing artifacts in the HDR image;
and removing the artifact in the HDR image according to the characteristic diagram to obtain a modified HDR image.
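The last of these module functions reduces to a per-channel element-wise (dot product) multiplication of the feature map with the HDR image, followed by recombining the channels. A minimal NumPy sketch, with an illustrative function name and an H × W × C array layout that the embodiments do not prescribe:

```python
import numpy as np

def remove_artifacts(hdr, feature_map):
    """Per-channel dot product correction: each corrected pixel is the
    product of the feature-map value and the HDR pixel value at the
    same position and channel; the corrected channels are then stacked
    back into one image. The feature map must match the HDR image in
    both size and channel count."""
    if hdr.shape != feature_map.shape:
        raise ValueError("feature map and HDR image must have the same shape")
    corrected = [feature_map[..., c] * hdr[..., c]
                 for c in range(hdr.shape[-1])]
    return np.stack(corrected, axis=-1)
```

Because the feature map is normalized to [0, 1], the operation attenuates pixels the network flags as ghosted while leaving well-aligned regions nearly unchanged.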
The terminal device may include, but is not limited to, the processor 501 and the memory 502. Those skilled in the art will appreciate that fig. 5 is merely an example of the terminal device 500 and does not limit it: the terminal device 500 may include more or fewer components than those shown, may combine some components, or may use different components; for example, it may also include input and output devices, network access devices, buses, and the like.
The processor 501 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 502 may be an internal storage unit of the terminal device 500, such as a hard disk or memory of the terminal device 500. The memory 502 may also be an external storage device of the terminal device 500, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the terminal device 500. Further, the memory 502 may include both an internal storage unit and an external storage device of the terminal device 500. The memory 502 is used for storing the computer program and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of the functional units and modules is illustrated; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the functions described above. Each functional unit and module in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and do not limit the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments; they are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the above modules or units is only one logical function division, and there may be other division manners in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the above method embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and executed by a processor to implement the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An image correction method, comprising:
obtaining a High Dynamic Range (HDR) image containing artifacts, and a multi-frame Low Dynamic Range (LDR) image for synthesizing the HDR image;
selecting one frame of image from the multi-frame LDR image as a reference image;
inputting the HDR image and the reference image into a trained first Convolutional Neural Network (CNN) to obtain a feature map output by the first CNN, wherein the feature map is used for removing artifacts in the HDR image;
and removing the artifact in the HDR image according to the feature map to obtain a modified HDR image.
2. The image modification method of claim 1, wherein the number of channels of the feature map is the same as the number of channels of the HDR image, and the size of the feature map is the same as the size of the HDR image, the feature map being specifically configured to remove artifacts in the HDR image by a dot product operation;
correspondingly, the removing the artifact in the HDR image according to the feature map to obtain a modified HDR image includes:
for each channel of the feature map, performing a dot product operation between the channel and the corresponding channel of the HDR image, wherein the calculation formula of the dot product operation is as follows:
H^R(i,j) = C(i,j) × H(i,j), (i = 1…M; j = 1…N)
wherein C(i,j) is the pixel value of the channel of the feature map at position (i, j), H(i,j) is the pixel value of the corresponding channel of the HDR image at position (i, j), H^R(i,j) is the pixel value of the corresponding channel of the modified HDR image at position (i, j), and the size of the HDR image is M × N;
and synthesizing each channel of the corrected HDR image obtained based on the dot multiplication operation to obtain the corrected HDR image without HDR image artifacts.
3. The image modification method of claim 2, wherein the inputting the HDR image and the reference image into a trained first Convolutional Neural Network (CNN) to obtain a feature map output by the first CNN comprises:
mapping the reference image from an LDR domain to an HDR domain to obtain a mapped reference image;
performing channel cascade on the HDR image and the mapped reference image, and inputting the HDR image and the mapped reference image into the trained first CNN to obtain an initial feature map output by the first CNN;
and carrying out normalization processing on the initial characteristic diagram to obtain the characteristic diagram for removing the artifact through the dot multiplication operation.
4. The image correction method according to any one of claims 1 to 3, wherein the first CNN is a convolutional neural network based on an attention mechanism.
5. An image modification method as claimed in any one of claims 1 to 3, wherein said obtaining a High Dynamic Range (HDR) image containing artifacts, and a multi-frame Low Dynamic Range (LDR) image for synthesizing the HDR image comprises:
acquiring a plurality of frames of LDR images, and judging whether the moving amplitude of a shooting main body in the plurality of frames of LDR images is larger than a preset amplitude or not according to the plurality of frames of LDR images;
and if the amplitude is larger than the preset amplitude, determining an image obtained by image fusion of the multi-frame LDR image as the HDR image containing the artifact.
6. The image modifying method as claimed in any one of claims 1 to 3, wherein after said step of removing artifacts from said HDR image according to said feature map to obtain a modified HDR image, further comprising:
and inputting the corrected HDR image into a trained second CNN, so that the second CNN corrects the corrected HDR image again to obtain a final HDR image.
7. An image correction apparatus, characterized by comprising:
an image acquisition module for acquiring a High Dynamic Range (HDR) image containing artifacts and a multi-frame Low Dynamic Range (LDR) image for synthesizing the HDR image;
the selection module is used for selecting one frame of image from the multi-frame LDR image as a reference image;
the characteristic acquisition module is used for inputting the HDR image and the reference image into a trained first Convolutional Neural Network (CNN) to obtain a characteristic map output by the first CNN, wherein the characteristic map is used for removing artifacts in the HDR image;
and the artifact removing module is used for removing the artifact in the HDR image according to the feature map to obtain a modified HDR image.
8. The image modification apparatus of claim 7, wherein the number of channels of the feature map is the same as the number of channels of the HDR image, and the size of the feature map is the same as the size of the HDR image, the feature map being specifically configured to remove artifacts in the HDR image by a dot product operation;
accordingly, the artifact removal module comprises:
a point multiplication unit, configured to perform, for each channel of the feature map, a dot product operation between the channel and the corresponding channel of the HDR image, wherein the calculation formula of the dot product operation is:
H^R(i,j) = C(i,j) × H(i,j), (i = 1…M; j = 1…N)
wherein C(i,j) is the pixel value of the channel of the feature map at position (i, j), H(i,j) is the pixel value of the corresponding channel of the HDR image at position (i, j), H^R(i,j) is the pixel value of the corresponding channel of the modified HDR image at position (i, j), and the size of the HDR image is M × N;
and the synthesizing unit is used for synthesizing each channel of the corrected HDR image obtained based on the dot multiplication operation to obtain the corrected HDR image with the HDR image artifact removed.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the image correction method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the image correction method according to any one of claims 1 to 6.
CN202010013217.1A 2020-01-07 2020-01-07 Image correction method, correction device, terminal device and readable storage medium Pending CN111223061A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010013217.1A CN111223061A (en) 2020-01-07 2020-01-07 Image correction method, correction device, terminal device and readable storage medium

Publications (1)

Publication Number Publication Date
CN111223061A true CN111223061A (en) 2020-06-02

Family

ID=70828113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010013217.1A Pending CN111223061A (en) 2020-01-07 2020-01-07 Image correction method, correction device, terminal device and readable storage medium

Country Status (1)

Country Link
CN (1) CN111223061A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784598A (en) * 2020-06-18 2020-10-16 Oppo(重庆)智能科技有限公司 Method for training tone mapping model, tone mapping method and electronic equipment
CN111882498A (en) * 2020-07-10 2020-11-03 网易(杭州)网络有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111986106A (en) * 2020-07-30 2020-11-24 南京大学 High dynamic image reconstruction method based on neural network
CN112818732A (en) * 2020-08-11 2021-05-18 腾讯科技(深圳)有限公司 Image processing method and device, computer equipment and storage medium
CN113538304A (en) * 2020-12-14 2021-10-22 腾讯科技(深圳)有限公司 Training method and device of image enhancement model, and image enhancement method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120288217A1 (en) * 2010-01-27 2012-11-15 Jiefu Zhai High dynamic range (hdr) image synthesis with user input
CN107871332A (en) * 2017-11-09 2018-04-03 南京邮电大学 A kind of CT based on residual error study is sparse to rebuild artifact correction method and system
CN108416754A (en) * 2018-03-19 2018-08-17 浙江大学 A kind of more exposure image fusion methods automatically removing ghost
WO2019001701A1 (en) * 2017-06-28 2019-01-03 Huawei Technologies Co., Ltd. Image processing apparatus and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QINGSEN YAN et al.: "Attention-Guided Network for Ghost-Free High Dynamic Range Imaging" *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784598A (en) * 2020-06-18 2020-10-16 Oppo(重庆)智能科技有限公司 Method for training tone mapping model, tone mapping method and electronic equipment
CN111882498A (en) * 2020-07-10 2020-11-03 网易(杭州)网络有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111986106A (en) * 2020-07-30 2020-11-24 南京大学 High dynamic image reconstruction method based on neural network
CN111986106B (en) * 2020-07-30 2023-10-13 南京大学 High-dynamic image reconstruction method based on neural network
CN112818732A (en) * 2020-08-11 2021-05-18 腾讯科技(深圳)有限公司 Image processing method and device, computer equipment and storage medium
CN112818732B (en) * 2020-08-11 2023-12-12 腾讯科技(深圳)有限公司 Image processing method, device, computer equipment and storage medium
CN113538304A (en) * 2020-12-14 2021-10-22 腾讯科技(深圳)有限公司 Training method and device of image enhancement model, and image enhancement method and device
CN113538304B (en) * 2020-12-14 2023-08-18 腾讯科技(深圳)有限公司 Training method and device for image enhancement model, and image enhancement method and device

Similar Documents

Publication Publication Date Title
CN111223061A (en) Image correction method, correction device, terminal device and readable storage medium
CN109064428B (en) Image denoising processing method, terminal device and computer readable storage medium
CN110533607B (en) Image processing method and device based on deep learning and electronic equipment
CN108833784B (en) Self-adaptive composition method, mobile terminal and computer readable storage medium
CN109005368B (en) High dynamic range image generation method, mobile terminal and storage medium
US7660486B2 (en) Method and apparatus of removing opaque area as rescaling an image
CN109785246B (en) Noise reduction method, device and equipment for non-local mean filtering
CN101394460A (en) Image processing apparatus, image processing method, image processing program, and image capturing apparatus
CN108898549B (en) Picture processing method, picture processing device and terminal equipment
US11836898B2 (en) Method and apparatus for generating image, and electronic device
US20210248723A1 (en) Image brightness statistical method and imaging device
CN111145086A (en) Image processing method and device and electronic equipment
CN111899185A (en) Training method and device of image noise reduction model, electronic equipment and storage medium
CN114418873A (en) Dark light image noise reduction method and device
CN111754435B (en) Image processing method, device, terminal equipment and computer readable storage medium
CN111222446B (en) Face recognition method, face recognition device and mobile terminal
CN111340722A (en) Image processing method, processing device, terminal device and readable storage medium
CN111489289B (en) Image processing method, image processing device and terminal equipment
CN111754412A (en) Method and device for constructing data pairs and terminal equipment
CN111275622A (en) Image splicing method and device and terminal equipment
Fry et al. Validation of modulation transfer functions and noise power spectra from natural scenes
US20230060988A1 (en) Image processing device and method
CN108629219B (en) Method and device for identifying one-dimensional code
CN109308690B (en) Image brightness balancing method and terminal
CN115760653A (en) Image correction method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination