WO2019072222A1 - Image processing method and device and apparatus - Google Patents


Info

Publication number
WO2019072222A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
target image
camera
ghost
sensitivity
Prior art date
Application number
PCT/CN2018/109951
Other languages
French (fr)
Chinese (zh)
Inventor
王银廷
胡碧莹
张熙
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Priority claimed from CN201710959936.0A (patent CN109671106B)
Application filed by 华为技术有限公司
Priority to EP18866515.2A (patent EP3686845B1)
Publication of WO2019072222A1
Priority to US16/847,178 (patent US11445122B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration

Definitions

  • the present invention relates to the field of terminal technologies, and in particular, to an image processing method, apparatus, and device.
  • the present invention proposes a set of photographing methods for capturing motion scenes.
  • the embodiment of the invention provides an image processing method, apparatus, and device, which can provide a capture mechanism for a user and can capture high-definition images of motion scenes, thereby improving the user's photographing experience.
  • an embodiment of the present invention provides an image processing method, including: obtaining N frames of images; determining a reference image among the N frames of images, with the remaining N-1 frames as images to be processed; obtaining N-1 frame de-ghost images according to the N-1 frames to be processed; and performing a mean operation on the reference image and the N-1 frame de-ghost images to obtain a first target image; where obtaining the N-1 frame de-ghost images according to the N-1 frames to be processed includes performing steps 1-4 on the i-th frame image of the N-1 frames to be processed, with i taking all positive integers not greater than N-1:
  • Step 1: register the i-th frame image with the reference image to obtain an i-th registration image;
  • Step 2: obtain an i-th difference image according to the i-th registration image and the reference image;
  • Step 3: obtain an i-th ghost weight image according to the i-th difference image;
  • Step 4: fuse the i-th registration image with the reference image according to the i-th ghost weight image to obtain an i-th frame de-ghost image.
  • an embodiment of the present invention provides an image processing apparatus, where the apparatus includes: an obtaining module, configured to obtain N frames of images; a determining module, configured to determine a reference image among the N frames of images, with the remaining N-1 frames as images to be processed; a de-ghosting module, configured to perform the following steps 1-4 on the i-th frame image of the N-1 frames to be processed to obtain N-1 frame de-ghost images, where i takes all positive integers not greater than N-1 (Step 1: register the i-th frame image with the reference image to obtain an i-th registration image; Step 2: obtain an i-th difference image according to the i-th registration image and the reference image; Step 3: obtain an i-th ghost weight image according to the i-th difference image; Step 4: fuse the i-th registration image with the reference image according to the i-th ghost weight image to obtain an i-th frame de-ghost image); and a mean operation module, configured to perform a mean operation on the reference image and the N-1 frame de-ghost images to obtain a first target image.
  • the user can still capture a moving subject in a motion scene and obtain a high-definition picture.
  • before obtaining the N frames of images, the method further includes: when the following three situations are detected to exist simultaneously, generating a control signal, where the control signal is used to instruct acquisition of the N frames of images; Case 1: the framing image of the camera is detected as a moving image; Case 2: the current exposure duration of the camera is detected to exceed the safe duration; Case 3: the camera is detected to be in a very bright scene, that is, the current sensitivity of the camera is less than a first preset threshold and the current exposure duration is less than a second preset threshold.
  • at least one of the above three situations may be detected to generate a corresponding control signal.
  • the terminal may intelligently switch to the first capture mode mentioned below, and the control signal is generated to acquire the N frames of images in the first capture mode.
  • the method of detecting the above situation can be performed by the detection module.
  • obtaining the N frames of images includes: keeping the product of the current sensitivity of the camera and the exposure duration constant, decreasing the exposure duration and increasing the sensitivity according to a preset ratio to obtain a first exposure duration and a first sensitivity; and setting the exposure duration and sensitivity of the camera to the first exposure duration and the first sensitivity, respectively, and capturing N frames of images.
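  • As a minimal sketch of this exposure/ISO trade-off, assuming an illustrative Python helper (the ratio value and function name are not from the patent):

```python
def first_capture_params(current_iso: float, current_exposure_s: float, ratio: float = 4.0):
    """Keep ISO * exposure constant: shorten exposure by `ratio`, raise ISO by `ratio`.

    `ratio` is an assumed preset (e.g. 2 or 4); the patent only requires a preset ratio.
    """
    first_exposure = current_exposure_s / ratio
    first_iso = current_iso * ratio
    # The product (overall exposure) is unchanged, so brightness is preserved while
    # each individual frame is shorter and therefore less prone to motion blur.
    assert abs(first_iso * first_exposure - current_iso * current_exposure_s) < 1e-6
    return first_exposure, first_iso
```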
  • before obtaining the N frames of images, the method further includes: when the following three situations are detected to exist simultaneously, generating a control signal, where the control signal is used to instruct acquisition of the N frames of images; Case 1: the framing image of the camera is detected as a moving image; Case 2: the current exposure duration of the camera is detected to exceed the safe duration; Case 3: the camera is detected to be in a moderately bright scene, that is, the current sensitivity of the camera is within a first preset threshold interval and the current exposure duration is within a second preset threshold interval.
  • at least one of the above three situations may be detected to generate a corresponding control signal.
  • the terminal may intelligently switch to the second capture mode or the third capture mode mentioned below, and the control signal is generated to acquire the N frames of images in the second capture mode or the third capture mode.
  • the method of detecting the above situation can be performed by the detection module.
  • obtaining the N frames of images includes: keeping the product of the current sensitivity of the camera and the exposure duration constant, decreasing the exposure duration and increasing the sensitivity according to the preset ratio to obtain a second exposure duration and a second sensitivity, setting the exposure duration and sensitivity of the camera to the second exposure duration and the second sensitivity, respectively, and capturing N frames of images; the method further includes: capturing one frame of a first new image at the current sensitivity and current exposure duration of the camera; and obtaining a second target image according to the first target image and the first new image.
  • the obtaining the second target image according to the first target image and the first new image includes: registering the first new image with the reference image (or the first target image) to obtain a first (second) registration image; obtaining a first (second) difference image according to the first (second) registration image and the first target image; obtaining a first (second) ghost weight image according to the first (second) difference image; fusing the first (second) registration image with the first target image according to the first (second) ghost weight image to obtain a first (second) de-ghost image; and performing weighted fusion of pixel values on the first (second) de-ghost image and the first target image to obtain the second target image.
  • obtaining the N frames of images includes: keeping the current sensitivity of the camera unchanged, setting the current exposure duration to a lower third exposure duration, and capturing N frames of images; the method further includes: capturing one frame of a second new image at the current sensitivity and current exposure duration of the camera; and obtaining a third target image according to the first target image and the second new image.
  • obtaining the third target image according to the first target image and the second new image includes: processing the first target image according to the second new image and a preset brightness correction algorithm to obtain a fourth target image; registering the second new image with the reference image (or the fourth target image) to obtain a third (fourth) registration image; obtaining a third (fourth) difference image according to the third (fourth) registration image and the fourth target image; obtaining a third (fourth) ghost weight image according to the third (fourth) difference image; fusing the third (fourth) registration image with the fourth target image according to the third (fourth) ghost weight image to obtain a third (fourth) de-ghost image; performing weighted fusion of pixel values on the third (fourth) de-ghost image and the fourth target image to obtain a fifth (sixth) target image; and performing pyramid fusion processing on the fifth (sixth) target image and the first target image to obtain the third target image.
  • the above-mentioned possible technical implementations may be executed by the processor by invoking programs and instructions in the memory.
  • the user may directly enter a capture mode of his or her own choice, such as the first capture mode, the second capture mode, or the third capture mode mentioned above;
  • in this case the terminal does not need to detect the framing environment, because each capture mode has a preset parameter rule (pre-stored locally in the terminal or in a cloud server); that is, each capture mode has a corresponding sensitivity and exposure duration, and may of course also include other performance parameters; once a particular capture mode is entered, the camera automatically adjusts to the corresponding sensitivity and corresponding exposure duration for shooting. Therefore, if the user directly adopts a capture mode, N pictures are taken with the corresponding sensitivity and corresponding exposure duration, and subsequent image processing is performed in the corresponding mode.
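  • A minimal sketch of such a preset parameter rule, as a purely illustrative table (the mode names and scale values below are assumptions, not values taken from the patent):

```python
# Hypothetical per-mode presets: each capture mode maps to an ISO scale and an
# exposure scale applied to the camera's current settings. Values are illustrative only.
CAPTURE_MODE_PRESETS = {
    "first_capture_mode":  {"iso_scale": 4.0, "exposure_scale": 0.25},
    "second_capture_mode": {"iso_scale": 2.0, "exposure_scale": 0.5},
    "third_capture_mode":  {"iso_scale": 1.0, "exposure_scale": 0.5},  # ISO unchanged, shorter exposure
}

def apply_capture_mode(mode: str, current_iso: float, current_exposure_s: float):
    """Return the (sensitivity, exposure duration) the camera would switch to for a chosen mode."""
    preset = CAPTURE_MODE_PRESETS[mode]
    return current_iso * preset["iso_scale"], current_exposure_s * preset["exposure_scale"]
```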
  • the action of photographing can be triggered by the user pressing the shutter button.
  • in one case, before the user presses the shutter, the current sensitivity and current exposure duration of the camera have already been adjusted to the first sensitivity and the first exposure duration.
  • when the user presses the shutter, N photos are taken at the first exposure duration and the first sensitivity for subsequent processing; in another case, the camera still maintains the current sensitivity and current exposure duration before the user presses the shutter, and when the user presses the shutter, the current sensitivity and current exposure duration are adjusted to the first sensitivity and the first exposure duration, and N photos are taken at the first exposure duration and the first sensitivity for subsequent processing.
  • in the preview image data stream, the display image may be shown in the state of the current sensitivity and current exposure duration, or in the state of the first exposure duration and the first sensitivity.
  • the action of photographing can be triggered by the user pressing the shutter button.
  • in one case, one frame of the first new image is first obtained at the current sensitivity and current exposure duration, and the current sensitivity and current exposure duration are then set to the second exposure duration and the second sensitivity, under which N pictures are taken; a total of N+1 pictures is obtained for subsequent processing.
  • in another case, the current sensitivity and current exposure duration are first set to the second exposure duration and the second sensitivity, under which N pictures are taken, and the settings are then restored to the current sensitivity and current exposure duration, under which one frame of the first new image is obtained; a total of N+1 pictures is obtained for subsequent processing. Further, in the preview image data stream, the display image may be shown in the state of the current sensitivity and current exposure duration, or in the state of the second exposure duration and the second sensitivity.
  • the action of photographing can be triggered by the user pressing the shutter button.
  • in one case, one frame of the second new image is first obtained at the current sensitivity and current exposure duration; the current sensitivity of the camera is then kept unchanged and the current exposure duration is set to a lower third exposure duration, under which N pictures are taken; a total of N+1 pictures is obtained for subsequent processing.
  • in another case, the current exposure duration is first set to the lower third exposure duration, under which N pictures are taken, and the exposure duration is then restored to the current exposure duration and, at the current sensitivity, one frame of the second new image is obtained; a total of N+1 pictures is obtained for subsequent processing.
  • in the preview image data stream, the display image may be shown in the state of the current sensitivity and current exposure duration, or in the state of the third exposure duration and the current sensitivity.
  • an embodiment of the present invention provides a terminal device, where the terminal device includes a memory, a processor, a bus, and a camera, and the memory, the camera, and the processor are connected by the bus; the camera is configured to acquire image signals under control of the processor; the memory is configured to store a computer program and instructions; and the processor is configured to invoke the computer program and instructions stored in the memory, to cause the terminal device to perform any of the foregoing possible design methods.
  • the terminal device further includes an antenna system, and the antenna system transmits and receives wireless communication signals under the control of the processor to implement wireless communication with a mobile communication network;
  • the mobile communication network includes one or more of the following: GSM, CDMA, 3G, 4G, FDMA, TDMA, PDC, TACS, AMPS, WCDMA, TD-SCDMA, WiFi, and LTE networks.
  • the above method, apparatus, and device can be applied to scenes in which the camera software provided by the terminal is used for shooting, or to scenes in which third-party camera software on the terminal is used for shooting; the shooting includes normal shooting, self-portrait, video telephony, video conferencing, VR shooting, aerial photography, and other shooting modes.
  • the terminal in the embodiment of the present invention may include multiple camera modes, such as a simple capture mode, or a camera mode that decides whether to capture only after the scene conditions are detected; when the terminal is in a capture mode, even for motion scenes or scenes in which it is otherwise difficult to take a clear photo, this solution can take high-definition photos, greatly improving the user's photographing experience.
  • FIG. 1 is a schematic structural diagram of a terminal;
  • FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of a method for de-ghosting an image according to an embodiment of the present invention
  • FIG. 4 is a flowchart of a capture system according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of another image processing method according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of another image processing method according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • the terminal may be a device that provides photographing and/or data connectivity to the user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem, such as a digital camera, an SLR camera, or a mobile phone (or "cellular" phone); it may also be a portable, pocket-sized, handheld, or wearable device (such as a smart watch), a tablet, a personal computer (PC), a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, an on-board computer, a drone, an aerial camera, or the like.
  • FIG. 1 shows an alternative hardware structure diagram of the terminal 100.
  • the terminal 100 may include a radio frequency unit 110, a memory 120, an input unit 130, a display unit 140, a camera 150, an audio circuit 160, a speaker 161, a microphone 162, a processor 170, an external interface 180, a power supply 190, and the like.
  • there are at least two cameras 150.
  • the camera 150 is used for capturing images or videos, and can be triggered by an application instruction to realize a photographing or photographing function.
  • the camera may include an imaging lens, a filter, an image sensor, a focus anti-shake motor, and the like.
  • the light emitted or reflected by the object enters the imaging lens, passes through the filter, and finally converges on the image sensor.
  • the imaging lens is mainly used for collecting and converging the light emitted or reflected by all objects within the photographing angle of view (also referred to as the objects to be photographed); the filter is mainly used to remove unneeded light waves from the light (for example, light waves other than visible light);
  • the image sensor is mainly used to photoelectrically convert the received optical signal into an electrical signal and input it to the processor 170 for subsequent processing.
  • it can be understood that FIG. 1 is merely an example of a portable multifunction device and does not constitute a limitation on the portable multifunction device; the device may include more or fewer components than those illustrated, or combine certain components, or use different components.
  • the input unit 130 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the portable multifunction device.
  • the input unit 130 may include a touch screen 131 and other input devices 132.
  • the touch screen 131 can collect touch operations by the user on or near it (such as operations performed by the user on or near the touch screen with a finger, a knuckle, a stylus, or any other suitable object) and drive the corresponding connection apparatus according to a preset program.
  • the touch screen can detect a user's touch action on the touch screen, convert the touch action into a touch signal and send the signal to the processor 170, and can receive and execute a command sent by the processor 170; the touch signal includes at least a touch Point coordinate information.
  • the touch screen 131 can provide an input interface and an output interface between the terminal 100 and a user.
  • touch screens can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 130 may also include other input devices.
  • other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control button 132, switch button 133, etc.), trackball, mouse, joystick, and the like.
  • the display unit 140 can be used to display information input by a user or information provided to a user and various menus of the terminal 100.
  • the display unit is further configured to display an image acquired by the device using the camera 150, including a preview image, an initial image captured, and a target image processed by a certain algorithm after the shooting.
  • the touch screen 131 may cover the display panel 141.
  • when the touch screen 131 detects a touch operation on or near it, it transmits the operation to the processor 170 to determine the type of the touch event, and the processor 170 then provides a corresponding visual output on the display panel 141 according to the type of the touch event.
  • the touch screen and the display unit can be integrated into one component to implement the input, output, and display functions of the terminal 100.
  • the touch display screen represents the combined function set of the touch screen and the display unit; in some embodiments, the touch screen and the display unit may also serve as two separate components.
  • the memory 120 can be used to store instructions and data; the memory 120 may mainly include an instruction storage area and a data storage area, where the data storage area can store the association between a joint touch gesture and an application function, and the instruction storage area can store software units such as an operating system, applications, and the instructions required for at least one function, or subsets and extended sets thereof.
  • a non-volatile random access memory may also be included, providing the processor 170 with the hardware, software, and data resources needed to manage the computing device and supporting the control software and applications; the memory is also used for storing multimedia files and for storing running programs and applications.
  • the processor 170 is the control center of the terminal 100; it connects the various parts of the entire mobile phone by various interfaces and lines, and executes the various functions of the terminal 100 and processes data by running or executing the instructions stored in the memory 120 and calling the data stored in the memory 120, so as to monitor the mobile phone as a whole.
  • the processor 170 may include one or more processing units; preferably, the processor 170 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 170.
  • the processors, memories can be implemented on a single chip, and in some embodiments, they can also be implemented separately on separate chips.
  • the processor 170 can also be configured to generate corresponding operational control signals, send to corresponding components of the computing processing device, read and process data in the software, and in particular read and process the data and programs in the memory 120 to enable Each function module performs the corresponding function, thereby controlling the corresponding component to act according to the requirements of the instruction.
  • the radio frequency unit 110 can be used for receiving and transmitting signals during the transmission and reception of information or during a call; specifically, downlink information received from a base station is handed to the processor 170 for processing, and uplink data is sent to the base station.
  • RF circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
  • the radio unit 110 can also communicate with network devices and other devices through wireless communication.
  • the wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (Code). Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), E-mail, Short Messaging Service (SMS), etc.
  • the audio circuit 160, the speaker 161, and the microphone 162 can provide an audio interface between the user and the terminal 100.
  • the audio circuit 160 can convert received audio data into an electrical signal and transmit it to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 collects sound signals and converts them into electrical signals, which the audio circuit 160 receives and converts into audio data; the audio data is then processed by the processor 170 and, for example, transmitted to another terminal via the radio frequency unit 110, or output to the memory 120 for further processing.
  • the audio circuit can also include a headphone jack 163 for providing a connection interface between the audio circuit and the earphone.
  • the terminal 100 also includes a power source 190 (such as a battery) for powering various components.
  • the power source can be logically coupled to the processor 170 through a power management system to manage functions such as charging, discharging, and power management through the power management system.
  • the terminal 100 further includes an external interface 180, which may be a standard Micro USB interface, or a multi-pin connector, which may be used to connect the terminal 100 to communicate with other devices, or may be used to connect the charger to the terminal 100. Charging.
  • the terminal 100 may further include a flash, a wireless fidelity (WiFi) module, a Bluetooth module, various sensors, and the like, and details are not described herein. All of the methods described below can be applied to the terminal shown in FIG. 1.
  • an embodiment of the present invention provides an image processing method.
  • the specific processing method includes the following steps:
  • Step 31: obtain N frames of images, where N is a positive integer greater than 2;
  • Step 32: determine a reference image among the N frames of images, with the remaining N-1 frames as images to be processed; for example, if N is 20, the first frame image may be the reference image and the remaining 19 frames are the images to be processed, and i in step 33 may then be any one of 1-19;
  • Step 33: obtain N-1 frame de-ghost images according to the N-1 frames to be processed; specifically, steps s331-s334 may be performed on the i-th frame of the N-1 frames, where i may take all positive integers not greater than N-1; in some embodiments, only M frames of the images to be processed may be taken to obtain M frame de-ghost images, where M is a positive integer smaller than N-1; see FIG. 3;
  • S331: register the i-th frame image with the reference image to obtain an i-th registration image;
  • Step 34: obtain a first target image according to the reference image and the N-1 frame de-ghost images; specifically, a mean operation is performed on the reference image and the N-1 frame de-ghost images to obtain the first target image; the mean operation can also include some corrections to the average, an average of absolute values, and the like.
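  • A minimal sketch of this overall pipeline (register and de-ghost each non-reference frame, then average), assuming a deghost_frame helper that implements steps s331-s334 described below:

```python
import numpy as np

def multi_frame_denoise(frames, deghost_frame):
    """frames: list of N images (H, W[, C]) with frames[0] as the reference image.

    `deghost_frame(frame, reference)` is assumed to implement steps s331-s334 and to
    return a de-ghost image aligned to the reference.
    """
    reference = frames[0].astype(np.float32)
    deghosted = [deghost_frame(f.astype(np.float32), reference) for f in frames[1:]]
    # Step 34: mean operation over the reference image and the N-1 de-ghost images.
    first_target = np.mean([reference] + deghosted, axis=0)
    return first_target
```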
  • an optional implementation of step 31 is to receive a shooting instruction under the current parameter settings and continuously take N pictures; this can be used in the first capture mode, the second capture mode, and the third capture mode.
  • another optional implementation of step 31 is that the user directly enters a capture mode of his or her own choice, such as the first capture mode, the second capture mode, or the third capture mode mentioned below; in this case, the terminal does not need to detect the framing environment, because each capture mode has a preset parameter rule (pre-stored locally in the terminal or in a cloud server), that is, each capture mode has a corresponding sensitivity and exposure duration and may also include other performance parameters; once a particular capture mode is entered, the camera automatically adjusts to the corresponding sensitivity and corresponding exposure duration for shooting. Therefore, if the user directly adopts a capture mode, N pictures are taken with the sensitivity and exposure duration corresponding to that capture mode, and subsequent image processing is performed in the corresponding mode.
  • if the camera of the terminal is in automatic mode or smart mode, the camera needs to detect the framing environment: if the framing image of the camera is detected as a moving image, the current exposure duration of the camera is detected to exceed the safe duration, and the framing environment is detected to be extremely bright, the first capture mode proposed in the present invention is adopted; if the framing image of the camera is detected as a moving image, the current exposure duration of the camera is detected to exceed the safe duration, and the framing environment is detected to be of medium-high brightness, the second capture mode or the third capture mode proposed in the present invention is adopted; if none of the above scenarios is detected, any camera mode supported by the camera can be used for shooting. A specific photographing process can be seen in FIG. 4.
  • the "camera” in this document generally refers to a system capable of performing a photographing function in a terminal device, including a camera, and a necessary processing module and a storage module to complete image acquisition and transmission, and may also include some processing function modules.
  • the “current exposure duration” and “current sensitivity” respectively refer to the exposure duration and sensitivity corresponding to the preview of the data stream of the framing image under initial conditions. Usually related to the camera's own properties and initial settings. In a possible design, if the terminal does not detect the camera's framing environment, or detects the framing environment but does not detect any of the following three situations, the camera previews the framing image data stream corresponding to The exposure duration and sensitivity are also "current exposure duration" and "current sensitivity”.
  • Case 1: the framing image of the camera is a moving image;
  • Case 2: the current exposure duration of the camera is detected to exceed the safe duration;
  • Case 3: the framing environment is detected to be a very bright environment or a moderately bright environment.
  • whether the framing image is a moving image can be determined as follows:
  • motion detection is performed on the preview data stream (the photo preview stream), with a detection every x frames (the number of interval frames x is adjustable, x a positive integer);
  • at each detection, the difference between the currently detected frame image and the previously detected frame image is compared.
  • the two images may be divided into several regions in the same way, for example 64 regions per image, and if there is a large difference in one or more regions, the scene is regarded as a motion scene.
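  • A minimal sketch of this block-based motion check; the grid size, difference metric, and threshold below are illustrative assumptions:

```python
import numpy as np

def is_motion_scene(prev_frame, curr_frame, grid=8, diff_thresh=12.0):
    """Compare two preview frames (grayscale arrays of equal size) region by region.

    Each frame is split into grid x grid regions (64 regions for grid=8); if the mean
    absolute difference of any region exceeds `diff_thresh`, treat it as a motion scene.
    The threshold and the per-region metric are assumptions for illustration.
    """
    prev = prev_frame.astype(np.float32)
    curr = curr_frame.astype(np.float32)
    h, w = prev.shape[:2]
    for by in range(grid):
        for bx in range(grid):
            ys, ye = by * h // grid, (by + 1) * h // grid
            xs, xe = bx * w // grid, (bx + 1) * w // grid
            if np.mean(np.abs(curr[ys:ye, xs:xe] - prev[ys:ye, xs:xe])) > diff_thresh:
                return True
    return False
```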
  • the current exposure duration and the safe shutter duration can be obtained from the camera parameters.
  • the safe shutter duration is an attribute of the terminal camera.
  • if the current exposure duration is greater than the safe shutter duration, a capture mode is considered.
  • the very bright (highlight) scene is defined by ISO < iso_th1 and expo < expo_th1, where iso_th1 and expo_th1 can be determined according to the specific needs of the user; the medium-brightness scene is defined by iso_th1 ≤ ISO ≤ iso_th2 and expo_th1 ≤ expo ≤ expo_th2, where iso_th2 and expo_th2 can likewise be determined according to the user's specific needs; the low-light scene is defined by iso_th2 ≤ ISO and expo_th2 ≤ expo; it should be understood that the division of these intervals is determined by the user's needs, and discontinuities or overlaps between these value intervals are allowed.
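  • A minimal sketch of this brightness classification; the threshold values are placeholders, since the patent leaves them to be configured according to the user's needs:

```python
def classify_scene(iso: float, exposure_s: float,
                   iso_th1: float = 200.0, iso_th2: float = 800.0,
                   expo_th1: float = 1 / 100, expo_th2: float = 1 / 25) -> str:
    """Classify the framing environment from the current sensitivity and exposure duration.

    Threshold values here are placeholders; the patent only requires that such thresholds
    exist, and allows gaps or overlaps between the intervals.
    """
    if iso < iso_th1 and exposure_s < expo_th1:
        return "very_bright"      # first capture mode is considered
    if iso_th1 <= iso <= iso_th2 and expo_th1 <= exposure_s <= expo_th2:
        return "medium_bright"    # second or third capture mode is considered
    if iso >= iso_th2 and exposure_s >= expo_th2:
        return "low_light"
    return "other"
```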
  • the first capture mode, the second capture mode, and the third capture mode are described in detail below.
  • Step 31 is specifically: obtain parameters such as the current sensitivity and current exposure duration of the camera; keeping the product of the current sensitivity and exposure duration constant, decrease the exposure duration according to a preset ratio and increase the sensitivity to obtain a first exposure duration and a first sensitivity (for example, the first exposure duration is 1/2 or 1/4 of the original exposure duration and the first sensitivity is correspondingly 2 or 4 times the original sensitivity; the specific ratio may be set according to the user's needs or an adjustment rule); set the exposure duration and sensitivity of the camera to the first exposure duration and the first sensitivity, respectively, and take N frames of images. The following steps perform noise reduction on these N frames.
  • the action of taking a picture can be triggered by the user pressing the shutter button.
  • in one case, before the user presses the shutter, the current sensitivity and current exposure duration of the camera have already been adjusted to the first sensitivity and the first exposure duration.
  • when the user presses the shutter, N photos are taken at the first exposure duration and the first sensitivity for subsequent processing; in another case, the camera still maintains the current sensitivity and current exposure duration before the user presses the shutter, and when the user presses the shutter, the current sensitivity and current exposure duration are adjusted to the first sensitivity and the first exposure duration, and N photos are taken at the first exposure duration and the first sensitivity for subsequent processing.
  • in the preview image data stream, the display image may be shown in the state of the current sensitivity and current exposure duration, or in the state of the first exposure duration and the first sensitivity.
  • Step 32 is specifically: determining one reference image in the N frame image, and the remaining N-1 frame images are to be processed images.
  • the first frame image or the middle frame image of the N frame images is taken as a reference image.
  • the subsequent steps are described by taking the first frame image as an example.
  • Step 33 is specifically: obtain N-1 frame de-ghost images according to the N-1 frames to be processed. This step can be subdivided into several sub-steps: steps s331-s334 may be performed on the i-th frame of the remaining N-1 frames, where i may take all positive integers not greater than N-1; in a specific implementation, only some of the frames may be taken to obtain the corresponding de-ghost images; for convenience of explanation, in the present embodiment the de-ghost images are obtained from all of the N-1 frames.
  • S331 is specifically: registering the ith frame image with the reference image to obtain an i-th registration image.
  • the specific registration method may be: (1) perform feature extraction on the i-th frame image and the reference image in the same way to obtain a series of feature points, and describe each feature point; (2) match the feature points of the i-th frame image with those of the reference image to obtain a series of feature point pairs, and use the RANSAC algorithm (prior art) to reject bad point pairs; (3) solve over the matched feature point pairs to obtain the transformation matrix between the two images (a homography matrix, an affine matrix, or the like); (4) align the i-th frame image with the reference image according to the transformation matrix to obtain the i-th registration image.
  • mature open-source algorithms can be called for this step, so it is not expanded in detail here.
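  • A minimal OpenCV sketch of such feature-based registration; ORB features and a RANSAC-estimated homography are one common choice, not one prescribed by the patent:

```python
import cv2
import numpy as np

def register_to_reference(frame_i, reference):
    """Warp frame_i onto the reference image using matched features and a RANSAC homography."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame_i, None)
    kp2, des2 = orb.detectAndCompute(reference, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects bad point pairs while estimating the homography matrix.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(frame_i, H, (w, h))
```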
  • S332 is specifically: obtain an i-th difference image according to the i-th registration image and the reference image; specifically, the i-th registration image and the reference image are differenced pixel by pixel, and the difference image between the two is obtained from the absolute value of each difference.
  • S333 is specifically: obtain an i-th ghost weight image according to the i-th difference image; specifically, pixels in the difference image that exceed a preset threshold are set to M (e.g., 255), pixels that do not exceed the preset threshold are set to N (e.g., 0), and Gaussian smoothing is applied to the re-assigned difference image to obtain the i-th ghost weight image.
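  • A minimal sketch of steps S332-S333; the threshold, the Gaussian kernel size, and the final normalization to [0, 1] are illustrative assumptions:

```python
import cv2
import numpy as np

def ghost_weight(registered_i, reference, thresh=10, high=255, low=0, ksize=11):
    """Difference image + re-assignment + Gaussian smoothing -> ghost weight image in [0, 1]."""
    diff = cv2.absdiff(registered_i, reference)            # S332: per-pixel absolute difference
    if diff.ndim == 3:
        diff = diff.max(axis=2)                             # treat a change in any channel as a difference
    mask = np.where(diff > thresh, high, low).astype(np.float32)   # S333: set to M / N
    weight = cv2.GaussianBlur(mask, (ksize, ksize), 0)      # smooth the re-assigned difference map
    return weight / float(high)                             # normalize so that 1 marks a likely ghost region
```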
  • S334 is specifically: merging the i-th registration image with the reference image according to the i-th ghost weight image to obtain the i-th frame de-ghost image.
  • specifically, the i-th registration image (image_i in the following formula) and the reference image (image_1 in the following formula) are fused pixel by pixel according to the i-th ghost weight image to obtain the i-th frame de-ghost image (no_ghost_mask).
  • the fusion formula is as follows, where m, n represent pixel coordinates:
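  • The formula itself is not reproduced in this text; a plausible per-pixel blend consistent with the surrounding description, assuming the i-th ghost weight w_i(m, n) has been normalized to [0, 1], is:

\mathrm{no\_ghost\_mask}_i(m,n) = w_i(m,n)\cdot \mathrm{image\_1}(m,n) + \bigl(1 - w_i(m,n)\bigr)\cdot \mathrm{image\_i}(m,n)

  • where the weight is high (a likely ghost region) the reference image dominates, and where it is low the registration image contributes; the exact weighting in the patent may differ.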
  • Step 34 is specifically: performing a mean operation on the reference image and the N-1 frame de-ghost image to obtain a first target image.
  • the first target image is the final image obtained when the terminal executes the photographing mode.
  • the second capture mode is more complicated than the first capture mode, and some of its steps are the same as those of the first capture mode. A flowchart of the second capture mode can be seen in FIG. 5.
  • Step 41: take one frame of a first new image at the current sensitivity and current exposure duration of the camera; decrease the exposure duration according to the preset ratio and increase the sensitivity to obtain a second exposure duration and a second sensitivity; set the exposure duration and sensitivity of the camera to the second exposure duration and the second sensitivity, respectively, and take N frames of images;
  • the action of taking a picture can be triggered by the user pressing the shutter button.
  • in one case, one frame of the first new image is first obtained at the current sensitivity and current exposure duration, and the current sensitivity and current exposure duration are then set to the second exposure duration and the second sensitivity, under which N pictures are taken; a total of N+1 pictures is obtained for subsequent processing.
  • in another case, the current sensitivity and current exposure duration are first set to the second exposure duration and the second sensitivity, under which N pictures are taken, and the settings are then restored to the current sensitivity and current exposure duration, under which one frame of the first new image is obtained; a total of N+1 pictures is obtained for subsequent processing. Further, in the preview image data stream, the display image may be shown in the state of the current sensitivity and current exposure duration, or in the state of the second exposure duration and the second sensitivity.
  • Step 42: apply the first capture mode scheme (steps 31-34) to the N frames of images obtained in the previous step to obtain the first target image; it should be understood that the second sensitivity, the second exposure duration, and some of the adjustable thresholds mentioned above may change correspondingly with changes in the scene;
  • Step 43: obtain a second target image according to the first target image and the first new image.
  • it may include but is not limited to the following two implementation modes:
  • S4311: register the first new image with the reference image to obtain a first registration image; S4312: obtain a first difference image according to the first registration image and the first target image; S4313: obtain a first ghost weight image according to the first difference image; S4314: fuse the first registration image with the first target image according to the first ghost weight image to obtain a first de-ghost image;
  • S4315: perform weighted fusion of pixel values on the first de-ghost image and the first target image to obtain a second target image; specifically, this includes four implementations: time-domain fusion s4315(1), time-domain fusion s4315(3), frequency-domain fusion s4315(2), and frequency-domain fusion s4315(4).
  • Time-domain fusion s4315(1): apply guided filtering to the first target image and the first de-ghost image respectively to filter out their detail information (an existing mature algorithm), and denote the filtered results as fusion_gf and noghost_gf.
  • fusion_gf and noghost_gf are fused with pixel-wise weights.
  • the specific fusion formula is as follows:
  • v is the noise constant corresponding to the current ISO gear position and is a constant value;
  • W is a weight value;
  • its value range is [0, 1).
  • the fused image is then added back, pixel by pixel, to the target details; for any pixel, the target detail is the larger of the details filtered out from the first target image and from the first de-ghost image during the guided filtering; adding the detail back yields the second target image.
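  • A minimal sketch of this time-domain fusion pipeline; a Gaussian blur stands in for the guided filter, and the difference-versus-noise weight formula is an assumption standing in for the patent's formula, which is not reproduced in this text:

```python
import cv2
import numpy as np

def time_domain_fuse(first_target, first_deghost, v=4.0, ksize=9):
    """Sketch of s4315(1): fuse the first de-ghost image with the first target image."""
    fusion = first_target.astype(np.float32)
    noghost = first_deghost.astype(np.float32)
    fusion_gf = cv2.GaussianBlur(fusion, (ksize, ksize), 0)    # base layer of the first target image
    noghost_gf = cv2.GaussianBlur(noghost, (ksize, ksize), 0)  # base layer of the first de-ghost image
    diff = np.abs(fusion_gf - noghost_gf)
    w = diff / (diff + v)                  # assumed weight W in [0, 1); v is the ISO-dependent noise constant
    fused = w * fusion_gf + (1.0 - w) * noghost_gf
    # Add back, pixel by pixel, the larger of the two detail layers removed by the filtering.
    detail = np.maximum(fusion - fusion_gf, noghost - noghost_gf)
    return fused + detail
```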
  • Time-domain fusion s4315(3): downsample the first target image (denoted fusion) and the first de-ghost image (denoted noghost) by a factor of 2 in both width and height; the downsampled first target image and first de-ghost image are denoted fusionx4 and noghostx4.
  • fusionx4 and noghostx4 are then upsampled by a factor of 2 in width and height, giving two images of the same size as before the downsampling, denoted fusion' and noghost'.
  • the pixel-wise difference between fusion and fusion' gives the sampling error map of the first target image, denoted fusion_se; the pixel-wise difference between noghost and noghost' gives the sampling error map of the first de-ghost image, denoted noghost_se.
  • guided filtering (an existing mature algorithm) is applied to fusionx4 and noghostx4, and the results are denoted fusion_gf and noghost_gf.
  • fusion_gf and noghost_gf are fused with pixel-wise weights to obtain a fused image, denoted Fusion; the fusion formula is the same as in s4315(1).
  • the fused image is added back, pixel by pixel, to the details filtered out of the first target image during the guided filtering; the result is then upsampled by a factor of 2 in width and height and denoted FusionUp.
  • the larger of the two sampling error maps fusion_se and noghost_se is selected point by point and added to FusionUp point by point to restore image detail and obtain the second target image.
  • Frequency-domain fusion s4315(2): apply guided filtering to the first target image and the first de-ghost image respectively (an existing mature algorithm); apply a Fourier transform to each of the two filtered images and obtain the corresponding amplitudes; use the amplitude ratio as the weight to fuse the Fourier spectra of the two images.
  • the specific fusion formula is similar to that of the time-domain fusion.
  • the fused spectrum is inverse-transformed to obtain a fused image, which is added back, pixel by pixel, to the target details; for any pixel, the target detail is the larger of the details filtered out from the first target image and from the first de-ghost image during the guided filtering; adding the detail back yields the second target image.
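  • A minimal sketch of this amplitude-ratio spectral fusion, again with a Gaussian blur standing in for the guided filter and the exact weighting treated as an assumption:

```python
import cv2
import numpy as np

def frequency_domain_fuse(first_target, first_deghost, ksize=9, eps=1e-6):
    """Sketch of s4315(2): fuse two images in the Fourier domain with amplitude-ratio weights."""
    fusion = first_target.astype(np.float32)
    noghost = first_deghost.astype(np.float32)
    fusion_gf = cv2.GaussianBlur(fusion, (ksize, ksize), 0)
    noghost_gf = cv2.GaussianBlur(noghost, (ksize, ksize), 0)
    F1 = np.fft.fft2(fusion_gf, axes=(0, 1))
    F2 = np.fft.fft2(noghost_gf, axes=(0, 1))
    A1, A2 = np.abs(F1), np.abs(F2)
    w = A1 / (A1 + A2 + eps)                        # amplitude ratio as the per-frequency weight
    fused = np.real(np.fft.ifft2(w * F1 + (1.0 - w) * F2, axes=(0, 1)))
    # Add back, pixel by pixel, the larger of the two detail layers removed by the filtering.
    detail = np.maximum(fusion - fusion_gf, noghost - noghost_gf)
    return fused + detail
```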
  • Frequency-domain fusion s4315(4): downsample the first target image (denoted fusion) and the first de-ghost image (denoted noghost) by a factor of 2 in both width and height; the downsampled first target image and first de-ghost image are denoted fusionx4 and noghostx4.
  • fusionx4 and noghostx4 are then upsampled by a factor of 2 in width and height, giving two images of the same size as before the downsampling, denoted fusion' and noghost'.
  • the pixel-wise difference between fusion and fusion' gives the sampling error map of the first target image, denoted fusion_se; the pixel-wise difference between noghost and noghost' gives the sampling error map of the first de-ghost image, denoted noghost_se.
  • guided filtering is applied to fusionx4 and noghostx4, and the results are denoted fusion_gf and noghost_gf.
  • Fourier transform is performed on the two filtered images respectively, and the corresponding amplitude is obtained.
  • the amplitude ratio is used as the weight to fuse the Fourier spectrum corresponding to the two images.
  • the specific fusion formula is similar to the time domain fusion.
  • the inverse spectrum of the fused spectrum is inversely transformed to obtain a fused image.
  • the fused image is added back, pixel by pixel, to the details filtered out of the first target image; the resulting image is then upsampled by a factor of 2 in width and height and denoted FusionUp.
  • the two sampling error maps of fusion_se and noghost_se are selected point by point, and added to FusionUp point by point to increase the image detail to obtain the second target image.
  • s4311-s4314 use the same specific algorithms as s331-s334, with only the input images replaced, and are not described again here.
  • S4325: perform weighted fusion of pixel values on the second de-ghost image and the first target image to obtain the second target image.
  • this may include time-domain and frequency-domain implementations; refer to the foregoing time-domain fusion s4315(1), s4315(3) and frequency-domain fusion s4315(2), s4315(4); the algorithms are the same, with only the input images replaced, and are not repeated here.
  • the third capture mode is more complicated than the first capture mode and can, to some extent, be understood as an alternative to the second capture mode; both the second and third capture modes are typically used in medium-brightness scenes.
  • a flowchart of the third capture mode can be seen in FIG. 6.
  • Step 51: take one frame of a second new image at the current sensitivity and current exposure duration of the camera; keep the current sensitivity of the camera unchanged, set the current exposure duration to a lower third exposure duration, and capture N frames of images;
  • the action of taking a picture can be triggered by the user pressing the shutter button.
  • in one case, one frame of the second new image is first obtained at the current sensitivity and current exposure duration; the current sensitivity of the camera is then kept unchanged and the current exposure duration is set to a lower third exposure duration, under which N pictures are taken; a total of N+1 pictures is obtained for subsequent processing.
  • in another case, the current exposure duration is first set to the lower third exposure duration, under which N pictures are taken, and the exposure duration is then restored to the current exposure duration and, at the current sensitivity, one frame of the second new image is obtained; a total of N+1 pictures is obtained for subsequent processing.
  • in the preview image data stream, the display image may be shown in the state of the current sensitivity and current exposure duration, or in the state of the third exposure duration and the current sensitivity.
  • Step 52: apply the first capture mode scheme (steps 31-34) to the N frames of images obtained in the previous step to obtain the first target image; it should be understood that the third exposure duration and some adjustable thresholds may change correspondingly with changes in the scene.
  • Step 53: obtain a third target image according to the first target image and the second new image.
  • it may include but is not limited to the following two implementation modes:
  • S5316: perform weighted fusion of pixel values on the third de-ghost image and the fourth target image to obtain a fifth target image; the fusion algorithm can be any one of time-domain fusion s4315(1), s4315(3) and frequency-domain fusion s4315(2), s4315(4);
  • S5317: perform pyramid fusion processing on the fifth target image and the first target image to obtain the third target image; specifically, construct Laplacian pyramids of the fifth target image and the first target image respectively, construct a weight map for the image fusion, normalize and smooth the weight map, build a Gaussian pyramid from the normalized and smoothed weight map, fuse the pyramids of all the images at each corresponding layer according to the per-layer weights to obtain a synthesized pyramid, and reconstruct the synthesized pyramid from the top layer following the inverse of the pyramid-generation process, adding each layer's information back one by one to recover the fused image.
  • the pyramid fusion process is an existing mature algorithm and will not be described in detail.
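  • A minimal sketch of such Laplacian-pyramid blending driven by a Gaussian pyramid of weights; the number of levels and the construction of the weight map are left to the caller and are illustrative assumptions:

```python
import cv2
import numpy as np

def pyramid_fuse(img_a, img_b, weight, levels=4):
    """Blend img_a and img_b using Laplacian pyramids and a Gaussian pyramid of the weight map.

    `weight` is a float map in [0, 1] (1 = take img_a); how it is built, normalized and
    smoothed is up to the caller, as in the description above.
    """
    a, b = img_a.astype(np.float32), img_b.astype(np.float32)
    w = weight.astype(np.float32)
    if a.ndim == 3 and w.ndim == 2:
        w = np.repeat(w[..., None], a.shape[2], axis=2)

    def gaussian_pyr(img):
        pyr = [img]
        for _ in range(levels):
            pyr.append(cv2.pyrDown(pyr[-1]))
        return pyr

    def laplacian_pyr(img):
        g = gaussian_pyr(img)
        lap = [g[i] - cv2.pyrUp(g[i + 1], dstsize=(g[i].shape[1], g[i].shape[0]))
               for i in range(levels)]
        lap.append(g[-1])  # top of the pyramid keeps the low-frequency residual
        return lap

    la, lb, gw = laplacian_pyr(a), laplacian_pyr(b), gaussian_pyr(w)
    blended = [gw[i] * la[i] + (1.0 - gw[i]) * lb[i] for i in range(levels + 1)]

    # Reconstruct from the top of the synthesized pyramid, adding each layer back one by one.
    out = blended[-1]
    for i in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=(blended[i].shape[1], blended[i].shape[0])) + blended[i]
    return out
```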
  • s5312-s5316 can refer to s4311-s4315 and are not described again here.
  • S5326: perform weighted fusion of pixel values on the fourth de-ghost image and the fourth target image to obtain a sixth target image; the fusion algorithm can be any one of time-domain fusion s4315(1), s4315(3) and frequency-domain fusion s4315(2), s4315(4);
  • S5327: perform pyramid fusion processing on the sixth target image and the first target image to obtain the third target image; specifically, construct Laplacian pyramids of the sixth target image and the first target image respectively, construct a weight map for the image fusion, normalize and smooth the weight map, build a Gaussian pyramid from the normalized and smoothed weight map, fuse the pyramids of all the images at each corresponding layer according to the per-layer weights to obtain a synthesized pyramid, and reconstruct the synthesized pyramid from the top layer following the inverse of the pyramid-generation process, adding each layer's information back one by one to recover the fused image.
  • the pyramid fusion process is an existing mature algorithm and will not be described in detail.
  • the present invention provides an image processing method capable of providing a capture mode for a camera.
  • with it, the user can capture clear images in different scenes, satisfying the user's desire to snap photos and allowing life scenes to be captured and recorded anytime and anywhere, greatly improving the user experience.
  • the embodiment of the present invention provides an image processing apparatus 700.
  • the apparatus 700 can be applied to various types of photographing devices. As shown in FIG. 7, the apparatus 700 includes an obtaining module 701, a determining module 702, a de-ghosting module 703, and a mean operation module 704, where:
  • the obtaining module 701 is configured to obtain an N frame image.
  • the obtaining module 701 can be implemented by the processor invoking a program instruction in the memory to control the camera to acquire an image.
  • the determining module 702 is configured to determine one reference image in the N frame image, and the remaining N-1 frame images are to be processed images.
  • the determining module 702 can be implemented by a processor invoking a program instruction in a memory or an externally input program instruction.
  • the de-ghosting module 703 is configured to perform the following steps 1-4 on the i-th frame image of the N-1 frames to be processed to obtain N-1 frame de-ghost images, where i takes all positive integers not greater than N-1;
  • Step 1 register the image of the ith frame with the reference image to obtain an i-th registration image
  • Step 2 Obtain an ith difference image according to the ith registration image and the reference image
  • Step 3 Obtain an i-th ghost weight image according to the ith difference image
  • Step 4 merging the i-th registration image with the reference image according to the i-th ghost weight image to obtain an i-th frame de-ghost image;
  • the de-ghosting module 703 can be implemented by a processor, and can perform corresponding calculations by calling data and algorithms in the local storage or the cloud server.
  • the mean operation module 704 is configured to perform a mean operation on the reference image and the N-1 frame de-ghost images to obtain a first target image.
  • the mean operation module 704 can be implemented by a processor, which performs the corresponding computation by calling data and algorithms in the local memory or a cloud server.
  • the obtaining module 701 is specifically configured to perform the method mentioned in step 31 and the method that can be replaced by the same; the determining module 702 is specifically configured to perform the method mentioned in step 32 and the method that can be replaced equally; The de-ghosting module 703 is specifically configured to perform the method mentioned in the step 33 and the method that can be replaced equally; the averaging operation module 704 is specifically configured to perform the method mentioned in the step 34 and the method that can be replaced equally.
  • the above specific method embodiments and the explanations and expressions in the embodiments are also applicable to the method execution in the device.
  • the device 700 further includes a detection module 705, and the detection module 705 is configured to control the acquisition module to acquire an N frame image according to the following first acquisition manner when detecting that the following three situations exist simultaneously;
  • Case 1: the framing image of the camera is detected as a moving image;
  • Case 2: the current exposure duration of the camera is detected to exceed the safe duration;
  • Case 3: the camera is detected to be in an extremely bright environment, that is, the current sensitivity is less than the first preset threshold and the current exposure duration is less than the second preset threshold.
  • the first acquisition manner: keep the product of the current sensitivity of the camera and the exposure duration constant, decrease the exposure duration and increase the sensitivity according to the preset ratio to obtain the first exposure duration and the first sensitivity; set the exposure duration and sensitivity of the camera to the first exposure duration and the first sensitivity, respectively, and capture N frames of images.
  • the detecting module 705 is configured to control the acquiring module to acquire an N frame image according to the following second obtaining manner or the third obtaining manner when detecting that the following three situations exist simultaneously;
  • Case 1: the framing image of the camera is detected as a moving image;
  • Case 2: the current exposure duration of the camera is detected to exceed the safe duration;
  • Case 3: the camera is detected to be in a medium-high brightness environment, that is, the current sensitivity is within the first preset threshold interval and the current exposure duration is within the second preset threshold interval.
  • the second acquisition manner: keep the product of the current sensitivity of the camera and the exposure duration constant, decrease the exposure duration and increase the sensitivity according to the preset ratio to obtain the second exposure duration and the second sensitivity; set the exposure duration and sensitivity of the camera to the second exposure duration and the second sensitivity, respectively, and capture N frames of images;
  • the third acquisition manner: shoot a second new image at the current sensitivity and exposure duration of the camera; keep the current sensitivity of the camera unchanged and set the current exposure duration to a lower third exposure duration; and capture N frames of images.
  • the apparatus 700 may further include a fusion module 706, configured to obtain a second target image according to the first target image and the first new image, or to obtain a third target image according to the first target image and the second new image.
  • the first new image is registered with the reference image to obtain a first registration image; a first difference image is obtained according to the first registration image and the first target image; a first ghost weight image is obtained according to the first difference image; the first registration image is fused with the first target image according to the first ghost weight image to obtain a first de-ghost image; and weighted fusion of pixel values is performed on the first de-ghost image and the first target image to obtain the second target image.
  • this is used to perform the method mentioned in method (1) of step 43 and methods that can equivalently replace it.
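The final "weighted fusion of pixel values" is not given a concrete weight in the text; a minimal sketch, assuming an equal blend and OpenCV images of the same size and type, could look like this. The registration / difference / ghost-weight steps reuse the same building blocks as steps s331-s333 described later in this document.

```python
import cv2

def weighted_pixel_fusion(deghost_image, first_target, w: float = 0.5):
    """Blend the de-ghost image with the first target image pixel by pixel.

    The 50/50 weight is an assumption made for illustration only.
    """
    return cv2.addWeighted(deghost_image, w, first_target, 1.0 - w, 0)
```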
  • alternatively, the first new image is registered with the first target image to obtain a second registration image; a second difference image is obtained according to the second registration image and the first target image; a second ghost weight image is obtained according to the second difference image; the second registration image is fused with the first target image according to the second ghost weight image to obtain a second de-ghost image; and weighted fusion of pixel values is performed on the second de-ghost image and the first target image to obtain the second target image.
  • this is used to perform the method mentioned in method (2) of step 43 and methods that can equivalently replace it.
  • similarly, to obtain the third target image, the first target image is processed according to a preset brightness correction algorithm based on the second new image to obtain a fourth target image; the second new image is registered with the reference image or the fourth target image to obtain a third registration image; a third difference image and a third ghost weight image are obtained from the third registration image and the fourth target image; the third registration image is fused with the fourth target image according to the third ghost weight image to obtain a third de-ghost image; weighted fusion of pixel values is performed on the third de-ghost image and the fourth target image to obtain a fifth target image; and pyramid fusion processing is performed on the fifth target image and the first target image to obtain the third target image.
  • this is used to perform the method mentioned in method (1) of step 53 and methods that can equivalently replace it.
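"Pyramid fusion" is commonly realized as a Laplacian-pyramid blend; the sketch below is one such reading, with the number of levels and the equal per-level weights being assumptions rather than values prescribed by the patent.

```python
import cv2
import numpy as np

def pyramid_fuse(img_a, img_b, levels: int = 4):
    """Fuse two same-sized images by averaging their Laplacian pyramids."""
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)

    def laplacian_pyramid(img):
        gauss = [img]
        for _ in range(levels):
            gauss.append(cv2.pyrDown(gauss[-1]))
        lap = [gauss[i] - cv2.pyrUp(gauss[i + 1],
                                    dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
               for i in range(levels)]
        lap.append(gauss[-1])
        return lap

    fused = [(la + lb) * 0.5
             for la, lb in zip(laplacian_pyramid(a), laplacian_pyramid(b))]
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```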
  • each capture mode has pre-set parameter rules (pre-stored locally in the terminal or in a cloud server); that is, each capture mode has a corresponding sensitivity and exposure duration, and may of course also include other performance parameters. Once a specific capture mode is entered, the acquisition module automatically adjusts to the corresponding sensitivity and exposure duration for shooting. Therefore, if the user directly selects a capture mode, the acquisition module takes N pictures at the corresponding sensitivity and exposure duration for subsequent image processing in that mode.
  • the above detection module 705 and fusion module 706 can be implemented by a processor calling program instructions in a memory or in a cloud server.
  • the present invention provides an image processing apparatus 700, with which the user can capture a clear image in different scenes, satisfying the user's desire to snap a shot, and can capture and record scenes of daily life anytime and anywhere, thereby greatly improving the user experience.
  • the division of modules in the above device 700 is only a division of logical functions; in an actual implementation, the modules may be wholly or partly integrated into one physical entity, or may be physically separate.
  • each of the above modules may be a separately arranged processing element, may be integrated in a chip of the terminal, or may be stored in a storage element of the controller in the form of program code, with a processing element of the processor calling and executing the functions of each of the above modules.
  • the individual modules can be integrated or implemented independently.
  • the processing element described herein may be an integrated circuit chip with signal processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in the processing element or by instructions in the form of software.
  • the processing element may be a general-purpose processor, such as a central processing unit (CPU), or may be one or more integrated circuits configured to implement the above method, for example one or more application-specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field-programmable gate arrays (FPGA).
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • the computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction apparatus, where the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

Disclosed are an image processing method, device, and apparatus. The method comprises: acquiring N frames of images; determining a reference image among the N frames of images, the remaining N-1 frames of images being to-be-processed images; obtaining N-1 frames of ghost-reduction images according to the N-1 frames of to-be-processed images; and performing a mean operation on the reference image and the N-1 frames of ghost-reduction images to obtain a first target image. The method provides a capture mode for the camera so that a user can capture clear images in different scenes, thereby improving the user experience.

Description

一种图像处理方法、装置与设备 Image processing method, device and apparatus 技术领域 Technical Field
本发明涉及终端技术领域,尤其涉及一种图像处理方法、装置与设备。The present invention relates to the field of terminal technologies, and in particular, to an image processing method, apparatus, and device.
背景技术 Background Art
现实拍照中,在拍摄运动物体时,都会存在或轻或重的运动模糊,如若拍摄玩耍的孩子,奔跑的宠物等运动速度较快的目标时,更是会出现较为严重的拖影现象。在按下快门键之后的曝光过程中,若存在手持抖动,还会使图像模糊现象更加严重。In reality photography, when shooting a moving object, there will be light or heavy motion blur. If the child playing the game, the running pet, etc., the target with faster movement speed, there will be more serious smear phenomenon. In the exposure process after the shutter button is pressed, if there is hand-held jitter, the image blurring phenomenon is further aggravated.
为了提升拍摄运动物体的清晰度,降低手抖影响,专业摄影师常常配备昂贵的大光圈,光学防抖镜头,三脚架等辅助拍照设备。但对于无时无刻,随时随地都在拍照的手机用户来说,这些方法却因为价格高,不便携等原因较难以普及。In order to improve the clarity of shooting moving objects and reduce the impact of hand shake, professional photographers are often equipped with expensive large apertures, optical anti-shake lenses, tripods and other auxiliary camera equipment. However, for mobile phone users who are taking photos anytime and anywhere, these methods are more difficult to popularize because of their high price and lack of portability.
为了解决运动模糊以及可能会同时存在的亮度、噪声等拍照时比较常见的问题,本发明提出一套针对运动场景抓拍的拍照方法。In order to solve the problems that are common in motion blur and possible brightness, noise, etc., the present invention proposes a set of photographing methods for capturing motion scenes.
发明内容Summary of the invention
本发明实施例提供一种图像处理方法、装置与设备,可以为用户提供抓拍机制,在处理运动场景时,能够抓拍到高清晰度的图像,提高用户的拍照体验。The embodiment of the invention provides an image processing method, device and device, which can provide a capture mechanism for a user, and can capture high-definition images when processing a motion scene, thereby improving the user's photographing experience.
本发明实施例提供的具体技术方案如下:The specific technical solutions provided by the embodiments of the present invention are as follows:
第一方面,本发明实施例提供一种图像处理方法,方法包括:获得N帧图像;在N帧图像中确定一个参考图像,其余N-1帧图像为待处理图像;根据N-1帧待处理图像得到N-1帧去鬼影图像;对参考图像和N-1帧去鬼影图像进行均值运算得到第一目标图像;其中,根据所述N-1帧待处理图像得到N-1帧去鬼影图像包括:对于N-1帧待处理图像中的第i帧图像执行步骤1-步骤4,i取遍不大于N-1的所有正整数,In a first aspect, an embodiment of the present invention provides an image processing method, including: obtaining an N-frame image; determining a reference image in the N-frame image, and resting the N-1 frame image as a to-be-processed image; Processing the image to obtain an N-1 frame de-ghost image; performing a mean operation on the reference image and the N-1 frame de-ghost image to obtain a first target image; wherein, the N-1 frame is obtained according to the N-1 frame to be processed image De-ghosting image includes: performing step 1 - step 4 for the ith frame image in the image to be processed of the N-1 frame, i taking all positive integers not greater than N-1,
步骤1:将第i帧图像与参考图像进行配准,得到第i配准图像;步骤2:根据第i配准图像与参考图像得到第i差异图像;步骤3:根据第i差异图像得到第i鬼影权重图像;步骤4:根据第i鬼影权重图像,将第i配准图像与参考图像进行融合,得到第i帧去鬼影图像。Step 1: register the image of the i-th frame with the reference image to obtain an i-th registration image; step 2: obtain an i-th difference image according to the i-th registration image and the reference image; and step 3: obtain the first image according to the i-th difference image i ghost ghost weight image; step 4: according to the i-th ghost weight image, the i-th registration image is merged with the reference image to obtain the i-th frame de-ghost image.
第二方面,本发明实施例提供一种图像处理装置,装置包括:获取模块,用于获得N帧图像;确定模块,用于在N帧图像中确定一个参考图像,其余N-1帧图像为待处理图像;去鬼影模块,用于对于N-1帧待处理图像中的第i帧图像执行以下步骤1-步骤4,以得到N-1帧去鬼影图像,i取遍不大于N-1的所有正整数;步骤1:将第i帧图像与参考图像进行配准,得到第i配准图像;步骤2:根据第i配准图像与参考图像得到第i差异图像;步骤3:根据第i差异图像得到第i鬼影权重图像;步骤4:根据第i鬼影权重图像,将第i配准图像与参考图像进行融合,得到第i帧去鬼影图像;均值运算模块,对参考图像和N-1帧去鬼影图像进行均值运算得到第一目标图像。In a second aspect, an embodiment of the present invention provides an image processing apparatus, where the apparatus includes: an acquiring module, configured to obtain an N-frame image; and a determining module, configured to determine a reference image in the N-frame image, and the remaining N-1 frame images are An image to be processed; a ghosting module, configured to perform the following steps 1 - 4 on the ith frame image in the image to be processed of the N-1 frame to obtain an N-1 frame de ghost image, where the i is not greater than N All positive integers of -1; Step 1: Register the image of the ith frame with the reference image to obtain the i-th registration image; Step 2: Obtain the ith difference image according to the i-th registration image and the reference image; Step 3: Obtaining an i-th ghost weight image according to the i-th difference image; step 4: fusing the i-th registration image and the reference image according to the i-th ghost weight image to obtain an i-th frame de-ghost image; the mean operation module, The reference image and the N-1 frame go ghost image are averaged to obtain a first target image.
根据本发明实施例提供的上述方法和装置的技术方案,用户可以在运动场景中, 依旧能对运动中的图像进行抓拍,并能得到高清晰度的图片。According to the technical solution of the above method and apparatus provided by the embodiment of the present invention, the user can still capture the image in motion in the motion scene, and can obtain a high-definition picture.
根据第一方面或者第二方面,在一种可能的设计中,在获得N帧图像之前,方法还包括:检测到以下三种情形同时存在时,产生控制信号,所述控制信号用于指示获取N帧图像;情形1:检测到相机的取景图像为运动图像;情形2:检测到相机的当前曝光时长超过安全时长;情形3:检测到相机处于极高亮场景中,相应的,相机的当前感光度小于第一预设阈值,且所述当前曝光时长小于第二预设阈值。其中,作为实施例的补充,上述3种情形可以检测到至少一个即可产生相应的控制信号。例如检测到情形3,表示极高亮条件,则智能切换到下文提到的第一抓拍模式,产生控制信号用第一抓拍模式的方式获取N帧图片。检测上述情形的方法可以由检测模块执行。According to the first aspect or the second aspect, in a possible design, before obtaining the N frame image, the method further comprises: when detecting that the following three situations exist simultaneously, generating a control signal, the control signal is used to indicate acquisition N frame image; Case 1: The framing image of the camera is detected as a moving image; Case 2: The current exposure time of the camera is detected to exceed the safe duration; Case 3: The camera is detected to be in a very bright scene, correspondingly, the current camera The sensitivity is less than the first preset threshold, and the current exposure duration is less than the second preset threshold. In addition, as a supplement to the embodiment, the above three situations can detect at least one to generate a corresponding control signal. For example, if the situation 3 is detected, indicating a very high brightness condition, the smart switch to the first capture mode mentioned below, and the control signal is generated to acquire the N frame picture in the first capture mode. The method of detecting the above situation can be performed by the detection module.
根据第一方面或者第二方面,在一种可能的设计中,获得N帧图像包括:保持相机当前的感光度和曝光时长的乘积不变,按照预设比例降低曝光时长并提高感光度,得到第一曝光时长和第一感光度;将相机的曝光时长和感光度分别设置为所述第一曝光时长和所述第一感光度,拍摄N帧图像。According to the first aspect or the second aspect, in one possible design, obtaining the N-frame image includes: maintaining the product of the current sensitivity of the camera and the exposure duration constant, decreasing the exposure duration and increasing the sensitivity according to a preset ratio, a first exposure duration and a first sensitivity; setting an exposure duration and a sensitivity of the camera to the first exposure duration and the first sensitivity, respectively, and capturing an N-frame image.
根据第一方面或者第二方面,在一种可能的设计中,在获得N帧图像之前,方法还包括:检测到以下三种情形同时存在时,产生控制信号,所述控制信号用于指示获取N帧图像;情形1:检测到相机的取景图像为运动图像;情形2:检测到相机的当前曝光时长超过安全时长;情形3:检测到相机处于中度高亮场景中,相应的,相机的当前感光度在第一预设阈值区间,且当前曝光时长在第二预设阈值区间。其中,作为实施例的补充,上述3种情形可以检测到至少一个即可产生相应的控制信号。例如检测到情形3,表示中度高亮条件,则智能切换到下文提到的第二抓拍模式或第三抓拍模式,产生控制信号用第二抓拍模式或第三抓拍模式的方式获取N帧图片。检测上述情形的方法可以由检测模块执行。According to the first aspect or the second aspect, in a possible design, before obtaining the N frame image, the method further comprises: when detecting that the following three situations exist simultaneously, generating a control signal, the control signal is used to indicate acquisition N frame image; Case 1: The viewfinder image of the camera is detected as a moving image; Case 2: The current exposure time of the camera is detected to exceed the safe duration; Case 3: The camera is detected to be in a moderately bright scene, correspondingly, the camera The current sensitivity is in a first preset threshold interval, and the current exposure duration is in a second predetermined threshold interval. In addition, as a supplement to the embodiment, the above three situations can detect at least one to generate a corresponding control signal. For example, if the situation 3 is detected, indicating a medium highlight condition, the smart switch to the second capture mode or the third capture mode mentioned below, and the control signal is generated to acquire the N frame image in the second capture mode or the third capture mode. . The method of detecting the above situation can be performed by the detection module.
根据第一方面或者第二方面,在一种可能的设计中,即第二抓拍模式,获得N帧图像包括:保持相机当前的感光度和曝光时长的乘积不变,按照预设比例降低曝光时长并提高感光度,得到第二曝光时长和第二感光度;将相机的曝光时长和感光度分别设置为第二曝光时长和第二感光度,拍摄N帧图像;方法还包括:按照相机当前的感光度和曝光时长拍摄一帧第一新图像;根据第一目标图像和第一新图像得到第二目标图像。该技术方案可以用于纯粹的抓拍模式,无需对当前拍照环境进行判定。According to the first aspect or the second aspect, in one possible design, that is, the second capture mode, obtaining the N frame image includes: keeping the product of the current sensitivity of the camera and the exposure duration constant, and decreasing the exposure duration according to the preset ratio. And increasing the sensitivity, obtaining the second exposure duration and the second sensitivity; setting the exposure duration and the sensitivity of the camera to the second exposure duration and the second sensitivity respectively, and taking N frames of images; the method further comprises: pressing the current camera The first new image of one frame is captured by the sensitivity and the exposure duration; the second target image is obtained according to the first target image and the first new image. This technical solution can be used in a pure capture mode without the need to make a decision on the current photographing environment.
根据第一方面或者第二方面,在一种可能的设计中,即第二抓拍模式,所述根据所述第一目标图像和所述第一新图像得到第二目标图像包括:将所述第一新图像与所述参考图像或者第一目标图像进行配准,得到第一(二)配准图像;根据所述第一(二)配准图像与所述第一目标图像得到第一(二)差异图像;根据所述第一(二)差异图像得到第一(二)鬼影权重图像;根据所述第一(二)鬼影权重图像,将所述第一(二)配准图像与所述第一目标图像进行融合,得到第一(二)去鬼影图像;根据所述第一(二)去鬼影图像和所述第一目标图像进行像素值的加权融合,得到所述第二目标图像。According to the first aspect or the second aspect, in a possible design, that is, the second capture mode, the obtaining the second target image according to the first target image and the first new image comprises: A new image is registered with the reference image or the first target image to obtain a first (two) registration image; and the first (two) registration image and the first target image are obtained according to the first (two) a difference image; obtaining a first (two) ghost weight image according to the first (two) difference image; and the first (two) registration image according to the first (two) ghost weight image The first target image is fused to obtain a first (two) de-ghost image; and the first (two) de-ghost image and the first target image are subjected to weighted fusion of pixel values to obtain the first Two target images.
根据第一方面或者第二方面,在一种可能的设计中,即第三抓拍模式,获得N帧图像包括:保持相机当前的感光度不变,将当前曝光时长设置为更低的第三曝光时长;并拍摄N帧图像;所述方法还包括:按照所述相机当前的感光度和曝光时长拍摄一帧第二新图像;根据所述第一目标图像和所述第二新图像得到第三目标图像。该技术方 案可以用于纯粹的抓拍模式,无需对当前拍照环境进行判定。According to the first aspect or the second aspect, in one possible design, that is, the third capture mode, obtaining the N frame image includes: keeping the current sensitivity of the camera unchanged, and setting the current exposure duration to a lower third exposure. And capturing N frames of images; the method further comprising: capturing a second new image according to the current sensitivity and the exposure duration of the camera; obtaining a third according to the first target image and the second new image Target image. This technical solution can be used in a pure capture mode without the need to make a decision on the current photographing environment.
根据第一方面或者第二方面,在一种可能的设计中,即第三抓拍模式,根据所述第一目标图像和所述第二新图像得到第三目标图像包括:根据第二新图像,对所述第一目标图像按照预设的亮度校正算法进行处理得到第四目标图像;将所述第二新图像与所述参考图像或者所述第四目标图像进行配准,得到第三(四)配准图像;根据所述第三(四)配准图与所述所述第四目标图像得到第三(四)差异图像;根据所述第三(四)差异图像得到第三(四)鬼影权重图像;根据所述第三(四)鬼影权重图像,将所述第三(四)配准图像与所述第四目标图像进行融合,得到第三(四)去鬼影图像;根据所述第三(四)去鬼影图像和所述第四目标图像进行像素值的加权融合,得到第五(六)目标图像;对所述第五(六)目标图像和所述第一目标图像进行金字塔融合处理,得到所述第三目标图像。According to the first aspect or the second aspect, in a possible design, that is, the third snap mode, obtaining the third target image according to the first target image and the second new image includes: according to the second new image, Processing the first target image according to a preset brightness correction algorithm to obtain a fourth target image; and registering the second new image with the reference image or the fourth target image to obtain a third (four a registration image; obtaining a third (four) difference image according to the third (four) registration map and the fourth target image; obtaining a third (four) according to the third (four) difference image Ghost weighting image; merging the third (four) registration image with the fourth target image according to the third (four) ghost weight image to obtain a third (four) de-ghost image; Performing weighted fusion of pixel values according to the third (four) de-ghost image and the fourth target image to obtain a fifth (six) target image; and the fifth (six) target image and the first The target image is subjected to pyramid fusion processing to obtain the third target image.
更具体地,上述了可能的技术实现可以由处理器调用存储器中的程序与指令进行相应的运算处理。More specifically, the above-mentioned possible technical implementations may be processed by the processor in response to programs and instructions in the memory.
根据第一方面或者第二方面,在一种可能的设计中,用户根据自己的选择直接进入抓拍模式,如上文中会提到的第一抓拍模式或者第二抓拍模式或者第三抓拍模式;这时,终端无需对取景环境进行检测,因为每一个抓拍模式都会有个预先设定的参数规则(预先存储在终端本地或者云端服务器),即每一个抓拍模式都会有对应的感光度和曝光时长,当然还可以包括有其他的性能参数等;一旦进入到特定的抓拍模式,相机会自动地调整到对应的感光度和对应的曝光时长进行拍摄。因此如果用户直接采用了抓拍模式,则所述获取N张图片就会采用对应的感光度和对应的曝光时长拍摄N张图片,以进行相应模式的后续图像处理。According to the first aspect or the second aspect, in a possible design, the user directly enters the capture mode according to his own choice, such as the first capture mode or the second capture mode or the third capture mode mentioned above; The terminal does not need to detect the framing environment, because each capture mode has a preset parameter rule (pre-stored in the terminal local or cloud server), that is, each capture mode will have a corresponding sensitivity and exposure duration, of course. Other performance parameters, etc. can also be included; once entering a particular capture mode, the camera automatically adjusts to the corresponding sensitivity and corresponding exposure duration for shooting. Therefore, if the user directly adopts the snap mode, the N pictures are taken to take N pictures with corresponding sensitivity and corresponding exposure time to perform subsequent image processing in the corresponding mode.
在第一抓拍模式下,一种可能的设计方式中,拍照的动作可以由用户按下快门键进行触发。一种情形下,用户按下快门之前,相机当前的感光度和当前曝光时长就已经被调整设置为第一曝光时长和第一感光度,用户按下快门时,便以第一曝光时长和第一感光度拍摄N张图片进行后续处理;另一种情形下,用户按下快门之前,相机依旧保持所述当前的感光度和当前曝光时长,当用户按下快门时,相机当前的感光度和当前曝光时长被调整设置为第一曝光时长和第一感光度,并以第一曝光时长和第一感光度拍摄N张图片进行后续处理。此外,预览图像数据流中即可以以所述当前的感光度和当前曝光时长的状态进行显示图像,也可以以所述第一曝光时长和第一感光度的状态进行显示图像。In the first snap mode, in one possible design mode, the action of photographing can be triggered by the user pressing the shutter button. In one case, before the user presses the shutter, the current sensitivity of the camera and the current exposure duration have been adjusted to the first exposure duration and the first sensitivity. When the user presses the shutter, the first exposure duration and the first exposure time N photos are taken at a sensitivity for subsequent processing; in another case, the camera still maintains the current sensitivity and the current exposure time before the user presses the shutter, and the current sensitivity of the camera when the user presses the shutter. The current exposure duration is adjusted to be set to the first exposure duration and the first sensitivity, and N pictures are taken for the first exposure duration and the first sensitivity for subsequent processing. Further, in the preview image data stream, the image may be displayed in the state of the current sensitivity and the current exposure time, or the image may be displayed in the state of the first exposure time and the first sensitivity.
在第二抓拍模式下,一种可能的设计方式中,拍照的动作可以由用户按下快门键进行触发。一种情形下,当用户按下快门时,便以所述当前的感光度和当前曝光时长获得一帧第一新图像,并将所述当前的感光度和当前曝光时长调整设置为第二曝光时长和第二感光度并在该条件下拍摄N张图片,共得到N+1张图片,以进行后续处理。另一种情形下,当用户按下快门时,便将当前的感光度和当前曝光时长设置为第二曝光时长和第二感光度,并在该条件下拍摄N张图片,然后再恢复到所述当前的感光度和当前曝光时长的条件下获得一帧第一新图像;共得到N+1张图片,以进行后续处理。此外,预览图像数据流中即可以以所述当前的感光度和当前曝光时长的状态进行显示图像,也可以以所述第二曝光时长和第二感光度的状态进行显示图像。In the second snap mode, in one possible design mode, the action of photographing can be triggered by the user pressing the shutter button. In one case, when the user presses the shutter, a first new image of the frame is obtained with the current sensitivity and the current exposure duration, and the current sensitivity and the current exposure duration adjustment are set to the second exposure. The duration and the second sensitivity were taken and N pictures were taken under the conditions, and a total of N+1 pictures were obtained for subsequent processing. In another case, when the user presses the shutter, the current sensitivity and the current exposure duration are set to the second exposure duration and the second sensitivity, and N pictures are taken under the condition, and then restored to the location. A first new image of one frame is obtained under the condition of the current sensitivity and the current exposure duration; a total of N+1 pictures are obtained for subsequent processing. Further, in the preview image data stream, the display image may be displayed in the state of the current sensitivity and the current exposure duration, or the display image may be displayed in the state of the second exposure duration and the second sensitivity.
在第三抓拍模式下,一种可能的设计方式中,拍照的动作可以由用户按下快门键进行触发。一种情形下,当用户按下快门时,便以当前的感光度和当前曝光时长获得一帧第二新图像;并保持相机当前的感光度不变,将当前曝光时长设置为更低的第三曝光时长,并在该条件下拍摄N张图片,共得到N+1张图片,以进行后续处理。另一种情形下,当用户按下快门时,便将当前曝光时长设置为更低的第三曝光时长,并在该条件下拍摄N张图片,然后再恢复到所述当前曝光时长在当前感光度的条件下获得一帧第二新图像;共得到N+1张图片,以进行后续处理。此外,预览图像数据流中即可以以所述当前的感光度和当前曝光时长的状态进行显示图像,也可以以所述第三曝光时长和所述当前感光度的状态进行显示图像。In the third snap mode, in one possible design mode, the action of photographing can be triggered by the user pressing the shutter button. In one case, when the user presses the shutter, the second new image of one frame is obtained with the current sensitivity and the current exposure duration; and the current sensitivity of the camera is kept unchanged, and the current exposure duration is set to be lower. Three exposure durations, and N pictures were taken under this condition, and a total of N+1 pictures were obtained for subsequent processing. In another case, when the user presses the shutter, the current exposure duration is set to a lower third exposure duration, and N pictures are taken under the condition, and then the current exposure time is restored to the current sensitivity. A second new image of one frame is obtained under the condition of degree; a total of N+1 pictures are obtained for subsequent processing. Further, in the preview image data stream, the display image may be displayed in the state of the current sensitivity and the current exposure duration, or the display image may be performed in the state of the third exposure duration and the current sensitivity.
第三方面,本发明实施例提供一种终端设备,终端设备包含存储器、处理器、总线、摄像头,所述存储器、所述摄像头以及所述处理器通过所述总线相连;其中,摄像头用于在所述处理器的控制下采集图像信号;存储器用于存储计算机程序和指令;处理器用于调用所述存储器中存储的所述计算机程序和指令,使所述终端设备执行如上述任何一种可能的设计方法。In a third aspect, an embodiment of the present invention provides a terminal device, where the terminal device includes a memory, a processor, a bus, and a camera, where the memory, the camera, and the processor are connected by using the bus; wherein, the camera is used to Acquiring an image signal under control of the processor; storing a computer program and instructions; the processor is configured to invoke the computer program and instructions stored in the memory, to cause the terminal device to perform any of the above possibilities Design method.
根据第三方面,在一种可能的设计中,终端设备还包括天线***、天线***在处理器的控制下,收发无线通信信号实现与移动通信网络的无线通信;移动通信网络包括以下的一种或多种:GSM网络、CDMA网络、3G网络、4G网络、FDMA、TDMA、PDC、TACS、AMPS、WCDMA、TDSCDMA、WIFI以及LTE网络。According to the third aspect, in a possible design, the terminal device further includes an antenna system, and the antenna system transmits and receives wireless communication signals under the control of the processor to implement wireless communication with the mobile communication network; the mobile communication network includes the following one Or multiple: GSM network, CDMA network, 3G network, 4G network, FDMA, TDMA, PDC, TACS, AMPS, WCDMA, TDSCDMA, WIFI and LTE networks.
上述方法、装置与设备既可以应用于终端自带的拍照软件进行拍摄的场景;也可以应用于终端中运行第三方拍照软件进行拍摄的场景;拍摄包括普通拍摄,自拍,以及视频电话、视频会议、VR拍摄、航拍等多种拍摄方式。The above method, device and device can be applied to a scene in which the camera software provided by the terminal is used for shooting; or can be applied to a scene in which a third-party camera software is used for shooting in the terminal; the shooting includes normal shooting, self-timer, video telephony, and video conference. , VR shooting, aerial photography and other shooting methods.
通过上述方案,本发明的实施例中终端,可以包含多种拍照模式,如单纯的抓拍模式,或者根据场景条件检测后决定是否进行抓拍的只能拍照模式;终端处于抓拍模式时,对于运动场景、或者信噪比较高等不易拍出清晰照片的场景,本方案已经能够拍出高清晰度的照片,大大提高用户的拍照体验。Through the foregoing solution, the terminal in the embodiment of the present invention may include multiple camera modes, such as a simple capture mode, or a camera-only mode that determines whether to capture after the scene condition is detected; when the terminal is in the capture mode, for the motion scene Or, if the signal-to-noise ratio is high, it is difficult to take a clear photo scene. This program has been able to take high-definition photos, greatly improving the user's photo experience.
附图说明DRAWINGS
图1为一种终端的结构示意图;1 is a schematic structural view of a terminal;
图2为本发明实施例中一种图像处理方法的流程图;2 is a flowchart of an image processing method according to an embodiment of the present invention;
图3为本发明实施例中一种对图像去鬼影方法的流程图;3 is a flowchart of a method for de-ghosting an image according to an embodiment of the present invention;
图4为本发明实施例中一种抓拍***流程图;4 is a flowchart of a capture system according to an embodiment of the present invention;
图5为本发明实施例中另一种图像处理方法示意图;FIG. 5 is a schematic diagram of another image processing method according to an embodiment of the present invention; FIG.
图6为本发明实施例中另一种图像处理方法示意图;FIG. 6 is a schematic diagram of another image processing method according to an embodiment of the present invention; FIG.
图7为本发明实施例中的一种图像处理装置的结构示意图。FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
具体实施方式 Detailed Description of the Embodiments
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完 整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,并不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。The technical solutions in the embodiments of the present invention are clearly and completely described in the following with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts are within the scope of the present invention.
本发明实施例中,终端,可以是向用户提供拍照和/或数据连通性的设备,具有无线连接功能的手持式设备、或连接到无线调制解调器的其他处理设备,比如:数码相机、单反相机、移动电话(或称为“蜂窝”电话),可以是便携式、袖珍式、手持式、可穿戴设备(如智能手表等)、平板电脑、个人电脑(PC,Personal Computer)、PDA(Personal Digital Assistant,个人数字助理)、POS(Point of Sales,销售终端)、车载电脑、无人机、航拍器等。In the embodiment of the present invention, the terminal may be a device that provides photographing and/or data connectivity to the user, a handheld device with a wireless connection function, or other processing device connected to the wireless modem, such as a digital camera, a SLR camera, Mobile phones (or "cellular" phones) can be portable, pocket-sized, handheld, wearable devices (such as smart watches, etc.), tablets, personal computers (PCs, Personal Computers), PDAs (Personal Digital Assistants, Personal digital assistant), POS (Point of Sales), on-board computer, drone, aerial camera, etc.
图1示出了终端100的一种可选的硬件结构示意图。FIG. 1 shows an alternative hardware structure diagram of the terminal 100.
参考图1所示,终端100可以包括射频单元110、存储器120、输入单元130、显示单元140、摄像头150、音频电路160、扬声器161、麦克风162、处理器170、外部接口180、电源190等部件,在本发明实施例中,所述摄像头150至少存在两个。Referring to FIG. 1, the terminal 100 may include a radio frequency unit 110, a memory 120, an input unit 130, a display unit 140, a camera 150, an audio circuit 160, a speaker 161, a microphone 162, a processor 170, an external interface 180, a power supply 190, and the like. In the embodiment of the present invention, the camera 150 has at least two.
摄像头150用于采集图像或视频,可以通过应用程序指令触发开启,实现拍照或者摄像功能。摄像头可以包括成像镜头,滤光片,图像传感器,对焦防抖马达等部件。物体发出或反射的光线进入成像镜头,通过滤光片,最终汇聚在图像传感器上。成像镜头主要是用于对拍照视角中的所有物体(也可称为待拍摄对象)发出或反射的光汇聚成像;滤光片主要是用于将光线中的多余光波(例如除可见光外的光波,如红外)滤去;图像传感器主要是用于对接收到的光信号进行光电转换,转换成电信号,并输入到处理170进行后续处理。The camera 150 is used for capturing images or videos, and can be triggered by an application instruction to realize a photographing or photographing function. The camera may include an imaging lens, a filter, an image sensor, a focus anti-shake motor, and the like. The light emitted or reflected by the object enters the imaging lens, passes through the filter, and finally converges on the image sensor. The imaging lens is mainly used for collecting and reflecting light emitted or reflected by all objects (also referred to as objects to be photographed) in the photographing angle of view; the filter is mainly used to remove unnecessary light waves in the light (for example, light waves other than visible light) The image sensor is mainly used for photoelectrically converting the received optical signal, converting it into an electrical signal, and inputting it to the processing 170 for subsequent processing.
本领域技术人员可以理解,图2仅仅是便携式多功能装置的举例,并不构成对便携式多功能装置的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件。It will be understood by those skilled in the art that FIG. 2 is merely an example of a portable multi-function device, and does not constitute a limitation of the portable multi-function device, and may include more or less components than those illustrated, or may combine some components, or different. Parts.
所述输入单元130可用于接收输入的数字或字符信息,以及产生与所述便携式多功能装置的用户设置以及功能控制有关的键信号输入。具体地,输入单元130可包括触摸屏131以及其他输入设备132。所述触摸屏131可收集用户在其上或附近的触摸操作(比如用户使用手指、关节、触笔等任何适合的物体在触摸屏上或在触摸屏附近的操作),并根据预先设定的程序驱动相应的连接装置。触摸屏可以检测用户对触摸屏的触摸动作,将所述触摸动作转换为触摸信号发送给所述处理器170,并能接收所述处理器170发来的命令并加以执行;所述触摸信号至少包括触点坐标信息。所述触摸屏131可以提供所述终端100和用户之间的输入界面和输出界面。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触摸屏。除了触摸屏131,输入单元130还可以包括其他输入设备。具体地,其他输入设备132可以包括但不限于物理键盘、功能键(比如音量控制按键132、开关按键133等)、轨迹球、鼠标、操作杆等中的一种或多种。The input unit 130 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the portable multifunction device. Specifically, the input unit 130 may include a touch screen 131 and other input devices 132. The touch screen 131 can collect touch operations on or near the user (such as the user's operation on the touch screen or near the touch screen using any suitable object such as a finger, a joint, a stylus, etc.), and drive the corresponding according to a preset program. Connection device. The touch screen can detect a user's touch action on the touch screen, convert the touch action into a touch signal and send the signal to the processor 170, and can receive and execute a command sent by the processor 170; the touch signal includes at least a touch Point coordinate information. The touch screen 131 can provide an input interface and an output interface between the terminal 100 and a user. In addition, touch screens can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves. In addition to the touch screen 131, the input unit 130 may also include other input devices. Specifically, other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control button 132, switch button 133, etc.), trackball, mouse, joystick, and the like.
所述显示单元140可用于显示由用户输入的信息或提供给用户的信息以及终端100的各种菜单。在本发明实施例中,显示单元还用于显示设备利用摄像头150获取到的图像,包括预览图像、拍摄的初始图像以及拍摄后经过一定算法处理后的目标图像。The display unit 140 can be used to display information input by a user or information provided to a user and various menus of the terminal 100. In the embodiment of the present invention, the display unit is further configured to display an image acquired by the device using the camera 150, including a preview image, an initial image captured, and a target image processed by a certain algorithm after the shooting.
进一步的,触摸屏131可覆盖显示面板141,当触摸屏131检测到在其上或附近的触摸操作后,传送给处理器170以确定触摸事件的类型,随后处理器170根据触摸事件的类型在显示面板141上提供相应的视觉输出。在本实施例中,触摸屏与显示单元可以集成为一个部件而实现终端100的输入、输出、显示功能;为便于描述,本发明实施例以触摸显示屏代表触摸屏和显示单元的功能集合;在某些实施例中,触摸屏与显示单元也可以作为两个独立的部件。Further, the touch screen 131 may cover the display panel 141. When the touch screen 131 detects a touch operation on or near it, the touch screen 131 transmits to the processor 170 to determine the type of the touch event, and then the processor 170 displays the panel according to the type of the touch event. A corresponding visual output is provided on 141. In this embodiment, the touch screen and the display unit can be integrated into one component to implement the input, output, and display functions of the terminal 100. For convenience of description, the touch display screen represents the function set of the touch screen and the display unit; In some embodiments, the touch screen and the display unit can also function as two separate components.
所述存储器120可用于存储指令和数据,存储器120可主要包括存储指令区和存储数据区,存储数据区可存储关节触摸手势与应用程序功能的关联关系;存储指令区可存储操作***、应用、至少一个功能所需的指令等软件单元,或者他们的子集、扩展集。还可以包括非易失性随机存储器;向处理器170提供包括管理计算处理设备中的硬件、软件以及数据资源,支持控制软件和应用。还用于多媒体文件的存储,以及运行程序和应用的存储。The memory 120 can be used to store instructions and data, the memory 120 can mainly include a storage instruction area and a storage data area, the storage data area can store an association relationship between the joint touch gesture and the application function; the storage instruction area can store an operating system, an application, Software units such as instructions required for at least one function, or their subsets, extension sets. A non-volatile random access memory can also be included; providing hardware, software, and data resources in the management computing device to the processor 170, supporting the control software and applications. Also used for the storage of multimedia files, as well as the storage of running programs and applications.
处理器170是终端100的控制中心,利用各种接口和线路连接整个手机的各个部分,通过运行或执行存储在存储器120内的指令以及调用存储在存储器120内的数据,执行终端100的各种功能和处理数据,从而对手机进行整体监控。可选的,处理器170可包括一个或多个处理单元;优选的,处理器170可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作***、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器170中。在一些实施例中,处理器、存储器、可以在单一芯片上实现,在一些实施例中,他们也可以在独立的芯片上分别实现。处理器170还可以用于产生相应的操作控制信号,发给计算处理设备相应的部件,读取以及处理软件中的数据,尤其是读取和处理存储器120中的数据和程序,以使其中的各个功能模块执行相应的功能,从而控制相应的部件按指令的要求进行动作。The processor 170 is a control center of the terminal 100, and connects various parts of the entire mobile phone by various interfaces and lines, and executes various kinds of the terminal 100 by operating or executing an instruction stored in the memory 120 and calling data stored in the memory 120. Function and process data to monitor the phone as a whole. Optionally, the processor 170 may include one or more processing units; preferably, the processor 170 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like. The modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 170. In some embodiments, the processors, memories, can be implemented on a single chip, and in some embodiments, they can also be implemented separately on separate chips. The processor 170 can also be configured to generate corresponding operational control signals, send to corresponding components of the computing processing device, read and process data in the software, and in particular read and process the data and programs in the memory 120 to enable Each function module performs the corresponding function, thereby controlling the corresponding component to act according to the requirements of the instruction.
所述射频单元110可用于收发信息或通话过程中信号的接收和发送,特别地,将基站的下行信息接收后,给处理器170处理;另外,将设计上行的数据发送给基站。通常,RF电路包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier,LNA)、双工器等。此外,射频单元110还可以通过无线通信与网络设备和其他设备通信。所述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯***(Global System of Mobile communication,GSM)、通用分组无线服务(General Packet Radio Service,GPRS)、码分多址(Code Division Multiple Access,CDMA)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、长期演进(Long Term Evolution,LTE)、电子邮件、短消息服务(Short Messaging Service,SMS)等。The radio frequency unit 110 can be used for receiving and transmitting signals during transmission and reception of information or during a call. Specifically, after receiving the downlink information of the base station, the processing is performed by the processor 170. In addition, the uplink data is designed to be sent to the base station. Generally, RF circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the radio unit 110 can also communicate with network devices and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (Code). Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), E-mail, Short Messaging Service (SMS), etc.
音频电路160、扬声器161、麦克风162可提供用户与终端100之间的音频接口。音频电路160可将接收到的音频数据转换后的电信号,传输到扬声器161,由扬声器161转换为声音信号输出;另一方面,麦克风162用于收集声音信号,还可以将收集的声音信号转换为电信号,由音频电路160接收后转换为音频数据,再将音频数据输出处理器170处理后,经射频单元110以发送给比如另一终端,或者将音频数据输出至存储器120以便进一步处理,音频电路也可以包括耳机插孔163,用于提供音频电 路和耳机之间的连接接口。The audio circuit 160, the speaker 161, and the microphone 162 can provide an audio interface between the user and the terminal 100. The audio circuit 160 can transmit the converted electrical data of the received audio data to the speaker 161 for conversion to the sound signal output by the speaker 161; on the other hand, the microphone 162 is used to collect the sound signal, and can also convert the collected sound signal. The electrical signal is received by the audio circuit 160 and converted into audio data, and then processed by the audio data output processor 170, transmitted to the terminal, for example, via the radio frequency unit 110, or outputted to the memory 120 for further processing. The audio circuit can also include a headphone jack 163 for providing a connection interface between the audio circuit and the earphone.
终端100还包括给各个部件供电的电源190(比如电池),优选的,电源可以通过电源管理***与处理器170逻辑相连,从而通过电源管理***实现管理充电、放电、以及功耗管理等功能。The terminal 100 also includes a power source 190 (such as a battery) for powering various components. Preferably, the power source can be logically coupled to the processor 170 through a power management system to manage functions such as charging, discharging, and power management through the power management system.
终端100还包括外部接口180,所述外部接口可以是标准的Micro USB接口,也可以使多针连接器,可以用于连接终端100与其他装置进行通信,也可以用于连接充电器为终端100充电。The terminal 100 further includes an external interface 180, which may be a standard Micro USB interface, or a multi-pin connector, which may be used to connect the terminal 100 to communicate with other devices, or may be used to connect the charger to the terminal 100. Charging.
尽管未示出,终端100还可以包括闪光灯、无线保真(wireless fidelity,WiFi)模块、蓝牙模块、各种传感器等,在此不再赘述。下文中描述的全部方法均可以应用在图1所示的终端中。Although not shown, the terminal 100 may further include a flash, a wireless fidelity (WiFi) module, a Bluetooth module, various sensors, and the like, and details are not described herein. All of the methods described below can be applied to the terminal shown in FIG. 1.
参阅图2所示,本发明实施例提供一种图像处理方法,具体处理方法流程包括如下步骤:Referring to FIG. 2, an embodiment of the present invention provides an image processing method. The specific processing method includes the following steps:
步骤31:获得N帧图像,N为大于2的正整数;Step 31: Obtain an N frame image, where N is a positive integer greater than 2;
步骤32:在所述N帧图像中确定一个参考图像,其余N-1帧图像为待处理图像;如N为20,第一帧图像为参考图像,其余的19帧图像为待处理图像,步骤33中的i可以是1-19中的任意一个;Step 32: Determine a reference image in the N frame image, and the remaining N-1 frame images are to be processed images; if N is 20, the first frame image is a reference image, and the remaining 19 frames are images to be processed. i in 33 may be any one of 1-19;
步骤33:根据所述N-1帧待处理图像得到N-1帧去鬼影图像;具体地,可以对N-1帧中的第i帧执行步骤s331-s334;其中i可以取遍不大于N-1的所有正整数,在一些实施例中,也可以只取其中的M帧待处理图像得到M帧去鬼影图像,M为小于N-1的正整数;仍以N-1进行说明,参见图3;Step 33: Obtain an N-1 frame de-ghost image according to the N-1 frame to be processed image; specifically, step s331-s334 may be performed on the ith frame in the N-1 frame; wherein i may be taken no more than For all positive integers of N-1, in some embodiments, only the M frames to be processed may be taken to obtain an M frame de ghost image, and M is a positive integer smaller than N-1; , see Figure 3;
s331:将第i帧图像与所述参考图像进行配准,得到第i配准图像;S331: register an ith frame image and the reference image to obtain an i-th registration image;
s332:根据第i配准图像与参考图像得到第i差异图像;S332: Obtain an ith difference image according to the i-th registration image and the reference image;
s333:根据第i差异图像得到第i鬼影权重图像;S333: obtaining an i-th ghost weight image according to the ith difference image;
s334:根据第i鬼影权重图像,将第i配准图像与参考图像进行融合,得到第i帧去鬼影图像;S334: merging the i-th registration image with the reference image according to the i-th ghost weight image to obtain an i-th frame de-ghost image;
步骤34:根据所述参考图像和所述N-1帧去鬼影图像得到第一目标图像;具体地,对参考图像和N-1帧去鬼影图像进行均值运算得到第一目标图像,均值运算也可以包含对平均值的一些修正,或者是绝对值的平均值等。Step 34: Obtain a first target image according to the reference image and the N-1 frame de-ghost image; specifically, performing a mean operation on the reference image and the N-1 frame de-ghost image to obtain a first target image, and an average value The operation can also include some corrections to the average, or an average of the absolute values, and the like.
如果终端的相机直接处于抓拍模式,那么步骤31在当前参数设置下,接收到拍摄指令,连续拍摄N张图片即可,并可以作为下文中第一抓拍模式、第二抓拍模式、第三抓拍模式中步骤31的替换方式。具体地,用户根据自己的选择直接进入抓拍模式,如下文中会提到的第一抓拍模式或者第二抓拍模式或者第三抓拍模式;这时,终端无需对取景环境进行检测,因为每一个抓拍模式都会有个预先设定的参数规则(预先存储在终端本地或者云端服务器),即每一个抓拍模式都会有对应的感光度和曝光时长,当然还可以包括有其他的性能参数等;一旦进入到特定的抓拍模式,相机会自动地调整到对应的感光度和对应的曝光时长进行拍摄。因此如果用户直接采用了抓拍模式,则所述获取N张图片就会采用该抓拍模式对应的感光度和对应的曝光时长拍摄N张图片,以进行相应模式的后续图像处理。If the camera of the terminal is directly in the capture mode, step 31 receives the shooting instruction under the current parameter setting, and continuously takes N pictures, and can be used as the first capture mode, the second capture mode, and the third capture mode. An alternative to step 31 in the middle. Specifically, the user directly enters the capture mode according to his own choice, as the first capture mode or the second capture mode or the third capture mode mentioned in the following; at this time, the terminal does not need to detect the framing environment, because each capture mode There will be a preset parameter rule (pre-stored in the terminal local or cloud server), that is, each capture mode will have a corresponding sensitivity and exposure duration, of course, may include other performance parameters, etc.; The capture mode, the camera will automatically adjust to the corresponding sensitivity and the corresponding exposure time to shoot. Therefore, if the user directly adopts the capture mode, the N pictures are taken, and the N pictures are taken by the sensitivity corresponding to the capture mode and the corresponding exposure time to perform subsequent image processing in the corresponding mode.
如果终端的相机处于自动模式或是智能模式,这时相机需要进行对取景环境进行 检测,如果检测到相机的取景图像为运动图像;且检测到相机的当前曝光时长超过安全时长;且检测到取景环境为极高亮环境下;则采用本发明中提出的第一种抓拍模式。如果检测到相机的取景图像为运动图像;且检测到相机的当前曝光时长超过安全时长;且检测到取景环境为中等高亮环境下;则采用本发明中提出的第二种抓拍模式或者第三种抓拍模式。如果以上任意情景都没有检测到,则可以采用任意一种终端支持的拍照模式进行拍摄。一种具体的拍照流程可以参见图4。If the camera of the terminal is in the automatic mode or the smart mode, the camera needs to detect the framing environment, if the framing image of the camera is detected as a moving image; and the current exposure time of the camera is detected to exceed the safe time; and the framing is detected; The environment is extremely bright; the first capture mode proposed in the present invention is adopted. If the framing image of the camera is detected as a moving image; and the current exposure time of the camera is detected to exceed the safe duration; and the framing environment is detected to be in a medium-high brightness environment, the second capture mode or the third proposed in the present invention is adopted. Capture mode. If none of the above scenarios are detected, you can use any of the camera-supported camera modes to shoot. A specific photographing process can be seen in Figure 4.
其中,本文中的“相机”泛指终端设备中能够完成拍照功能的***,包括摄像头、以及必要的处理模块和存储模块,以完成图像的获取、传输,还可以包含一些处理功能模块。The "camera" in this document generally refers to a system capable of performing a photographing function in a terminal device, including a camera, and a necessary processing module and a storage module to complete image acquisition and transmission, and may also include some processing function modules.
其中,“当前的曝光时长”、“当前的感光度”分别是指相机在初始条件下预览取景图像的数据流时所对应的曝光时长和感光度。通常与相机自身属性以及初始设置有关。在一种可能的设计中,如终端没有对相机的取景环境进行检测,或者对取景环境进行检测却检测不到以下三者中的任意一种情形时,相机预览取景图像数据流时所对应的曝光时长和感光度也属于“当前的曝光时长”、“当前的感光度”。情形1:相机的取景图像为运动图像;情形2:检测到相机的当前曝光时长超过安全时长;情形3::检测到取景环境为极高亮环境或中度高亮环境。The “current exposure duration” and “current sensitivity” respectively refer to the exposure duration and sensitivity corresponding to the preview of the data stream of the framing image under initial conditions. Usually related to the camera's own properties and initial settings. In a possible design, if the terminal does not detect the camera's framing environment, or detects the framing environment but does not detect any of the following three situations, the camera previews the framing image data stream corresponding to The exposure duration and sensitivity are also "current exposure duration" and "current sensitivity". Case 1: The view image of the camera is a moving image; Case 2: The current exposure time of the camera is detected to exceed the safe time; Case 3: The framing environment is detected as a very bright environment or a moderately bright environment.
其中,检测取景图像为运动图像的方式有很多种,例如对预览数据流进行运动检测,分析拍照预览流,每隔x帧(间隔帧数x可调,x为正整数)检测一次,每次检测时对比当前检测帧图像与上一次检测帧图像之间的差异。具体可以将两个图像采用相同的划分方式分别分成若干区域,如每个图像64个区域,若出现一个或一个以上区域存在较大差异,则视为运动场景。Among them, there are many ways to detect the framing image as a moving image, for example, performing motion detection on the preview data stream, analyzing the photo preview stream, and detecting each time x frames (the number of interval frames x is adjustable, x is a positive integer), each time The difference between the current detected frame image and the last detected frame image is compared at the time of detection. Specifically, the two images may be divided into several regions by the same division manner, for example, 64 regions per image, and if there is a large difference between one or more regions, it is regarded as a motion scene.
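A minimal sketch of this block-wise motion check, assuming an 8×8 grid (64 regions), grayscale comparison, and an illustrative per-block threshold:

```python
import cv2
import numpy as np

def is_motion_scene(prev_frame, cur_frame, grid: int = 8,
                    block_thresh: float = 12.0) -> bool:
    """Flag a motion scene if any of grid*grid regions differs markedly
    between two preview frames (mean absolute gray-level difference)."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    cur_gray = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    h, w = prev_gray.shape
    bh, bw = h // grid, w // grid
    for r in range(grid):
        for c in range(grid):
            p = prev_gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            q = cur_gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            if np.mean(np.abs(p - q)) > block_thresh:
                return True
    return False
```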
其中,当前曝光时间和安全快门是可以通过获取相机参数来获取的。安全快门是终端相机的一种属性。一般说来,当前曝光时间大于安全快门才会考虑采用抓拍模式。Among them, the current exposure time and the safety shutter can be obtained by acquiring camera parameters. A secure shutter is a property of a terminal camera. In general, the current exposure time is greater than the safety shutter will consider the capture mode.
其中,检测拍摄环境的光照强度很有必要,环境亮度越高,图像的清晰度也相对越高,需要后续的处理就会越简单。极高亮场景定义:预览图的感光度(记为ISO)和曝光时长(记为expo)小于阈值,即ISO<iso_th1,且expo<expo_th1。预览图的ISO和expo可以通过获取相机参数来获取,iso_th1和expo_th1可根据用户具体需求决定;中高亮场景定义:iso_th1≤ISO<iso_th2,且expo_th1≤expo<expo_th2,同理iso_th2和expo_th2也可根据用户具体需求决定;低亮场景定义:iso_th2≤ISO且expo_th2≤expo;应理解,这些区间的划分是由用户需求确定的,这些取值区间之间允许存在不连续或者重合的情况。Among them, it is necessary to detect the illumination intensity of the shooting environment. The higher the ambient brightness, the higher the resolution of the image, and the easier it is to perform subsequent processing. Extremely bright scene definition: The sensitivity of the preview (marked as ISO) and the exposure duration (denoted as expo) are less than the threshold, ie ISO<iso_th1, and expo<expo_th1. The ISO and expo of the preview image can be obtained by obtaining the camera parameters. The iso_th1 and expo_th1 can be determined according to the specific needs of the user; the medium highlight scene definition: iso_th1≤ISO<iso_th2, and expo_th1≤expo<expo_th2, the same iso_th2 and expo_th2 can also be based on The user's specific needs are determined; the low-light scene definition: iso_th2 ≤ ISO and expo_th2 ≤ expo; it should be understood that the division of these intervals is determined by the user's needs, and there are cases where discontinuities or coincidences are allowed between these value intervals.
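The brightness classification above maps directly to two comparisons on the preview ISO and exposure duration. The threshold values below are placeholders, since the text leaves iso_th1, iso_th2, expo_th1 and expo_th2 to the implementer's requirements:

```python
def classify_brightness(iso: float, expo: float,
                        iso_th1: float = 200, iso_th2: float = 800,
                        expo_th1: float = 0.01, expo_th2: float = 0.05) -> str:
    """Classify the preview scene from ISO and exposure duration (seconds)."""
    if iso < iso_th1 and expo < expo_th1:
        return "extremely_bright"   # first capture mode
    if iso_th1 <= iso < iso_th2 and expo_th1 <= expo < expo_th2:
        return "medium_bright"      # second or third capture mode
    if iso >= iso_th2 and expo >= expo_th2:
        return "low_light"
    return "unclassified"           # the intervals need not cover every case
```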
下面针对第一抓拍模式、第二抓拍模式和第三抓拍模式进行详细说明。The first capture mode, the second capture mode, and the third capture mode are described in detail below.
第一抓拍模式First capture mode
抓拍模式一的流程图可参见图2和图3。A flow chart of the capture mode one can be seen in Figures 2 and 3.
步骤31具体为:获取相机当前的感光度和当前曝光时长等参数,保持相机当前的感光度和曝光时长的乘积不变,按照预设比例降低曝光时长并提高感光度,得到第一曝光时长和第一感光度,如第一曝光时长为原曝光时长的1/2或1/4,而第一感光度相应为原感光度的2倍或4倍,具体的比例可以根据用户的需求或者设定规则进行调整;将相机的曝光时长和感光度分别设置为所述第一曝光时长和所述第一感光度,拍 摄N帧图像。下面的步骤是对这N帧进行降噪处理。 Step 31 is specifically: obtaining parameters such as the current sensitivity of the camera and the current exposure duration, keeping the product of the current sensitivity of the camera and the exposure duration constant, decreasing the exposure duration according to the preset ratio, and increasing the sensitivity to obtain the first exposure duration and The first sensitivity, such as the first exposure time is 1/2 or 1/4 of the original exposure time, and the first sensitivity is corresponding to 2 or 4 times the original sensitivity, and the specific ratio may be set according to the user's needs or The rule is adjusted; the exposure time and the sensitivity of the camera are set to the first exposure time length and the first sensitivity, respectively, and N frames of images are taken. The following steps are to perform noise reduction on the N frames.
一种可能的设计方式中,拍照的动作可以由用户按下快门键进行触发。一种情形下,用户按下快门之前,相机当前的感光度和当前曝光时长就已经被调整设置为第一曝光时长和第一感光度,用户按下快门时,便以第一曝光时长和第一感光度拍摄N张图片进行后续处理;另一种情形下,用户按下快门之前,相机依旧保持所述当前的感光度和当前曝光时长,当用户按下快门时,相机当前的感光度和当前曝光时长被调整设置为第一曝光时长和第一感光度,并以第一曝光时长和第一感光度拍摄N张图片进行后续处理。此外,预览图像数据流中即可以以所述当前的感光度和当前曝光时长的状态进行显示图像,也可以以所述第一曝光时长和第一感光度的状态进行显示图像。In one possible design, the action of taking a picture can be triggered by the user pressing the shutter button. In one case, before the user presses the shutter, the current sensitivity of the camera and the current exposure duration have been adjusted to the first exposure duration and the first sensitivity. When the user presses the shutter, the first exposure duration and the first exposure time N photos are taken at a sensitivity for subsequent processing; in another case, the camera still maintains the current sensitivity and the current exposure time before the user presses the shutter, and the current sensitivity of the camera when the user presses the shutter. The current exposure duration is adjusted to be set to the first exposure duration and the first sensitivity, and N pictures are taken for the first exposure duration and the first sensitivity for subsequent processing. Further, in the preview image data stream, the image may be displayed in the state of the current sensitivity and the current exposure time, or the image may be displayed in the state of the first exposure time and the first sensitivity.
步骤32具体为:在所述N帧图像中确定一个参考图像,其余N-1帧图像为待处理图像。例如,取这N帧图像中的第一帧图像或者中间某一帧图像作为参考图像。后面的步骤以第一帧图像为例进行说明。 Step 32 is specifically: determining one reference image in the N frame image, and the remaining N-1 frame images are to be processed images. For example, the first frame image or the middle frame image of the N frame images is taken as a reference image. The subsequent steps are described by taking the first frame image as an example.
步骤33具体为:根据所述N-1帧待处理图像得到N-1帧去鬼影图像。这一步骤里又可以细分为很多子步骤。可以对其余N-1帧中的第i帧执行步骤s331-s334;其中i可以取遍不大于N-1的所有正整数,在具体实现过程中,也可以取局部帧得到局部帧的去鬼影图像,为了方便说明,本实施例中以N-1帧中的全部帧得到去鬼影图像进行说明。 Step 33 is specifically: obtaining an N-1 frame de-ghost image according to the image to be processed of the N-1 frame. This step can be subdivided into many substeps. Step s331-s334 may be performed on the ith frame in the remaining N-1 frames; wherein i may take all positive integers that are not greater than N-1, and in the specific implementation process, the local frame may also be taken to obtain the local frame degugos. For the sake of convenience of explanation, in the present embodiment, the de-ghost image is obtained by all the frames in the N-1 frame.
s331具体为:将第i帧图像与参考图像进行配准,得到第i配准图像。具体的配准方式可以为:(1)对第i帧图像以及参考图像分别按照同样的方式进行特征提取,得到一系列的特征点,并对每个特征点进行特征描述;(2)将第i帧图像与参考图像的特征点进行匹配;得到一系列特征点对,并用ransac算法(现有技术)进行坏点剔除;(3)在匹配得到的特征点对中求解得到两幅图像的变换矩阵(homography矩阵或affine矩阵等),根据变换矩阵,将第i帧图像与参考图像进行配准对齐,得到第i帧的配准图。此步骤现阶段已有成熟的开源算法可以用来调用,故在此不详细展开。S331 is specifically: registering the ith frame image with the reference image to obtain an i-th registration image. The specific registration method may be: (1) performing feature extraction on the i-th frame image and the reference image in the same manner, obtaining a series of feature points, and characterizing each feature point; (2) The i-frame image is matched with the feature points of the reference image; a series of feature point pairs are obtained, and the ransac algorithm (prior art) is used for bad point culling; (3) the two image images are obtained by solving the matched feature point pairs. A matrix (homography matrix, affine matrix, etc.), according to the transformation matrix, the ith frame image is aligned with the reference image to obtain a registration map of the ith frame. At this stage, mature open source algorithms can be used to call this step, so it will not be expanded in detail here.
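Since s331 only requires some feature detector, descriptor matching with RANSAC outlier rejection, and a homography/affine warp, a minimal OpenCV sketch (ORB chosen here as an assumption, not the patent's prescribed detector) could be:

```python
import cv2
import numpy as np

def register_to_reference(frame, reference, max_features: int = 2000):
    """Warp `frame` onto `reference` with ORB features + RANSAC homography."""
    g1 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(max_features)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)

    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = reference.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```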
s332 is specifically: obtaining the i-th difference image from the i-th registered image and the reference image. Specifically, the i-th registration map and the reference image are differenced pixel by pixel, and taking the absolute value of each difference gives the difference map of the two images.
s333 is specifically: obtaining the i-th ghost weight image from the i-th difference image. Specifically, pixels of the difference map that exceed a preset threshold are set to M (e.g., 255), pixels that do not exceed the threshold are set to N (e.g., 0), and Gaussian smoothing is applied to the re-assigned difference map to obtain the i-th ghost weight image.
s334 is specifically: fusing the i-th registered image with the reference image according to the i-th ghost weight image to obtain the i-th de-ghosted frame. Specifically, the i-th registration map (image_i in the formula below) is fused pixel by pixel with the reference image (image_1 in the formula below) according to the ghost weight image (ghost_mask in the formula below), giving the i-th de-ghosted frame (no_ghost_mask). The fusion formula is as follows, where m, n denote pixel coordinates:
[Fusion formula reproduced in the published application only as image PCTCN2018109951-appb-000001.]
Step 34 is specifically: performing an averaging operation on the reference image and the N-1 de-ghosted frames to obtain the first target image, for example with a standard pixel-averaging algorithm. The first target image is the final image obtained when the terminal executes this photographing mode. A sketch covering steps s332-s334 together with this averaging step follows.
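The following is a minimal sketch, for single-channel 8-bit frames, of the difference image (s332), ghost weight image (s333), mask-weighted fusion (s334), and averaging (step 34). The threshold value and the particular blend used for the fusion are assumptions: the embodiment's exact fusion formula is published only as the image placeholder above and is not reproduced here.

    import cv2
    import numpy as np

    def deghost_and_average(reference, registered_frames, thresh=10, M=255, N=0):
        ref = reference.astype(np.float32)
        accum = ref.copy()
        for reg in registered_frames:
            reg = reg.astype(np.float32)
            diff = np.abs(reg - ref)                         # s332: difference image
            mask = np.where(diff > thresh, M, N).astype(np.float32)
            mask = cv2.GaussianBlur(mask, (5, 5), 0)         # s333: ghost weight image
            w = mask / float(M)                              # near 1 where ghosting is likely
            # s334: assumed blend - keep the reference where ghosting is detected,
            # keep the registered frame elsewhere
            no_ghost = w * ref + (1.0 - w) * reg
            accum += no_ghost
        # step 34: average the reference image with the N-1 de-ghosted frames
        return (accum / (1 + len(registered_frames))).astype(np.uint8)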
Second capture mode
The second capture mode is more complex than the first capture mode; some of its steps are the same as those of capture mode 1. A flowchart of the second capture mode is shown in FIG. 5.
Step 41: capture one frame of a first new image at the camera's current sensitivity and current exposure duration; reduce the exposure duration and increase the sensitivity by a preset ratio to obtain a second exposure duration and a second sensitivity; set the camera's exposure duration and sensitivity to the second exposure duration and the second sensitivity, respectively, and capture N frames of images.
In one possible design, the photographing action can be triggered by the user pressing the shutter button. In one case, when the user presses the shutter, one frame of the first new image is obtained at the current sensitivity and current exposure duration, the current sensitivity and current exposure duration are then adjusted to the second exposure duration and second sensitivity, and N images are captured under those settings, giving N+1 images in total for subsequent processing. In another case, when the user presses the shutter, the current sensitivity and current exposure duration are first set to the second exposure duration and second sensitivity and N images are captured under those settings, after which the camera returns to the current sensitivity and current exposure duration and one frame of the first new image is obtained, again giving N+1 images in total for subsequent processing. In addition, the preview image data stream may display images either at the current sensitivity and current exposure duration or at the second exposure duration and second sensitivity.
Step 42: apply the first-capture-mode scheme (steps 31-34) to the N frames obtained in the previous step to obtain the first target image. It should be understood that the second sensitivity, the second exposure duration, and some of the adjustable thresholds mentioned above may change as the scene changes.
Step 43: obtain a second target image from the first target image and the first new image. In a specific implementation, this may include, but is not limited to, the following two implementations:
Step 43, implementation (1):
s4311: register the first new image with the reference image (the same reference image selected when obtaining the first target image) to obtain a first registered image;
s4312: obtain a first difference image from the first registration map and the first target image;
s4313: obtain a first ghost weight image from the first difference image;
s4314: fuse the first registered image with the first target image according to the first ghost weight image to obtain a first de-ghosted image;
s4315: perform weighted fusion of pixel values of the first de-ghosted image and the first target image to obtain the second target image. Specifically, this may be implemented in four ways: time-domain fusion s4315(1), time-domain fusion s4315(3), frequency-domain fusion s4315(2), and frequency-domain fusion s4315(4).
Time-domain fusion s4315(1): apply guided filtering to the first target image and to the first de-ghosted image separately to filter out the short-frame information (an existing mature algorithm), and denote the results fusion_gf and noghost_gf. Fuse fusion_gf and noghost_gf by weighting their pixel values. The specific fusion formula is as follows:
[Fusion and weight formulas reproduced in the published application only as images PCTCN2018109951-appb-000002 and PCTCN2018109951-appb-000003.]
Here v is the calibrated noise level corresponding to the current ISO setting (a constant), and W is a weight value in the range [0, 1).
Target detail is then added back to the fused image pixel by pixel: for each pixel, the target detail is the larger of the detail values that the guided filtering removed from the first target image and from the first de-ghosted image at that pixel. This increases image detail and yields the second target image.
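A sketch of the s4315(1) time-domain fusion is given below, assuming single-channel inputs and the guidedFilter implementation from opencv-contrib (cv2.ximgproc). The weight rule that compares the base-layer difference against the calibrated noise v is an assumption, since the published weight formula appears only as an image; the detail add-back follows the description above.

    import cv2
    import numpy as np

    def temporal_fusion(fusion_img, noghost_img, v=4.0, W=0.8, radius=8, eps=100.0):
        # requires opencv-contrib-python for cv2.ximgproc.guidedFilter
        f = fusion_img.astype(np.float32)
        n = noghost_img.astype(np.float32)
        # self-guided filtering removes fine detail, leaving smooth base layers
        f_gf = cv2.ximgproc.guidedFilter(f, f, radius, eps)
        n_gf = cv2.ximgproc.guidedFilter(n, n, radius, eps)
        # assumed weight rule: where the base layers differ by more than the
        # calibrated noise v for the current ISO, trust the de-ghosted frame
        # with weight W in [0, 1)
        w = np.where(np.abs(n_gf - f_gf) > v, W, 1.0 - W)
        fused = w * n_gf + (1.0 - w) * f_gf
        # add back, per pixel, the larger of the two filtered-out detail layers
        d_f, d_n = f - f_gf, n - n_gf
        detail = np.where(np.abs(d_f) >= np.abs(d_n), d_f, d_n)
        return np.clip(fused + detail, 0, 255).astype(np.uint8)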
Time-domain fusion s4315(3): downsample the first target image (denoted fusion) and the first de-ghosted image (denoted noghost) by a factor of 2 in both width and height to obtain their downsampled maps, denoted fusionx4 and noghostx4. Upsample fusionx4 and noghostx4 by a factor of 2 in both width and height to obtain two images of the same size as before downsampling, denoted fusion' and noghost'. Compute the pixel-wise difference between fusion and fusion' to obtain the sampling-error map of the first target image, denoted fusion_se; compute the pixel-wise difference between noghost and noghost' to obtain the sampling-error map of the first de-ghosted image, denoted noghost_se. Apply guided filtering (an existing mature algorithm) to fusionx4 and noghostx4 to obtain two filtered images, denoted fusion_gf and noghost_gf. Fuse fusion_gf and noghost_gf by weighting their pixel values to obtain a fused image, denoted Fusion; the specific fusion formula is the same as in s4315(1). Add back to the fused image, point by point, the detail that the guided filtering removed from the first target image, then upsample the result by a factor of 2 in both width and height, denoted FusionUp. Take the larger value, point by point, of the two sampling-error maps fusion_se and noghost_se and add it point by point to FusionUp to increase image detail, obtaining the second target image.
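Continuing the sketch above (and reusing its temporal_fusion function), a simplified version of s4315(3) might look as follows. The sampling-error maps and the point-wise maximum add-back follow the description; the separate add-back of the first target image's filtered-out detail before upsampling is folded into temporal_fusion here, which is a simplification.

    import cv2
    import numpy as np

    def sampling_error(img):
        # downsample by 2 in width and height, upsample back, keep what was lost
        h, w = img.shape[:2]
        small = cv2.resize(img, (w // 2, h // 2), interpolation=cv2.INTER_AREA)
        up = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)
        return small, img.astype(np.float32) - up.astype(np.float32)

    def temporal_fusion_downsampled(fusion_img, noghost_img):
        f_small, fusion_se = sampling_error(fusion_img)
        n_small, noghost_se = sampling_error(noghost_img)
        fused_small = temporal_fusion(f_small, n_small)   # s4315(1) sketch above
        h, w = fusion_img.shape[:2]
        fused_up = cv2.resize(fused_small, (w, h),
                              interpolation=cv2.INTER_LINEAR).astype(np.float32)
        # add back, point by point, the larger of the two sampling-error maps
        detail = np.where(np.abs(fusion_se) >= np.abs(noghost_se),
                          fusion_se, noghost_se)
        return np.clip(fused_up + detail, 0, 255).astype(np.uint8)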
Frequency-domain fusion s4315(2): apply guided filtering (an existing mature algorithm) to the first target image and the first de-ghosted image separately; apply a Fourier transform to each filtered image and compute the corresponding magnitudes; using the magnitude ratio as the weight, fuse the Fourier spectra of the two images, with a fusion formula similar to that of the time-domain fusion. Apply an inverse Fourier transform to the fused spectrum to obtain a fused image. Target detail is then added back to the fused image pixel by pixel: for each pixel, the target detail is the larger of the detail values that the guided filtering removed from the first target image and from the first de-ghosted image at that pixel. This increases image detail and yields the second target image.
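A sketch of the spectrum fusion in s4315(2), assuming the two guided-filtered images fusion_gf and noghost_gf have already been computed as above. The magnitude-ratio weight w is an assumed concrete form of the "magnitude ratio as weight" rule (the embodiment only says the formula is similar to the time-domain fusion), and the detail add-back of the last sentence is omitted for brevity.

    import numpy as np

    def frequency_fusion(fusion_gf, noghost_gf):
        # Fourier transforms of the two filtered images and their magnitudes
        F = np.fft.fft2(fusion_gf.astype(np.float32))
        G = np.fft.fft2(noghost_gf.astype(np.float32))
        mag_f, mag_g = np.abs(F), np.abs(G)
        # assumed weighting by magnitude ratio
        w = mag_g / (mag_f + mag_g + 1e-6)
        fused_spectrum = w * G + (1.0 - w) * F
        # inverse transform back to a fused image
        return np.real(np.fft.ifft2(fused_spectrum))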
Frequency-domain fusion s4315(4): downsample the first target image (denoted fusion) and the first de-ghosted image (denoted noghost) by a factor of 2 in both width and height to obtain their downsampled maps, denoted fusionx4 and noghostx4. Upsample fusionx4 and noghostx4 by a factor of 2 in both width and height to obtain two images of the same size as before downsampling, denoted fusion' and noghost'. Compute the pixel-wise difference between fusion and fusion' to obtain the sampling-error map of the first target image, denoted fusion_se; compute the pixel-wise difference between noghost and noghost' to obtain the sampling-error map of the first de-ghosted image, denoted noghost_se. Apply guided filtering (an existing mature algorithm) to fusionx4 and noghostx4, denoting the results fusion_gf and noghost_gf. Apply a Fourier transform to each filtered image and compute the corresponding magnitudes; using the magnitude ratio as the weight, fuse the Fourier spectra of the two images, with a fusion formula similar to that of the time-domain fusion. Apply an inverse Fourier transform to the fused spectrum to obtain a fused image. Add back to the fused image, pixel by pixel, the detail that the guided filtering removed from the first target image, then upsample the result by a factor of 2 in both width and height, denoted FusionUp. Take the larger value, point by point, of the two sampling-error maps fusion_se and noghost_se and add it point by point to FusionUp to increase image detail, obtaining the second target image.
The specific algorithms of s4311-s4314 are the same as those of s331-s334, with only the input images replaced, and are therefore not repeated here.
Step 43, implementation (2):
s4321: register the first new image with the first target image to obtain a second registered image;
s4322: obtain a second difference image from the second registration map and the first target image;
s4323: obtain a second ghost weight image from the second difference image;
s4324: fuse the second registered image with the first target image according to the second ghost weight image to obtain a second de-ghosted image;
s4325: perform weighted fusion of pixel values of the second de-ghosted image and the first target image to obtain the second target image. Specifically, this may be implemented with time-domain fusion or frequency-domain fusion, referring to any one of s4315(1), s4315(3), s4315(2), and s4315(4) above; since the algorithms are the same and only the input images are replaced, they are not repeated here.
Third capture mode
The third capture mode is more complex than the first capture mode and can to some extent be regarded as an alternative to the second capture mode; the second and third capture modes are commonly used in moderately bright scenes. A flowchart of the third capture mode is shown in FIG. 6.
Step 51: capture one frame of a second new image at the camera's current sensitivity and current exposure duration; keep the camera's current sensitivity unchanged, set the current exposure duration to a lower, third exposure duration, and capture N frames of images.
In one possible design, the photographing action can be triggered by the user pressing the shutter button. In one case, when the user presses the shutter, one frame of the second new image is obtained at the current sensitivity and current exposure duration; the camera's current sensitivity is then kept unchanged, the current exposure duration is set to the lower third exposure duration, and N images are captured under those settings, giving N+1 images in total for subsequent processing. In another case, when the user presses the shutter, the current exposure duration is first set to the lower third exposure duration and N images are captured under that setting, after which the camera returns to the current exposure duration at the current sensitivity and one frame of the second new image is obtained, again giving N+1 images in total for subsequent processing. In addition, the preview image data stream may display images either at the current sensitivity and current exposure duration or at the third exposure duration and the current sensitivity.
Step 52: apply the first-capture-mode scheme (steps 31-34) to the N frames obtained in the previous step to obtain the first target image. It should be understood that the third exposure duration and some of the adjustable thresholds may change as the scene changes.
Step 53: obtain a third target image from the first target image and the second new image. In a specific implementation, this may include, but is not limited to, the following two implementations:
Step 53, implementation (1):
s5311: perform brightness correction on the first target image according to the second new image to obtain a fourth target image. Specifically, compute the histograms of the second new image and of the first target image, and from them the cumulative histograms; map the cumulative histogram of the first target image onto the cumulative histogram of the second new image to obtain a mapping curve; smooth the mapping curve to suppress bumps or dips with large slope; and raise the brightness of the first target image according to the mapping curve. The brightness-correction algorithm is an existing mature algorithm and is not described in detail.
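A minimal sketch of the histogram-based brightness correction in s5311 for 8-bit single-channel images. Using np.searchsorted to realize the cumulative-histogram mapping is an assumed implementation choice, and the smoothing of the mapping curve described above is only noted in a comment.

    import numpy as np

    def match_brightness(first_target, second_new):
        # histograms and cumulative histograms of both images
        hist_t, _ = np.histogram(first_target.ravel(), 256, (0, 256))
        hist_n, _ = np.histogram(second_new.ravel(), 256, (0, 256))
        cdf_t = np.cumsum(hist_t) / first_target.size
        cdf_n = np.cumsum(hist_n) / second_new.size
        # map each tone of the first target image to the tone of the second new
        # image with the closest cumulative-histogram value
        mapping = np.searchsorted(cdf_n, cdf_t).clip(0, 255).astype(np.uint8)
        # the embodiment additionally smooths this curve to suppress steep
        # bumps or dips before applying it
        return mapping[first_target]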
s5312: register the second new image with the reference image (the same reference image selected when obtaining the first target image) to obtain a third registered image;
s5313: obtain a third difference image from the third registration map and the fourth target image;
s5314: obtain a third ghost weight image from the third difference image;
s5315: fuse the third registered image with the fourth target image according to the third ghost weight image to obtain a third de-ghosted image;
s5316: perform weighted fusion of pixel values of the third de-ghosted image and the fourth target image to obtain a fifth target image; for the fusion algorithm, see any one of time-domain fusion s4315(1), s4315(3) and frequency-domain fusion s4315(2), s4315(4);
s5317: perform pyramid fusion on the fifth target image and the first target image to obtain the third target image. Specifically, build the Laplacian pyramids of the fifth target image and of the first target image, build a weight map for the image fusion, normalize and smooth the weight map, build a Gaussian pyramid from the normalized and smoothed weight map, and, according to the weight settings of each pyramid level, fuse the pyramids of all images at the corresponding levels to obtain a composite pyramid; starting from the top level of the Laplacian pyramid, reconstruct the composite pyramid by the inverse of the pyramid-generation process, adding back each level of information one by one to recover the fused image. Pyramid fusion is an existing mature algorithm and is not described in detail.
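A sketch of the pyramid fusion in s5317 for single-channel images, assuming a weight map weight_a in [0, 1] that has already been normalized and smoothed as described; the number of pyramid levels is an illustrative choice.

    import cv2
    import numpy as np

    def pyramid_fusion(img_a, img_b, weight_a, levels=4):
        def gaussian_pyr(img):
            pyr = [img.astype(np.float32)]
            for _ in range(levels):
                pyr.append(cv2.pyrDown(pyr[-1]))
            return pyr

        def laplacian_pyr(img):
            g = gaussian_pyr(img)
            lap = [g[i] - cv2.pyrUp(g[i + 1], dstsize=(g[i].shape[1], g[i].shape[0]))
                   for i in range(levels)]
            lap.append(g[-1])          # coarsest level kept as-is
            return lap

        # Laplacian pyramids of the two inputs, Gaussian pyramid of the weight map
        la, lb = laplacian_pyr(img_a), laplacian_pyr(img_b)
        gw = gaussian_pyr(weight_a)
        blended = [w * a + (1.0 - w) * b for a, b, w in zip(la, lb, gw)]
        # reconstruct from the top (coarsest) level down, adding back each level
        out = blended[-1]
        for level in reversed(blended[:-1]):
            out = cv2.pyrUp(out, dstsize=(level.shape[1], level.shape[0])) + level
        return np.clip(out, 0, 255).astype(np.uint8)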
In addition, for the algorithms involved in s5312-s5316, reference may be made correspondingly to s4311-s4315; they are not repeated here.
Step 53, implementation (2):
s5321: perform brightness correction on the first target image according to the second new image to obtain a fourth target image. Specifically, compute the histograms of the second new image and of the first target image, and from them the cumulative histograms; map the cumulative histogram of the first target image onto the cumulative histogram of the second new image to obtain a mapping curve; smooth the mapping curve to suppress bumps or dips with large slope; and raise the brightness of the first target image according to the mapping curve. The brightness-correction algorithm is an existing mature algorithm and is not described in detail.
s5322: register the second new image with the reference image (the same reference image selected when obtaining the first target image) to obtain a third registered image;
s5323: obtain a third difference image from the third registration map and the fourth target image;
s5324: obtain a third ghost weight image from the third difference image;
s5325: fuse the third registered image with the fourth target image according to the third ghost weight image to obtain a third de-ghosted image;
s5326: perform weighted fusion of pixel values of the third de-ghosted image and the fourth target image to obtain a fifth target image; for the fusion algorithm, see one of time-domain fusion s4315(1), s4315(3) and frequency-domain fusion s4315(2), s4315(4);
s5327: perform pyramid fusion on the fifth target image and the first target image to obtain the third target image. Specifically, build the Laplacian pyramids of the fifth target image and of the first target image, build a weight map for the image fusion, normalize and smooth the weight map, build a Gaussian pyramid from the normalized and smoothed weight map, and, according to the weight settings of each pyramid level, fuse the pyramids of all images at the corresponding levels to obtain a composite pyramid; starting from the top level of the Laplacian pyramid, reconstruct the composite pyramid by the inverse of the pyramid-generation process, adding back each level of information one by one to recover the fused image. Pyramid fusion is an existing mature algorithm and is not described in detail.
In addition, for the algorithms involved in s5322-s5326, reference may be made correspondingly to s4321-s4325; they are not repeated here.
The present invention provides an image processing method that can provide a capture mode for a camera. With this method, a user can capture clear images in different scenes, satisfying the desire to snap a shot and record life scenes anytime and anywhere, which greatly improves the user experience.
Based on the image processing method provided by the foregoing embodiments, an embodiment of the present invention provides an image processing apparatus 700. The apparatus 700 can be applied to various types of photographing devices. As shown in FIG. 7, the apparatus 700 includes an acquisition module 701, a determining module 702, a de-ghosting module 703, and an averaging module 704, where:
the acquisition module 701 is configured to obtain N frames of images; the acquisition module 701 may be implemented by a processor invoking program instructions in a memory to control a camera to acquire images;
the determining module 702 is configured to determine one reference image among the N frames of images, with the remaining N-1 frames being images to be processed; the determining module 702 may be implemented by a processor invoking program instructions in a memory or externally input program instructions;
the de-ghosting module 703 is configured to perform the following steps 1-4 on the i-th frame of the N-1 frames to be processed, to obtain N-1 de-ghosted frames, where i takes every positive integer not greater than N-1:
Step 1: register the i-th frame with the reference image to obtain the i-th registered image;
Step 2: obtain the i-th difference image from the i-th registered image and the reference image;
Step 3: obtain the i-th ghost weight image from the i-th difference image;
Step 4: fuse the i-th registered image with the reference image according to the i-th ghost weight image to obtain the i-th de-ghosted frame;
the de-ghosting module 703 may be implemented by a processor, which may perform the corresponding computation by invoking data and algorithms in a local memory or a cloud server;
the averaging module 704 is configured to perform an averaging operation on the reference image and the N-1 de-ghosted frames to obtain the first target image; the averaging module 704 may be implemented by a processor, which may perform the corresponding computation by invoking data and algorithms in a local memory or a cloud server.
In a specific implementation, the acquisition module 701 is specifically configured to perform the method mentioned in step 31 and equivalent alternatives; the determining module 702 is specifically configured to perform the method mentioned in step 32 and equivalent alternatives; the de-ghosting module 703 is specifically configured to perform the method mentioned in step 33 and equivalent alternatives; and the averaging module 704 is specifically configured to perform the method mentioned in step 34 and equivalent alternatives. The specific method embodiments above and the explanations and descriptions in those embodiments also apply to the execution of the method in the apparatus.
In a specific implementation, the apparatus 700 further includes a detection module 705. The detection module 705 is configured to, upon detecting that the following three conditions exist simultaneously, control the acquisition module to acquire N frames of images according to the first acquisition manner below:
Condition 1: the camera's viewfinder image is detected to be a moving image;
Condition 2: the camera's current exposure duration is detected to exceed the safe duration;
Condition 3: the camera is detected to be in an extremely bright environment, that is, the current sensitivity is less than a first preset threshold and the current exposure duration is less than a second preset threshold.
First acquisition manner: keep the product of the camera's current sensitivity and exposure duration constant, reduce the exposure duration and increase the sensitivity by a preset ratio to obtain a first exposure duration and a first sensitivity; set the camera's exposure duration and sensitivity to the first exposure duration and the first sensitivity, respectively, and capture N frames of images.
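A one-line sketch of the first acquisition manner's parameter adjustment, which keeps the sensitivity-times-exposure product constant; the ratio value 2.0 is an assumed example of the "preset ratio".

    def first_capture_params(current_iso, current_exposure_ms, ratio=2.0):
        # divide the exposure duration by the preset ratio and multiply the
        # sensitivity by the same ratio, so their product stays constant
        return current_exposure_ms / ratio, current_iso * ratio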
In a specific implementation, the detection module 705 is configured to, upon detecting that the following three conditions exist simultaneously, control the acquisition module to acquire N frames of images according to the second or third acquisition manner below:
Condition 1: the camera's viewfinder image is detected to be a moving image; or,
Condition 2: the camera's current exposure duration is detected to exceed the safe duration; or,
Condition 3: the camera is detected to be in a moderately bright environment, that is, the current sensitivity is within a first preset threshold interval and the current exposure duration is within a second preset threshold interval.
Second acquisition manner: keep the product of the camera's current sensitivity and exposure duration constant, reduce the exposure duration and increase the sensitivity by a preset ratio to obtain a second exposure duration and a second sensitivity; set the camera's exposure duration and sensitivity to the second exposure duration and the second sensitivity, respectively, and capture N frames of images.
Third acquisition manner: capture one frame of a second new image at the camera's current sensitivity and exposure duration; keep the camera's current sensitivity unchanged and set the current exposure duration to a lower, third exposure duration; and capture N frames of images.
The apparatus 700 may further include a fusion module 706, configured to obtain a second target image from the first target image and the first new image, or to obtain a third target image from the first target image and the second new image.
For example, it may be specifically configured to: register the first new image with the reference image to obtain a first registered image; obtain a first difference image from the first registration map and the first target image; obtain a first ghost weight image from the first difference image; fuse the first registered image with the first target image according to the first ghost weight image to obtain a first de-ghosted image; and perform weighted fusion of pixel values of the first de-ghosted image and the first target image to obtain the second target image. It is specifically configured to perform the method mentioned in step 43, implementation (1), and equivalent alternatives.
Or it may be specifically configured to: register the first new image with the first target image to obtain a second registered image; obtain a second difference image from the second registration map and the first target image; obtain a second ghost weight image from the second difference image; fuse the second registered image with the first target image according to the second ghost weight image to obtain a second de-ghosted image; and perform weighted fusion of pixel values of the second de-ghosted image and the first target image to obtain the second target image. It is specifically configured to perform the method mentioned in step 43, implementation (2), and equivalent alternatives.
Or it may be specifically configured to: perform brightness correction on the first target image according to the second new image to obtain a fourth target image; register the second new image with the reference image to obtain a third registered image; obtain a third difference image from the third registration map and the fourth target image; obtain a third ghost weight image from the third difference image; fuse the third registered image with the fourth target image according to the third ghost weight image to obtain a third de-ghosted image; perform weighted fusion of pixel values of the third de-ghosted image and the fourth target image to obtain a fifth target image; and perform pyramid fusion on the fifth target image and the first target image to obtain the third target image. It is specifically configured to perform the method mentioned in step 53, implementation (1), and equivalent alternatives.
Or it may be specifically configured to: perform brightness correction on the first target image according to the second new image to obtain a fourth target image; register the second new image with the fourth target image to obtain a fourth registered image; obtain a fourth difference image from the fourth registration map and the fourth target image; obtain a fourth ghost weight image from the fourth difference image; fuse the fourth registered image with the fourth target image according to the fourth ghost weight image to obtain a fourth de-ghosted image; perform weighted fusion of pixel values of the fourth de-ghosted image and the fourth target image to obtain a sixth target image; and perform pyramid fusion on the sixth target image and the first target image to obtain the third target image. It is specifically configured to perform the method mentioned in step 53, implementation (2), and equivalent alternatives.
If the user directly enters a capture mode of their own choosing, such as the first, second, or third capture mode mentioned above, the terminal does not need to detect the shooting environment, because each capture mode has a preset parameter rule (stored in advance locally on the terminal or on a cloud server); that is, each capture mode has a corresponding sensitivity and exposure duration, and may of course also include other performance parameters. Once a specific capture mode is entered, the acquisition module automatically adjusts to the corresponding sensitivity and corresponding exposure duration for shooting. Therefore, if the user directly uses a capture mode, the acquisition module captures N images with the corresponding sensitivity and corresponding exposure duration for the subsequent image processing of that mode. A sketch of such a preset-rule lookup follows.
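A hypothetical sketch of such a preset parameter rule; the mode names, ISO values, exposure durations, and frame counts below are illustrative only and are not taken from the application.

    # illustrative per-mode capture presets (hypothetical values)
    CAPTURE_MODE_PRESETS = {
        "first_capture_mode":  {"iso": 400, "exposure_ms": 10, "frames": 6},
        "second_capture_mode": {"iso": 800, "exposure_ms": 5,  "frames": 7},
        "third_capture_mode":  {"iso": 200, "exposure_ms": 4,  "frames": 7},
    }

    def params_for(mode):
        # look up the sensitivity/exposure rule for the selected capture mode
        return CAPTURE_MODE_PRESETS[mode]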
The detection module 705 and the fusion module 706 described above may be implemented by a processor invoking program instructions in a memory or program instructions in the cloud.
The present invention provides an image processing apparatus 700. With this apparatus, or a terminal containing it, a user can capture clear images in different scenes, satisfying the desire to snap a shot and record life scenes anytime and anywhere, which greatly improves the user experience.
It should be understood that the division of the modules in the apparatus 700 above is merely a division of logical functions; in an actual implementation they may be wholly or partly integrated into one physical entity or physically separated. For example, each of the above modules may be a separately established processing element, or may be integrated in a chip of the terminal; alternatively, they may be stored in a storage element of the controller in the form of program code, and a processing element of the processor invokes and executes the functions of each of the above modules. In addition, the modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit chip with signal processing capability. In the implementation process, the steps of the above method or the above modules may be completed by integrated logic circuits of hardware in the processor element or by instructions in the form of software. The processing element may be a general-purpose processor, such as a central processing unit (CPU), or may be one or more integrated circuits configured to implement the above method, for example one or more application-specific integrated circuits (ASIC), or one or more digital signal processors (DSP), or one or more field-programmable gate arrays (FPGA), and so on.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although some embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as including the enumerated embodiments and all changes and modifications falling within the scope of the present invention. Obviously, those skilled in the art can make various changes and variations to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. If these modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include these changes and variations.

Claims (30)

  1. An image processing method, wherein the method comprises:
    obtaining N frames of images;
    determining one reference image among the N frames of images, the remaining N-1 frames being images to be processed;
    obtaining N-1 de-ghosted frames from the N-1 frames to be processed;
    performing an averaging operation on the reference image and the N-1 de-ghosted frames to obtain a first target image;
    wherein obtaining the N-1 de-ghosted frames from the N-1 frames to be processed comprises:
    performing steps 1-4 on the i-th frame of the N-1 frames to be processed, i taking every positive integer not greater than N-1,
    Step 1: registering the i-th frame with the reference image to obtain an i-th registered image;
    Step 2: obtaining an i-th difference image from the i-th registered image and the reference image;
    Step 3: obtaining an i-th ghost weight image from the i-th difference image;
    Step 4: fusing the i-th registered image with the reference image according to the i-th ghost weight image to obtain an i-th de-ghosted frame.
  2. The method according to claim 1, wherein before obtaining the N frames of images, the method further comprises: when the following three conditions are detected to exist simultaneously, generating a control signal, the control signal being used to instruct acquisition of the N frames of images;
    Condition 1: the camera's viewfinder image is detected to be a moving image;
    Condition 2: the camera's current exposure duration is detected to exceed the safe duration;
    Condition 3: the camera's current sensitivity is detected to be less than a first preset threshold, and the current exposure duration is less than a second preset threshold.
  3. The method according to claim 1 or 2, wherein obtaining the N frames of images comprises:
    keeping the product of the camera's current sensitivity and current exposure duration constant, reducing the exposure duration and increasing the sensitivity by a preset ratio to obtain a first exposure duration and a first sensitivity;
    setting the camera's current exposure duration and current sensitivity to the first exposure duration and the first sensitivity, respectively, and capturing N frames of images.
  4. The method according to claim 1, wherein before obtaining the N frames of images, the method further comprises: when the following three conditions are detected to exist simultaneously, generating a control signal, the control signal being used to instruct acquisition of the N frames of images;
    Condition 1: the camera's viewfinder image is detected to be a moving image;
    Condition 2: the camera's current exposure duration is detected to exceed the safe duration;
    Condition 3: the camera's current sensitivity is detected to be within a first preset threshold interval, and the current exposure duration is within a second preset threshold interval.
  5. The method according to claim 1 or 4, wherein obtaining the N frames of images comprises:
    keeping the product of the camera's current sensitivity and current exposure duration constant, reducing the exposure duration and increasing the sensitivity by a preset ratio to obtain a second exposure duration and a second sensitivity;
    setting the camera's current exposure duration and current sensitivity to the second exposure duration and the second sensitivity, respectively, and capturing N frames of images;
    the method further comprising:
    capturing one frame of a first new image at the camera's current sensitivity and current exposure duration;
    obtaining a second target image from the first target image and the first new image.
  6. The method according to claim 5, wherein obtaining the second target image from the first target image and the first new image comprises:
    registering the first new image with the reference image to obtain a first registered image;
    obtaining a first difference image from the first registered image and the first target image;
    obtaining a first ghost weight image from the first difference image;
    fusing the first registered image with the first target image according to the first ghost weight image to obtain a first de-ghosted image;
    performing weighted fusion of pixel values of the first de-ghosted image and the first target image to obtain the second target image.
  7. The method according to claim 5, wherein obtaining the second target image from the first target image and the first new image comprises:
    registering the first new image with the first target image to obtain a second registered image;
    obtaining a second difference image from the second registration map and the first target image;
    obtaining a second ghost weight image from the second difference image;
    fusing the second registered image with the first target image according to the second ghost weight image to obtain a second de-ghosted image;
    performing weighted fusion of pixel values of the second de-ghosted image and the first target image to obtain the second target image.
  8. The method according to claim 1 or 4, wherein obtaining the N frames of images comprises:
    keeping the camera's current sensitivity unchanged, setting the current exposure duration to a lower, third exposure duration, and capturing N frames of images;
    the method further comprising:
    capturing one frame of a second new image at the camera's current sensitivity and current exposure duration;
    obtaining a third target image from the first target image and the second new image.
  9. The method according to claim 8, wherein obtaining the third target image from the first target image and the second new image comprises:
    performing brightness correction on the first target image according to the second new image to obtain a fourth target image;
    registering the second new image with the reference image to obtain a third registered image;
    obtaining a third difference image from the third registration map and the fourth target image;
    obtaining a third ghost weight image from the third difference image;
    fusing the third registered image with the fourth target image according to the third ghost weight image to obtain a third de-ghosted image;
    performing weighted fusion of pixel values of the third de-ghosted image and the fourth target image to obtain a fifth target image;
    performing pyramid fusion on the fifth target image and the first target image to obtain the third target image.
  10. The method according to claim 8, wherein obtaining the third target image from the first target image and the second new image comprises:
    performing brightness correction on the first target image according to the second new image to obtain a fourth target image;
    registering the second new image with the fourth target image to obtain a fourth registered image;
    obtaining a fourth difference image from the fourth registration map and the fourth target image;
    obtaining a fourth ghost weight image from the fourth difference image;
    fusing the fourth registered image with the fourth target image according to the fourth ghost weight image to obtain a fourth de-ghosted image;
    performing weighted fusion of pixel values of the fourth de-ghosted image and the fourth target image to obtain a sixth target image;
    performing pyramid fusion on the sixth target image and the first target image to obtain the third target image.
  11. The method according to claim 1, wherein obtaining the N frames of images comprises: upon receiving a shooting instruction and detecting that the following three conditions exist simultaneously, keeping the product of the camera's current sensitivity and current exposure duration constant, reducing the exposure duration and increasing the sensitivity by a preset ratio to obtain a first exposure duration and a first sensitivity;
    capturing N frames of images according to the first exposure duration and the first sensitivity;
    Condition 1: the camera's viewfinder image is detected to be a moving image;
    Condition 2: the camera's current exposure duration is detected to exceed the safe duration;
    Condition 3: the camera's current sensitivity is detected to be less than a first preset threshold, and the current exposure duration is less than a second preset threshold.
  12. The method according to claim 1, wherein acquiring the N frames of images comprises: upon detecting that the following three conditions exist simultaneously, keeping the product of the camera's current sensitivity and current exposure duration constant, reducing the exposure duration and increasing the sensitivity by a preset ratio to obtain a first exposure duration and a first sensitivity;
    receiving a shooting instruction, and capturing N frames of images according to the first exposure duration and the first sensitivity;
    Condition 1: the camera's viewfinder image is detected to be a moving image;
    Condition 2: the camera's current exposure duration is detected to exceed the safe duration;
    Condition 3: the camera's current sensitivity is detected to be less than a first preset threshold, and the current exposure duration is less than a second preset threshold.
  13. The method according to claim 1, wherein acquiring the N frames of images comprises: upon receiving a shooting instruction and detecting that the following three conditions exist simultaneously, keeping the product of the camera's current sensitivity and current exposure duration constant, reducing the exposure duration and increasing the sensitivity by a preset ratio to obtain a second exposure duration and a second sensitivity;
    capturing N frames of images according to the second exposure duration and the second sensitivity;
    Condition 1: the camera's viewfinder image is detected to be a moving image;
    Condition 2: the camera's current exposure duration is detected to exceed the safe duration;
    Condition 3: the camera's current sensitivity is detected to be within a first preset threshold interval, and the current exposure duration is within a second preset threshold interval.
  14. The method according to claim 1, wherein the method further comprises:
    capturing one frame of a first new image according to the current sensitivity and the current exposure duration of the camera;
    registering the first new image with the first target image or the reference image to obtain a first registration image;
    obtaining a first difference image according to the first registration image and the first target image;
    obtaining a first ghost weight image according to the first difference image;
    fusing the first registration image with the first target image according to the first ghost weight image, to obtain a first de-ghost image; and
    performing weighted fusion of pixel values according to the first de-ghost image and the first target image, to obtain the second target image.
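Claims 14 and 15 fold one extra frame, captured at the original exposure settings, into the first target image through the same register, difference, ghost-weight, and fuse chain, followed by a weighted blend of pixel values. A minimal sketch of that chain follows; translation-only alignment via phase correlation, the sigmoid-shaped weight, and the 50/50 blend are simplifying assumptions for illustration, not requirements of the claims, and the inputs are assumed to be 8-bit BGR arrays of equal size.

```python
import cv2
import numpy as np


def align_translation(image: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Register `image` onto `target`, assuming a pure translation between them."""
    g_target = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY).astype(np.float32)
    g_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # phaseCorrelate(src1, src2) reports the shift of src2 relative to src1
    (dx, dy), _ = cv2.phaseCorrelate(g_target, g_image)
    shift_back = np.float32([[1, 0, -dx], [0, 1, -dy]])
    h, w = target.shape[:2]
    return cv2.warpAffine(image, shift_back, (w, h))


def second_target(first_target: np.ndarray, new_frame: np.ndarray,
                  diff_threshold: float = 10.0, softness: float = 5.0,
                  blend: float = 0.5) -> np.ndarray:
    """Register, difference, ghost weight, fuse, then weighted pixel fusion."""
    reg = align_translation(new_frame, first_target).astype(np.float32)  # first registration image
    tgt = first_target.astype(np.float32)
    diff = cv2.absdiff(reg, tgt).mean(axis=2)                            # first difference image
    weight = 1.0 / (1.0 + np.exp(-(diff - diff_threshold) / softness))   # first ghost weight image
    weight = weight[..., None]                                           # broadcast over channels
    deghosted = weight * tgt + (1.0 - weight) * reg                      # first de-ghost image
    fused = blend * deghosted + (1.0 - blend) * tgt                      # weighted fusion of pixel values
    return np.clip(fused, 0, 255).astype(np.uint8)                       # second target image
```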
  15. The method according to claim 13 or 14, wherein the method further comprises:
    capturing one frame of a first new image according to the current sensitivity and the current exposure duration of the camera;
    registering the first new image with the first target image or the reference image to obtain a first registration image;
    obtaining a first difference image according to the first registration image and the first target image;
    obtaining a first ghost weight image according to the first difference image;
    fusing the first registration image with the first target image according to the first ghost weight image, to obtain a first de-ghost image; and
    performing weighted fusion of pixel values according to the first de-ghost image and the first target image, to obtain the second target image.
  16. The method according to claim 1, wherein the acquiring N frames of images comprises: when the following three cases are detected to exist simultaneously, keeping the current sensitivity of the camera unchanged, and setting the current exposure duration to a lower third exposure duration; and
    receiving a shooting instruction, and capturing N frames of images according to the third exposure duration and the second sensitivity;
    Case 1: it is detected that a framing image of the camera is a moving image;
    Case 2: it is detected that the current exposure duration of the camera exceeds a safe duration;
    Case 3: it is detected that the current sensitivity of the camera is within a first preset threshold interval, and the current exposure duration is within a second preset threshold interval.
  17. The method according to claim 1, wherein the acquiring N frames of images comprises: when a shooting instruction is received and the following three cases are detected to exist simultaneously, keeping the current sensitivity of the camera unchanged, and setting the current exposure duration to a lower third exposure duration; and
    capturing N frames of images according to the third exposure duration and the second sensitivity;
    Case 1: it is detected that a framing image of the camera is a moving image;
    Case 2: it is detected that the current exposure duration of the camera exceeds a safe duration;
    Case 3: it is detected that the current sensitivity of the camera is within a first preset threshold interval, and the current exposure duration is within a second preset threshold interval.
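In claims 16 and 17 the sensitivity is left untouched and only the exposure time is shortened, so the captured frames come out darker and are brightened later by the correction in claim 18. A minimal sketch, assuming an arbitrary shortening factor:

```python
def third_exposure(current_exposure_s: float, current_iso: int,
                   factor: float = 2.0) -> tuple:
    """Return (third exposure duration, sensitivity): exposure shortened, ISO unchanged.
    The resulting frames are darker by roughly `factor`."""
    return current_exposure_s / factor, current_iso


# Example: a 0.1 s exposure at ISO 100 becomes 0.05 s, still at ISO 100.
print(third_exposure(0.1, 100))  # (0.05, 100)
```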
  18. The method according to claim 16 or 17, wherein the method further comprises:
    capturing one frame of a second new image according to the current sensitivity and the current exposure duration of the camera;
    performing brightness correction on the first target image according to the second new image, to obtain a fourth target image;
    registering the second new image with the fourth target image or the reference image to obtain a third registration image;
    obtaining a third difference image according to the third registration image and the fourth target image;
    obtaining a third ghost weight image according to the third difference image;
    fusing the third registration image with the fourth target image according to the third ghost weight image, to obtain a third de-ghost image;
    performing weighted fusion of pixel values according to the third de-ghost image and the fourth target image, to obtain a fifth target image; and
    performing pyramid fusion processing on the fifth target image and the first target image, to obtain the third target image.
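The brightness correction in claim 18 lifts the first target image, captured with the shortened exposure of claims 16 and 17, towards the brightness of the second new image captured at the original exposure. The sketch below matches the mean luma in YCrCb space; the claim does not fix a correction method, so this gain matching is only one plausible choice. The later steps of the claim reuse the registration and ghost-weight chain sketched after claim 14 and the pyramid fusion sketched after claim 28.

```python
import cv2
import numpy as np


def brightness_correct(first_target: np.ndarray, second_new: np.ndarray) -> np.ndarray:
    """Scale the luma of `first_target` so its mean matches that of `second_new`
    (the frame captured at the original, longer exposure)."""
    ycc_target = cv2.cvtColor(first_target, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    ycc_new = cv2.cvtColor(second_new, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    gain = (ycc_new[..., 0].mean() + 1e-6) / (ycc_target[..., 0].mean() + 1e-6)
    ycc_target[..., 0] = np.clip(ycc_target[..., 0] * gain, 0, 255)  # brighten the luma channel only
    return cv2.cvtColor(ycc_target.astype(np.uint8), cv2.COLOR_YCrCb2BGR)  # fourth target image
```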
  19. An image processing apparatus, wherein the apparatus comprises:
    an acquiring module, configured to obtain N frames of images;
    a determining module, configured to determine a reference image in the N frames of images, wherein the remaining N-1 frames of images are to-be-processed images;
    a de-ghosting module, configured to perform the following step 1 to step 4 on an i-th frame image in the N-1 frames of to-be-processed images, to obtain N-1 frames of de-ghost images, where i takes all positive integers not greater than N-1;
    step 1: registering the i-th frame image with the reference image to obtain an i-th registration image;
    step 2: obtaining an i-th difference image according to the i-th registration image and the reference image;
    step 3: obtaining an i-th ghost weight image according to the i-th difference image;
    step 4: fusing the i-th registration image with the reference image according to the i-th ghost weight image, to obtain an i-th frame de-ghost image; and
    a mean operation module, configured to perform a mean operation on the reference image and the N-1 frames of de-ghost images to obtain a first target image.
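Claim 19 packages the method steps as modules: per-frame registration, difference image, ghost weight image, fusion against the reference, and a final mean over the reference image and the N-1 de-ghost images. The sketch below walks through those steps for same-sized BGR frames; the ORB-plus-homography registration and the sigmoid ghost weight are illustrative choices, and the thresholds are placeholders.

```python
import cv2
import numpy as np


def register(image: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Warp `image` onto `reference` using ORB features and a RANSAC homography."""
    orb = cv2.ORB_create(1000)
    kp_i, des_i = orb.detectAndCompute(image, None)
    kp_r, des_r = orb.detectAndCompute(reference, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_i, des_r)
    matches = sorted(matches, key=lambda m: m.distance)[:200]
    src = np.float32([kp_i[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(image, homography, (w, h))


def ghost_weight(registered: np.ndarray, reference: np.ndarray,
                 threshold: float = 10.0, softness: float = 5.0) -> np.ndarray:
    """Per-pixel weight in [0, 1]; large differences (moving content) get a weight
    close to 1 so the reference pixel dominates there and ghosts are suppressed."""
    diff = cv2.absdiff(registered, reference).astype(np.float32).mean(axis=2)  # difference image
    return 1.0 / (1.0 + np.exp(-(diff - threshold) / softness))                # ghost weight image


def first_target_image(frames: list, reference_index: int = 0) -> np.ndarray:
    """Steps 1 to 4 for every non-reference frame, then the mean operation."""
    reference = frames[reference_index].astype(np.float32)
    fused = [reference]
    for i, frame in enumerate(frames):
        if i == reference_index:
            continue
        reg = register(frame, frames[reference_index]).astype(np.float32)  # step 1
        weight = ghost_weight(reg, reference)[..., None]                   # steps 2 and 3
        fused.append(weight * reference + (1.0 - weight) * reg)            # step 4
    return np.mean(fused, axis=0).astype(np.uint8)                         # first target image
```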
  20. The apparatus according to claim 19, wherein the apparatus further comprises a detection module, and the detection module is configured to, when the following three cases are detected to exist simultaneously, control the acquiring module to acquire N frames of images;
    Case 1: it is detected that a framing image of the camera is a moving image;
    Case 2: it is detected that the current exposure duration of the camera exceeds a safe duration;
    Case 3: it is detected that the current sensitivity of the camera is less than a first preset threshold, and the current exposure duration is less than a second preset threshold.
  21. The apparatus according to claim 19 or 20, wherein the acquiring module is configured to:
    keep the product of the current sensitivity and the current exposure duration of the camera unchanged, and reduce the exposure duration and increase the sensitivity according to a preset ratio, to obtain a first exposure duration and a first sensitivity; and
    set the current exposure duration and the current sensitivity of the camera to the first exposure duration and the first sensitivity respectively, and capture N frames of images.
  22. The apparatus according to claim 19, wherein the apparatus further comprises a detection module, and the detection module is configured to, when the following three cases are detected to exist simultaneously, control the acquiring module to acquire N frames of images;
    Case 1: it is detected that a framing image of the camera is a moving image;
    Case 2: it is detected that the current exposure duration of the camera exceeds a safe duration;
    Case 3: it is detected that the current sensitivity of the camera is within a first preset threshold interval, and the current exposure duration is within a second preset threshold interval.
  23. The apparatus according to claim 19 or 22, wherein the acquiring module is configured to:
    keep the product of the current sensitivity and the current exposure duration of the camera unchanged, and reduce the exposure duration and increase the sensitivity according to a preset ratio, to obtain a second exposure duration and a second sensitivity;
    set the current exposure duration and the current sensitivity of the camera to the second exposure duration and the second sensitivity respectively, and capture N frames of images; and
    capture one frame of a first new image according to the current sensitivity and the current exposure duration of the camera;
    the apparatus further comprises:
    a fusion module, configured to obtain a second target image according to the first target image and the first new image.
  24. The apparatus according to claim 23, wherein the fusion module is configured to:
    register the first new image with the reference image to obtain a first registration image;
    obtain a first difference image according to the first registration image and the first target image;
    obtain a first ghost weight image according to the first difference image;
    fuse the first registration image with the first target image according to the first ghost weight image, to obtain a first de-ghost image; and
    perform weighted fusion of pixel values according to the first de-ghost image and the first target image, to obtain the second target image.
  25. The apparatus according to claim 24, wherein the fusion module is configured to:
    register the first new image with the first target image to obtain a second registration image;
    obtain a second difference image according to the second registration image and the first target image;
    obtain a second ghost weight image according to the second difference image;
    fuse the second registration image with the first target image according to the second ghost weight image, to obtain a second de-ghost image; and
    perform weighted fusion of pixel values according to the second de-ghost image and the first target image, to obtain the second target image.
  26. The apparatus according to claim 19 or 22, wherein the acquiring module is configured to:
    keep the current sensitivity of the camera unchanged, set the current exposure duration to a lower third exposure duration, and capture N frames of images; and
    capture one frame of a second new image according to the current sensitivity and the current exposure duration of the camera;
    the apparatus further comprises:
    a fusion module, configured to obtain a third target image according to the first target image and the second new image.
  27. The apparatus according to claim 26, wherein the fusion module is configured to:
    perform brightness correction on the first target image according to the second new image, to obtain a fourth target image;
    register the second new image with the reference image to obtain a third registration image;
    obtain a third difference image according to the third registration image and the fourth target image;
    obtain a third ghost weight image according to the third difference image;
    fuse the third registration image with the fourth target image according to the third ghost weight image, to obtain a third de-ghost image;
    perform weighted fusion of pixel values according to the third de-ghost image and the fourth target image, to obtain a fifth target image; and
    perform pyramid fusion processing on the fifth target image and the first target image, to obtain the third target image.
  28. The apparatus according to claim 26, wherein the fusion module is configured to:
    perform brightness correction on the first target image according to the second new image, to obtain a fourth target image;
    register the second new image with the fourth target image to obtain a fourth registration image;
    obtain a fourth difference image according to the fourth registration image and the fourth target image;
    obtain a fourth ghost weight image according to the fourth difference image;
    fuse the fourth registration image with the fourth target image according to the fourth ghost weight image, to obtain a fourth de-ghost image;
    perform weighted fusion of pixel values according to the fourth de-ghost image and the fourth target image, to obtain a sixth target image; and
    perform pyramid fusion processing on the sixth target image and the first target image, to obtain the third target image.
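The pyramid fusion processing named in claims 18, 27, and 28 can be realized as a Laplacian pyramid blend: each input is decomposed into band-pass levels, corresponding levels are mixed, and the result is collapsed back into a single image. The sketch below assumes equally sized inputs, four levels, and a uniform 50/50 per-level blend; the claims do not prescribe these choices.

```python
import cv2
import numpy as np


def laplacian_pyramid(image: np.ndarray, levels: int) -> list:
    """Decompose an image into `levels` band-pass layers plus a coarse residual."""
    gaussian = [image.astype(np.float32)]
    for _ in range(levels):
        gaussian.append(cv2.pyrDown(gaussian[-1]))
    pyramid = []
    for i in range(levels):
        size = (gaussian[i].shape[1], gaussian[i].shape[0])
        pyramid.append(gaussian[i] - cv2.pyrUp(gaussian[i + 1], dstsize=size))
    pyramid.append(gaussian[-1])  # coarsest level keeps the low-frequency residual
    return pyramid


def pyramid_fuse(image_a: np.ndarray, image_b: np.ndarray,
                 levels: int = 4, weight_a: float = 0.5) -> np.ndarray:
    """Blend two equally sized images level by level in the Laplacian pyramid domain."""
    pyr_a = laplacian_pyramid(image_a, levels)
    pyr_b = laplacian_pyramid(image_b, levels)
    blended = [weight_a * a + (1.0 - weight_a) * b for a, b in zip(pyr_a, pyr_b)]
    fused = blended[-1]
    for level in reversed(blended[:-1]):
        size = (level.shape[1], level.shape[0])
        fused = cv2.pyrUp(fused, dstsize=size) + level
    return np.clip(fused, 0, 255).astype(np.uint8)  # e.g. the third target image
```

Blending in the pyramid domain avoids the hard seams that a direct per-pixel average of two differently exposed images tends to produce.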
  29. A terminal device, wherein the terminal device comprises a memory, a processor, a bus, and a camera, and the memory, the camera, and the processor are connected through the bus; wherein
    the camera is configured to acquire an image signal under the control of the processor;
    the memory is configured to store a computer program and instructions; and
    the processor is configured to invoke the computer program and the instructions stored in the memory, so that the terminal device performs the method according to any one of claims 1 to 18.
  30. The terminal device according to claim 29, wherein the terminal device further comprises an antenna system; the antenna system transmits and receives wireless communication signals under the control of the processor to implement wireless communication with a mobile communication network; and the mobile communication network comprises one or more of the following: a GSM network, a CDMA network, a 3G network, a 4G network, FDMA, TDMA, PDC, TACS, AMPS, WCDMA, TDSCDMA, WIFI, and an LTE network.
PCT/CN2018/109951 2017-10-13 2018-10-12 Image processing method and device and apparatus WO2019072222A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18866515.2A EP3686845B1 (en) 2017-10-13 2018-10-12 Image processing method and device and apparatus
US16/847,178 US11445122B2 (en) 2017-10-13 2020-04-13 Image processing method and apparatus, and device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201710954301.1 2017-10-13
CN201710954301 2017-10-13
CN201710959936.0A CN109671106B (en) 2017-10-13 2017-10-16 Image processing method, device and equipment
CN201710959936.0 2017-10-16

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/847,178 Continuation US11445122B2 (en) 2017-10-13 2020-04-13 Image processing method and apparatus, and device

Publications (1)

Publication Number: WO2019072222A1

Family

ID=66100399

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/109951 WO2019072222A1 (en) 2017-10-13 2018-10-12 Image processing method and device and apparatus

Country Status (1)

Country Link
WO (1) WO2019072222A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2816527A1 (en) * 2012-02-15 2014-12-24 Intel Corporation Method and device for processing digital image, and computer-readable recording medium
CN105264567A (en) * 2013-06-06 2016-01-20 苹果公司 Methods of image fusion for image stabilizaton
CN104349066A (en) * 2013-07-31 2015-02-11 华为终端有限公司 Method and device for generating images with high dynamic ranges
CN105931213A (en) * 2016-05-31 2016-09-07 南京大学 Edge detection and frame difference method-based high-dynamic range video de-ghosting method
CN106506981A (en) * 2016-11-25 2017-03-15 阿依瓦(北京)技术有限公司 Generate the apparatus and method of high dynamic range images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3686845A4 *

Similar Documents

Publication Publication Date Title
CN109671106B (en) Image processing method, device and equipment
JP6945744B2 (en) Shooting methods, devices, and devices
US10810720B2 (en) Optical imaging method and apparatus
US10827140B2 (en) Photographing method for terminal and terminal
WO2019183813A1 (en) Image capture method and device
WO2019071613A1 (en) Image processing method and device
CN112840634B (en) Electronic device and method for obtaining image
WO2021218551A1 (en) Photographing method and apparatus, terminal device, and storage medium
CN108156392B (en) Shooting method, terminal and computer readable storage medium
KR20170011876A (en) Image processing apparatus and method for operating thereof
CN114143471B (en) Image processing method, system, mobile terminal and computer readable storage medium
CN112468722B (en) Shooting method, device, equipment and storage medium
WO2019072222A1 (en) Image processing method and device and apparatus
CN110677581A (en) Lens switching method and device, storage medium and electronic equipment

Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18866515; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2018866515; Country of ref document: EP; Effective date: 20200423)