CN113364964B - Image processing method, image processing apparatus, storage medium, and terminal device

Info

Publication number: CN113364964B
Application number: CN202010135338.3A
Authority: CN (China)
Prior art keywords: image, brightness, area, preview, bayer
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN113364964A (en)
Inventor: 姚坤
Current assignee: Realme Chongqing Mobile Communications Co Ltd
Original assignee: Realme Chongqing Mobile Communications Co Ltd
Application filed by Realme Chongqing Mobile Communications Co Ltd
Publication of application: CN113364964A
Publication of grant: CN113364964B

Classifications

    • H04N 23/62: Control of cameras or camera modules comprising electronic image sensors; control of parameters via user interfaces
    • H04N 23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 23/80: Camera processing pipelines; components thereof
    • H04N 23/84: Camera processing pipelines; components for processing colour signals
    • H04N 25/13: Arrangement of colour filter arrays [CFA]; filter mosaics characterised by the spectral characteristics of the filter elements
    • (All within H: Electricity; H04: Electric communication technique; H04N: Pictorial communication, e.g. television; H04N 25/00: Circuitry of solid-state image sensors [SSIS])

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The disclosure provides an image processing method, an image processing apparatus, a storage medium, and a terminal device, and relates to the technical field of image processing. The method comprises the following steps: acquiring a first image, wherein the first image is synthesized from at least one preview image; acquiring a second image, wherein the second image has a higher pixel count than the first image; determining an area of the second image whose brightness satisfies a preset condition as a high-brightness area; and fusing the first image and the second image according to the high-brightness area to generate a target image. The method and the device can mitigate overexposure in high-brightness areas while retaining the detail information of the image, improving image quality with high practicability.

Description

Image processing method, image processing apparatus, storage medium, and terminal device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a computer-readable storage medium, and a terminal device.
Background
With the growing demand for photography, increasing image resolution and detail rendition is a common development direction in the industry; for example, mobile phones commonly use image sensors with tens of millions of pixels to support ultra-high-definition photos. However, across product iterations the photosensitive area of the sensor has grown far more slowly than the pixel count, so pixel density rises sharply and the photosensitive area of each individual pixel keeps shrinking. In actual shooting, a short exposure time can leave the captured image with a poor signal-to-noise ratio, while a long exposure time can overexpose the high-brightness areas of the image.
Therefore, how to increase the pixel count while keeping high-brightness areas well rendered is an urgent problem in the prior art.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium, and a terminal device, thereby alleviating, at least to some extent, the problem that high-luminance areas in existing captured images are poorly rendered.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided an image processing method, the method comprising: acquiring a first image, wherein the first image is synthesized from at least one preview image; acquiring a second image, wherein the second image has a higher pixel count than the first image; determining an area in the second image whose brightness satisfies a preset condition as a high-brightness area; and fusing the first image and the second image according to the high-brightness area to generate a target image.
According to a second aspect of the present disclosure, there is provided an image processing apparatus, the apparatus comprising: a first image acquisition module, configured to acquire a first image, wherein the first image is synthesized from at least one preview image; a second image acquisition module, configured to acquire a second image, wherein the second image has a higher pixel count than the first image; a highlight area determining module, configured to determine an area in the second image whose brightness satisfies a preset condition as a highlight area; and a target image generation module, configured to fuse the first image and the second image according to the high-brightness area to generate a target image.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described image processing method.
According to a fourth aspect of the present disclosure, there is provided a terminal device comprising: a processor; a memory for storing executable instructions of the processor; and a quad Bayer image sensor for acquiring a raw Bayer image; wherein the processor is configured to execute the above image processing method, via execution of the executable instructions, to process the raw Bayer image into a corresponding target image.
The technical scheme of the disclosure has the following beneficial effects:
According to the image processing method, the image processing apparatus, the storage medium, and the terminal device, a first image is acquired, wherein the first image is synthesized from at least one preview image; a second image is acquired, wherein the second image has a higher pixel count than the first image; an area in the second image whose brightness satisfies a preset condition is determined as a high-brightness area; and the first image and the second image are fused according to the high-brightness area to generate a target image. On one hand, the first image is synthesized from at least one lower-pixel preview image, and the preview image renders high-brightness areas well, while the second image has a higher pixel count, higher definition, and richer image detail; fusing the two can therefore mitigate overexposure in high-brightness areas while retaining the detail information of the second image, improving image quality. On another hand, fusing the first image and the second image can improve the signal-to-noise ratio of the target image, which is especially beneficial in the medium- and low-brightness regions of the image. On yet another hand, the exemplary embodiment generates the target image by fusing the second image with a composite of preview images, which is a simple process with low hardware cost and high practicability.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description serve to explain the principles of the disclosure. It is apparent that the drawings in the following description show only some embodiments of the present disclosure, and that a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 illustrates a terminal device for implementing an image processing method in the present exemplary embodiment;
fig. 2 shows a flowchart of an image processing method in the present exemplary embodiment;
FIG. 3 shows a schematic diagram of a comparison of a first image and a second image in this exemplary embodiment;
FIG. 4 shows another schematic diagram comparing a first image with a second image in the present exemplary embodiment;
fig. 5 shows a schematic diagram of a color filter array in the present exemplary embodiment;
fig. 6 shows a schematic diagram of obtaining a preview image in the present exemplary embodiment;
fig. 7 shows a schematic diagram of obtaining a second image in the present exemplary embodiment;
FIG. 8 illustrates a sub-flowchart of an image processing method in the present exemplary embodiment;
fig. 9 shows a block diagram of the configuration of an image processing apparatus in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Exemplary embodiments of the present disclosure provide an electronic device for implementing an image processing method. The electronic device comprises at least a processor and a memory for storing executable instructions of the processor, the processor being configured to perform the image processing method via execution of the executable instructions.
The electronic device may be implemented in various forms, and may include mobile devices such as a mobile phone, a tablet computer, a notebook computer, a Personal Digital Assistant (PDA), a navigation device, and a wearable device, as well as fixed devices such as a desktop computer and a smart television. The following takes the terminal device 100 in fig. 1 as an example to illustrate the configuration of the electronic device. It will be appreciated by those skilled in the art that, apart from components intended specifically for mobile purposes, the configuration of fig. 1 can also be applied to devices of a fixed type. In other embodiments, the terminal device 100 may include more or fewer components than shown, combine certain components, split certain components, or arrange components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. The interface connection relationship between the components is only shown schematically and does not constitute a limitation on the structure of the terminal device 100. In other embodiments, the terminal device 100 may also adopt interface connections different from fig. 1, or a combination of multiple interface connection manners.
As shown in fig. 1, the terminal device 100 may specifically include: the mobile terminal includes a processor 110, an internal memory 121, an external memory interface 122, a Universal Serial Bus (USB) interface 130, a charging management Module 140, a power management Module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication Module 150, a wireless communication Module 160, an audio Module 170, a speaker 171, a receiver 172, a microphone 173, an earphone interface 174, a sensor Module 180, a display 190, a camera Module 191, an indicator 192, a button 193, and a Subscriber Identity Module (SIM) card interface 194. Wherein the sensor module 180 may include a depth sensor 1801, a pressure sensor 1802, and the like.
Processor 110 may include one or more processing units, such as: the Processor 110 may include an Application Processor (AP), a modem Processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural Network Processor (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the time sequence signal to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. The memory may store instructions for implementing six modular functions: detection instructions, connection instructions, information management instructions, analysis instructions, data transmission instructions, and notification instructions, whose execution is controlled by the processor 110. In some implementations, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can be fetched directly from this memory. Avoiding repeated accesses reduces the waiting time of the processor 110, thereby increasing the efficiency of the system.
In some implementations, the processor 110 may include one or more interfaces. The interfaces may include an Inter-Integrated Circuit (I2C) interface, an Inter-IC Sound (I2S) interface, a Pulse Code Modulation (PCM) interface, a Universal Asynchronous Receiver/Transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a General-Purpose Input/Output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc. Connections with the other components of the terminal device 100 are made through these different interfaces.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the terminal device 100, to connect earphones for audio playback, or to connect the terminal device 100 to other electronic devices, such as computers and peripherals.
The charging management module 140 is configured to receive a charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the terminal device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives the input of the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 190, the camera module 191, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In other embodiments, the power management module 141 may be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be disposed in the same device.
The wireless communication function of the terminal device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied on the terminal device 100. The mobile communication module 150 may include at least one filter, a switch, a power Amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 171, the receiver 172, etc.) or displays an image or video through the display screen 190. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the terminal device 100, including Wireless Local Area Networks (WLAN) (e.g., Wireless Fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves radiated through the antenna 2.
The terminal device 100 implements a display function by the GPU, the display screen 190, the application processor, and the like. The GPU is a microprocessor that is coupled to a display screen 190 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 190 is used to display images, video, and the like. The display screen 190 includes a display panel. The display panel may be a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), an Active-Matrix Organic Light-Emitting Diode (AMOLED), a Flexible Light-Emitting Diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a Quantum Dot Light-Emitting Diode (QLED), or the like. In some embodiments, the terminal device 100 may include 1 or N display screens 190, N being a positive integer greater than 1.
The terminal device 100 may implement a shooting function through the ISP, the camera module 191, the video codec, the GPU, the display screen 190, the application processor, and the like.
The ISP is used to process the data fed back by the camera module 191. For example, when a user takes a picture, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, an optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and converting the electric signal into an image visible to the naked eye. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera module 191.
The camera module 191 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a Complementary Metal-Oxide-Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to be converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the terminal device 100 may include 1 or N camera modules 191, where N is a positive integer greater than 1, and if the terminal device 100 includes N cameras, one of the N cameras is a main camera.
The digital signal processor is used for processing digital signals; besides digital image signals, it can process other digital signals. For example, when the terminal device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency-point energy.
Video codecs are used to compress or decompress digital video. The terminal device 100 may support one or more video codecs. In this way, the terminal device 100 can play or record video in a plurality of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The external memory interface 122 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the terminal device 100. The external memory card communicates with the processor 110 through the external memory interface 122 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data (such as audio data and a phonebook) created during use of the terminal device 100. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a Universal Flash Storage (UFS). The processor 110 executes the various functional applications and data processing of the terminal device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The terminal device 100 may implement an audio function through the audio module 170, the speaker 171, the receiver 172, the microphone 173, the earphone interface 174, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into analog audio signals for output, and also used to convert analog audio inputs into digital audio signals. The audio module 170 may also be used to encode and decode audio signals. The speaker 171, also called a "horn", converts an audio electric signal into a sound signal. The receiver 172, also called "earpiece", is used to convert the electrical audio signal into a sound signal. A microphone 173, also referred to as a "microphone", is used to convert sound signals into electrical signals. The earphone interface 174 is used to connect a wired earphone.
The depth sensor 1801 is used to acquire depth information of a scene. In some embodiments, the depth sensor may be disposed in the camera module 191.
The pressure sensor 1802 is configured to sense a pressure signal, which can be converted to an electrical signal. In some embodiments, the pressure sensor 1802 may be disposed on the display screen 190. The pressure sensors 1802 can be of a wide variety, such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, and the like.
In addition, other functional sensors, such as a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc., may be disposed in the sensor module 180 according to actual needs.
The keys 193 include a power key, volume keys, and the like. The keys 193 may be mechanical keys or touch keys. The terminal device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the terminal device 100.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 194 is used to connect a SIM card. The SIM card can be brought into and out of contact with the terminal device 100 by being inserted into or pulled out of the SIM card interface 194. The terminal device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 194 can support a Nano SIM card, a Micro SIM card, a standard SIM card, etc. Multiple cards can be inserted into the same SIM card interface 194 at the same time; the cards may be of the same type or of different types. The SIM card interface 194 is also compatible with different types of SIM cards and with an external memory card. The terminal device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the terminal device 100 employs an eSIM, i.e., an embedded SIM card; the eSIM card may be embedded in the terminal device 100 and cannot be separated from it.
The exemplary embodiment of the present disclosure first provides an image processing method, which can be applied to a terminal device having a photographing function, such as a mobile phone, a tablet computer, or a digital camera. The terminal device may be configured with one or more camera modules, each including an image sensor for collecting images. By setting different hardware registers, the image sensor can capture raw image data at 64 megapixels or at 16 megapixels.
Fig. 2 shows a flow of the present exemplary embodiment, which may include the following steps S210 to S240:
step S210, a first image is obtained, wherein the first image is synthesized by at least one preview image.
The preview image is the image of the subject presented on the display interface after the user opens the camera of the terminal device. Since the preview image only needs to show the user the current shooting effect, a shorter exposure time can be set and a lower-pixel image acquired, for example a 16-megapixel image. In actual shooting, the present exemplary embodiment may synthesize at least one acquired preview image to obtain the first image. The preview images composing the first image may have different exposure times, i.e. the first image may be a high-dynamic-range (HDR) image. An HDR image provides more dynamic range and image detail than an ordinary image; by combining, for each exposure time, the preview image with the best-preserved detail, the exemplary embodiment can better reproduce the visual effect of the real scene.
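For illustration, the following Python sketch shows one way such a multi-exposure synthesis could be implemented. The patent does not prescribe a particular fusion algorithm; the "well-exposedness" weighting, the 8-bit input range, and the function name are assumptions:
```python
import numpy as np

def synthesize_first_image(previews):
    """Fuse preview frames taken with different exposure times into a single
    HDR-like first image (illustrative weighting; not the patent's method)."""
    stack = np.stack([p.astype(np.float32) / 255.0 for p in previews])
    # Well-exposedness weight: Gaussian around mid-gray (sigma = 0.2, assumed).
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-8
    fused = (weights * stack).sum(axis=0)
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```
Because pixels near mid-gray receive the highest weight, an overexposed highlight in a long-exposure preview is dominated by the corresponding well-exposed pixels of a shorter exposure, which is exactly the property the first image needs.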
Step S220, a second image is obtained, wherein the second image has a higher pixel count than the first image.
In the present exemplary embodiment, the second image is an image captured by the terminal device through the image sensor, specifically the image captured when the terminal device receives a photographing instruction input by the user. Generally, the photographed image has a longer exposure time and a higher pixel count, for example 64 megapixels. The photographing instruction is the operation information, received by the terminal device, indicating that the user wants to take a picture. It may be a touch operation on a specific photographing option, for example clicking or long-pressing a photographing button on the display interface; or a press of a physical key of the terminal device, for example the volume-up key may be mapped to control photographing; in addition, the photographing instruction may also be a voice instruction, a gesture instruction, or the like, which is not specifically limited in this disclosure.
In the present exemplary embodiment, the first image and the second image capture the same area, that is, they photograph the same subject and have the same content; the difference is that the second image has a higher pixel count than the first image, for example 64 megapixels versus 16 megapixels. The second image therefore has higher definition and better detail rendition than the first image. For example, in fig. 3 the left image is the 64-megapixel second image and the right image is the 16-megapixel first image; the trees in the same area appear more realistic and clearer in the second image than in the first image.
In addition, because the exposure time of the second image is longer than that of the preview images, dark details receive sufficient exposure and the signal-to-noise ratio of medium- and low-brightness areas improves, but some high-brightness areas can become oversaturated, i.e. overexposed. Fig. 4 shows a comparison between a first image and a second image in the present exemplary embodiment: the left diagram is the low-dynamic-range second image at 64 megapixels, and the right diagram is the high-dynamic-range first image at 16 megapixels; both cover the same photographed area, in which the sky area 410 is a high-brightness area. Owing to the longer exposure time of the second image, the sky area 410 appears overexposed there, while the sky area 410 in the first image appears normal.
In another embodiment, the terminal device may be configured with two image sensors of different pixel counts; for example, many current mobile phones are configured with dual cameras. The higher-pixel image sensor is used to capture the second image, and the lower-pixel image sensor is used to capture the preview images, for example a low-pixel camera module collecting 16-megapixel preview images and a high-pixel camera module collecting the 64-megapixel second image. Because both sensors complete their image acquisition within one shot, their exposure levels are similar; the second image has higher resolution but, limited by the amount of light each pixel receives, is more prone to noise, while the preview image is the opposite.
The present disclosure may also capture the preview image and the second image with the same image sensor. Specifically, the image sensor may be a quad Bayer (Quad Bayer) image sensor, i.e. an image sensor employing a quad Bayer color filter array. Referring to fig. 5, the left diagram shows a standard Bayer color filter array, whose cell array is arranged as GRBG (or BGGR, GBRG, RGGB); most image sensors use the standard Bayer color filter array. The right diagram in fig. 5 shows a quad Bayer color filter array, in which each group of four adjacent cells in the cell array shares the same color; some current high-pixel image sensors adopt the quad Bayer color filter array. On this basis, acquiring the preview images in step S210 can be realized by the following steps:
in a preview state, acquiring at least one raw Bayer preview image, based on the quad Bayer color filter array, collected by the quad Bayer image sensor;
performing pixel four-in-one processing and demosaic processing on each raw Bayer preview image to obtain at least one preview image;
and synthesizing the at least one preview image to obtain a first image.
A Bayer image is an image in RAW format: the image data obtained when the image sensor converts the captured optical signal into a digital signal, in which each pixel carries only one of the R, G, B colors. In the present exemplary embodiment, after an image is captured with a quad Bayer image sensor, the color arrangement of its pixels is as shown in the right diagram of fig. 5, with each group of four adjacent pixels sharing the same color; this is the raw Bayer preview image described above.
As shown in fig. 6, a pixel "four-in-one" process is first performed on the raw Bayer preview image P, merging the same-color pixels in each 2 × 2 unit into one pixel; the binned Bayer preview image Q1 then follows the arrangement of the standard Bayer color filter array. Next, demosaic processing (Demosaic) is performed on Q1 to obtain a preview image IMG1 in RGB format, where demosaicing refers to interpolating a Bayer image into a complete RGB image. Of course, the RGB format may be further converted into grayscale or other color modes according to actual requirements. Then, by synthesizing the plurality of preview images, a first image with the same pixel count as the preview images is obtained.
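A minimal Python sketch of the "four-in-one" binning step follows. The single-channel raw layout, the use of averaging rather than charge summation, and the function name are assumptions for illustration:
```python
import numpy as np

def bin_quad_bayer(raw):
    """Pixel 'four-in-one': merge each 2x2 same-color block of a quad-Bayer
    raw frame (single channel, even height/width) into one pixel, yielding a
    standard-Bayer mosaic at half resolution. Averaging is shown; a real
    sensor may instead sum the four photocharges."""
    h, w = raw.shape
    blocks = raw.astype(np.float32).reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3)).astype(raw.dtype)
```
The resulting half-resolution standard-Bayer mosaic corresponds to Q1 in fig. 6 and can be demosaiced by any conventional Bayer-to-RGB algorithm to give IMG1.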
In an exemplary embodiment, the step S220 of acquiring the second image may be implemented by:
in a photographing state, acquiring a raw Bayer photographed image, based on the quad Bayer color filter array, collected by the quad Bayer image sensor;
performing remosaic processing and demosaic processing on the raw Bayer photographed image to obtain a second image;
wherein the pixel count of the second image is four times that of the first image.
Here, remosaic processing (Remosaic) refers to converting a raw Bayer image based on the quad Bayer color filter array into a Bayer image based on the standard Bayer color filter array. As shown in fig. 7, remosaic processing may be performed on the raw Bayer photographed image S to obtain a Bayer image Q2 based on the standard Bayer color filter array, and Q2 may then be demosaiced to obtain a second image IMG2 in RGB format. Remosaicing and demosaicing can be realized by different interpolation algorithms, or by other related methods such as neural networks, which this disclosure does not limit. A terminal device usually provides an ISP (Image Signal Processing) unit that cooperates with the image sensor to perform the above remosaic and demosaic processing. Each pixel of the second image IMG2 has pixel values for the three RGB channels, denoted by C. Alternatively, remosaicing and demosaicing may be merged into a single interpolation pass that directly interpolates the missing color channels of each pixel from the pixel data of the raw Bayer photographed image, for example with linear or mean interpolation, to obtain the second image. Compared with Q1 in fig. 6, Q2 has 4 times as many pixels, while the area of each pixel is reduced to 1/4, so the light intake of each pixel decreases; accordingly, the pixel count of the second image is four times that of the first image.
In the present exemplary embodiment, the first image IMG1 and the second image IMG2 may use the same color mode, but their pixel counts differ. Assuming the raw Bayer preview image P and the raw Bayer photographed image S both have a pixel size of W × H, the second image IMG2 has a pixel size of W × H, the preview image IMG1 has a pixel size of W/2 × H/2, and the corresponding first image likewise has a pixel size of W/2 × H/2.
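To make the remosaic step concrete, here is a deliberately naive Python sketch that re-samples a quad-Bayer mosaic onto a standard GRBG grid by copying, for each mismatched site, the nearest pixel of the required color. The 4 × 4 tile layout, the GRBG phase, and the neighbor search are assumptions; production remosaic algorithms use far more sophisticated directional interpolation:
```python
import numpy as np

def quad_to_standard_bayer(raw):
    """Naive remosaic sketch: re-sample a quad-Bayer raw frame onto a
    standard GRBG Bayer grid. Matching sites are copied; the rest take the
    nearest same-color neighbor. Color codes: 0 = G, 1 = R, 2 = B."""
    h, w = raw.shape
    quad = np.array([[0, 0, 1, 1], [0, 0, 1, 1],
                     [2, 2, 0, 0], [2, 2, 0, 0]])   # quad-Bayer 4x4 tile
    std = np.array([[0, 1], [2, 0]])                # standard GRBG 2x2 tile
    rows, cols = np.arange(h)[:, None], np.arange(w)[None, :]
    have = quad[rows % 4, cols % 4]                 # color present at each site
    want = std[rows % 2, cols % 2]                  # color the output needs
    out = raw.copy()
    ys, xs = np.nonzero(have != want)
    for y, x in zip(ys, xs):                        # small fixed neighbor search
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0),
                       (1, 1), (-1, -1), (1, -1), (-1, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and have[ny, nx] == want[y, x]:
                out[y, x] = raw[ny, nx]
                break
    return out
```
With this 4 × 4 layout, every mismatched site has a same-color pixel among its eight neighbors, so the search always terminates for interior pixels; a real ISP would instead interpolate directionally to avoid the zipper artifacts nearest-copying produces.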
In step S230, a region in the second image with a brightness satisfying a preset condition is determined as a high-brightness region.
And step S240, fusing the first image and the second image according to the high-brightness area to generate a target image.
Generally, a photo contains a variety of objects, such as buildings, plants, sky, and vehicles, which exhibit different brightness levels in the image; for example, the sky is brighter than the shadow of a building. Therefore, when the sky and a building are both present in the second image, too long an exposure time may overexpose the sky area, while too short an exposure time may leave the details of the building unclear. The present exemplary embodiment therefore determines the high-luminance area in the second image and fuses the first image and the second image according to that area to generate the target image.
As described above, the first image and the second image have the same captured content. Although the first image has a lower pixel count, it is a high-dynamic-range image and renders high-brightness regions well; the second image has a higher pixel count but a longer exposure time, so its high-brightness regions are rendered poorly. Fusing the first image and the second image therefore combines the advantages of both and yields a higher-quality target image. For example, when the user takes a picture with a mobile phone, after the image sensor initially acquires the image data, the phone performs the above steps S210 to S240 and outputs a target image displayed on the screen; the target image may then be stored in an album or uploaded to the cloud.
In the present exemplary embodiment, the high-brightness region may be determined by extracting luminance features from the second image: for example, dividing the second image into a plurality of regular or irregular regions, computing the average luminance of each region, and designating regions whose average exceeds a preset threshold as high-brightness regions; or computing the maximum luminance of each region and designating regions whose maximum exceeds a preset threshold as high-brightness regions. Image features of the second image may also be used in combination: image features are extracted and analyzed to locate portions that may be high-brightness regions, and the luminance parameters of those portions are then extracted to confirm them. The high-brightness region in the second image can thus be determined in various ways, which this disclosure does not specifically limit.
In summary, in the present exemplary embodiment, a first image is acquired, wherein the first image is synthesized from at least one preview image; a second image is acquired, wherein the second image has a higher pixel count than the first image; an area in the second image whose brightness satisfies a preset condition is determined as a high-brightness area; and the first image and the second image are fused according to the high-brightness area to generate a target image. On one hand, the first image is synthesized from at least one lower-pixel preview image that renders high-brightness areas well, while the second image has a higher pixel count, higher definition, and richer image detail; fusing the two mitigates overexposure in high-brightness areas while retaining the detail information of the second image, improving image quality. On another hand, fusing the first image and the second image can improve the signal-to-noise ratio of the target image, especially in its medium- and low-brightness areas. On yet another hand, the exemplary embodiment generates the target image by fusing the second image with a composite of preview images, a simple process with low hardware cost and high practicability.
In an exemplary embodiment, the step S230 may include the following steps:
dividing the second image into a plurality of areas, and judging whether the brightness of each area exceeds a preset threshold value;
and determining the area with the brightness exceeding the preset threshold value as a high-brightness area.
The present exemplary embodiment may divide the second image into a plurality of regions of a fixed size or of irregular sizes. It should be noted that, since the second image and the first image capture the same area, to facilitate fusion the first image may be divided at the same positions when the second image is divided, so that regions of the first image correspond one-to-one to regions of the second image.
The preset threshold is the criterion for judging region brightness, and may be a value preset by the system or customized by the user. In an exemplary embodiment, the preset threshold may be set to L = Lmax × 0.95, where Lmax is the upper limit of the luminance value. That is, when a region's brightness exceeds 0.95 times the maximum luminance value, the region may be determined to be a high-brightness region. Note that the maximum luminance value may differ across image formats, display devices, or apparatuses; for example, 1023 is the maximum luminance value of an image in 10-bit RAW format. The disclosure is therefore not limited in this respect.
In the present exemplary embodiment, region luminance may be measured with various parameters: for example, the RGB second image may be converted into HSL (Hue, Saturation, Lightness) and the L parameter extracted as the luminance parameter; or the second image may be converted into a grayscale image and its luminance judged from the gray values, a higher gray value indicating a brighter area.
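A small Python sketch of the tile-based thresholding described above follows; it applies the L = Lmax × 0.95 rule with Lmax = 1023 for 10-bit raw data. The tile size, the choice of the tile mean rather than the maximum, and the function name are assumptions:
```python
import numpy as np

def find_highlight_mask(luma, block=64, lmax=1023, ratio=0.95):
    """Tile the second image's luminance plane and mark tiles whose mean
    exceeds ratio * lmax. Returns a per-pixel boolean highlight mask."""
    h, w = luma.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = luma[y:y + block, x:x + block]
            if tile.mean() > ratio * lmax:          # tile.max() would also fit the text
                mask[y:y + block, x:x + block] = True
    return mask
```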
The present disclosure may further perform image fusion by generating a mask image of the second image, and specifically, in an exemplary embodiment, as shown in fig. 8, the step S240 may include the following steps:
step S810, generating a mask image according to the high-brightness area;
step S820, extracting a first sub-image from the first image by using the mask image;
step S830, extracting a second sub-image from the second image by using a reverse mask image of the mask image;
and step S840, stitching the first sub-image and the second sub-image to generate a target image.
The present exemplary embodiment may use a specific image, graphic, or object to cover the high-brightness region of the second image, representing covered positions by 1 and uncovered positions by 0; on this basis a mask image of the high-brightness region is generated. Multiplying the mask image by the first image extracts the region containing the high-brightness area, i.e., the first sub-image. Since the second image has the same captured content as the first image, the area outside the high-brightness region can be extracted from the second image with the inverse mask image of the mask image. The inverse mask works like the mask with the values flipped: the positions corresponding to the high-brightness area are set to 0 and the other positions to 1, and multiplying the inverse mask image by the second image completes the extraction of the second sub-image. Finally, the first sub-image and the second sub-image are stitched to generate the final target image.
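The mask-based fusion of steps S810 to S840 can be sketched in Python as follows, assuming the first sub-image has already been upsampled to the second image's size (see the next step). The hard 0/1 mask mirrors the description above; a production implementation would likely feather the seam:
```python
import numpy as np

def fuse_by_mask(first_up, second, mask):
    """Stitch per steps S810-S840: highlight pixels come from the upsampled
    first image, everything else from the second image. first_up and second
    are H x W x 3 arrays of equal size; mask is the H x W highlight map."""
    m = mask[..., None].astype(second.dtype)   # mask image: highlight -> 1
    first_sub = first_up * m                   # first sub-image
    second_sub = second * (1 - m)              # inverse mask * second image
    return first_sub + second_sub              # stitched target image
```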
In an exemplary embodiment, before stitching the first sub-image and the second sub-image, the image processing method may further include:
the first sub-image is up-sampled such that the up-sampled first sub-image and the up-sampled second sub-image have the same pixels.
The present exemplary embodiment may perform upsampling on the first sub-image, that is, enlarge it so that it can be displayed at the higher resolution. Specifically, starting from the pixels of the first sub-image, new pixels can be inserted between existing pixels with a suitable interpolation algorithm (such as bilinear interpolation), enlarging the first sub-image; in other words, each pixel in the region of the first image that participates in the fusion is replaced by four pixels. The upsampling may also be done in various other ways, for example nearest-neighbor or cubic interpolation, transposed convolution, or unpooling, which this disclosure does not specifically limit. After the first sub-image has been converted to the same pixel count as the second sub-image, it can be fused with the second image to obtain a target image with better rendition.
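A self-contained Python sketch of the 2× bilinear upsampling follows; bilinear interpolation is just the example given above, and nearest-neighbor, cubic, or learned upsamplers could be substituted:
```python
import numpy as np

def upsample_2x_bilinear(img):
    """Double the resolution of the first sub-image (H x W x 3) so it matches
    the second sub-image's pixel grid: each source pixel becomes four."""
    h, w = img.shape[:2]
    y = np.linspace(0, h - 1, 2 * h)           # source row coordinates
    x = np.linspace(0, w - 1, 2 * w)           # source column coordinates
    y0 = np.floor(y).astype(int)
    x0 = np.floor(x).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    fy = (y - y0)[:, None, None]               # vertical blend weights
    fx = (x - x0)[None, :, None]               # horizontal blend weights
    img = img.astype(np.float32)
    top = img[y0][:, x0] * (1 - fx) + img[y0][:, x1] * fx
    bot = img[y1][:, x0] * (1 - fx) + img[y1][:, x1] * fx
    return np.clip(top * (1 - fy) + bot * fy, 0, 255).astype(np.uint8)
```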
Exemplary embodiments of the present disclosure also provide an image processing apparatus. As shown in fig. 9, the image processing apparatus 900 may include: a first image acquisition module 910, configured to acquire a first image, wherein the first image is synthesized from at least one preview image; a second image acquisition module 920, configured to acquire a second image, wherein the second image has a higher pixel count than the first image; a highlight region determining module 930, configured to determine a region in the second image whose luminance meets a preset condition as a highlight region; and a target image generating module 940, configured to fuse the first image and the second image according to the high-brightness region to generate a target image.
In an exemplary embodiment, the above apparatus is applied to a terminal device including a quad Bayer image sensor, and the first image acquisition module comprises: a preview image acquisition unit, configured to acquire, in a preview state, at least one raw Bayer preview image, based on the quad Bayer color filter array, collected by the quad Bayer image sensor; a first processing unit, configured to perform pixel four-in-one processing and demosaic processing on each raw Bayer preview image to obtain at least one preview image; and an image synthesis unit, configured to synthesize the at least one preview image into a first image.
In an exemplary embodiment, the second image acquisition module comprises: a photographed image acquisition unit, configured to acquire, in a photographing state, a raw Bayer photographed image, based on the quad Bayer color filter array, collected by the quad Bayer image sensor; and a second processing unit, configured to perform remosaic processing and demosaic processing on the raw Bayer photographed image to obtain a second image; wherein the pixel count of the second image is four times that of the first image.
In an exemplary embodiment, the highlight region determination module includes: the area dividing unit is used for dividing the second image into a plurality of areas and judging whether the brightness of each area exceeds a preset threshold value; and the area determining unit is used for determining the area with the brightness exceeding the preset threshold value as a high-brightness area.
In an exemplary embodiment, the preset threshold is L = Lmax × 0.95, where Lmax is the upper limit of the luminance value.
In an exemplary embodiment, the target image generation module includes: a mask image generating unit, configured to generate a mask image from the high-luminance region; a first sub-image extracting unit, configured to extract a first sub-image from the first image using the mask image; a second sub-image extracting unit, configured to extract a second sub-image from the second image using the inverse mask image of the mask image; and an image stitching unit, configured to stitch the first sub-image and the second sub-image to generate a target image.
In an exemplary embodiment, the image processing apparatus further includes: an upsampling module, configured to upsample the first sub-image before the first sub-image and the second sub-image are stitched, so that the upsampled first sub-image has the same pixel count as the second sub-image.
In an exemplary embodiment, the exposure time of the preview image is shorter than the exposure time of the second image.
In an exemplary embodiment, the first image is synthesized from a plurality of preview images having different exposure times, the first image being a high-dynamic-range image synthesized from the plurality of preview images.
The specific details of each module in the above apparatus have been described in detail in the method section, and details that are not disclosed may refer to the method section, and thus are not described again.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit," "module," or "system."
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium on which a program product capable of implementing the above-described method of this specification is stored. In some possible embodiments, aspects of the disclosure may also be implemented in the form of a program product including program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to the various exemplary embodiments of the disclosure described in the "exemplary method" section above, for example any one or more of the steps in fig. 2, fig. 3, or fig. 4.
Furthermore, an exemplary embodiment of the present disclosure provides a program product for implementing the above method, which may employ a portable compact disc read-only memory (CD-ROM) including program code, and may be run on a terminal device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (for example, through the internet using an internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (11)

1. An image processing method, characterized by comprising:
acquiring a first image, wherein the first image is synthesized from at least one preview image, and different preview images used to synthesize the first image have different exposure times;
acquiring a second image, wherein the second image has a higher pixel count than the first image, and the first image and the second image have the same shooting content;
determining an area with brightness meeting a preset condition in the second image as a high-brightness area;
according to the high-brightness area, fusing the first image and the second image to generate a target image;
the determining a region in the second image whose brightness meets a preset condition as a high brightness region includes:
dividing the second image into a plurality of areas, and judging, according to the average brightness of each area, whether the brightness of the area exceeds a preset threshold;
and determining the area with the brightness exceeding the preset threshold value as a high-brightness area.
2. The method according to claim 1, wherein the method is applied to a terminal device comprising a quad Bayer image sensor;
the acquiring a first image, wherein the first image is synthesized by at least one preview image, comprises:
acquiring, in a preview state, at least one raw Bayer preview image based on a quad Bayer color filter array and captured by the quad Bayer image sensor;
performing pixel four-in-one processing and demosaicing processing on each raw Bayer preview image, respectively, to obtain at least one preview image;
and synthesizing the at least one preview image to obtain the first image.
3. The method of claim 2, wherein said acquiring a second image comprises:
acquiring, in a photographing state, a raw Bayer photographed image based on a quad Bayer color filter array and captured by the quad Bayer image sensor;
performing remosaicing processing and demosaicing processing on the raw Bayer photographed image to obtain the second image;
wherein the pixels of the second image are four times as many as the pixels of the first image.
4. The method according to claim 1, wherein the preset threshold is L = Lmax × 0.95, and Lmax is an upper limit of the luminance of the high-brightness region.
5. The method according to claim 1, wherein the fusing the first image and the second image according to the high brightness region to generate a target image comprises:
generating a mask image according to the high-brightness area;
extracting a first sub-image from the first image using the mask image;
extracting a second sub-image from the second image using an inverse mask image of the mask image;
and splicing the first sub-image and the second sub-image to generate the target image.
6. The method of claim 5, wherein prior to stitching the first sub-image and the second sub-image, the method further comprises:
and upsampling the first sub-image, so that the upsampled first sub-image and the second sub-image have the same number of pixels.
7. The method of any one of claims 1 to 6, wherein the exposure time of the preview image is shorter than the exposure time of the second image.
8. The method according to any one of claims 1 to 6, wherein the first image is synthesized from a plurality of preview images, the plurality of preview images have different exposure times, and the first image is a high-dynamic-range image synthesized from the plurality of preview images.
9. An image processing apparatus characterized by comprising:
the device comprises a first image acquisition module, a second image acquisition module and a display module, wherein the first image is synthesized by at least one preview image, and different preview images synthesized into the first image have different exposure time;
the second image acquisition module is used for acquiring a second image, the pixels of the second image are higher than those of the first image, and the shooting contents of the first image and the second image are the same;
the highlight area determining module is used for determining an area, the brightness of which meets a preset condition, in the second image as a highlight area;
the target image generation module is used for fusing the first image and the second image according to the high-brightness area to generate a target image;
the highlight region determination module configured to: dividing the second image into a plurality of areas, and judging whether the brightness of each area exceeds a preset threshold value according to the brightness average value of each area; and determining the area with the brightness exceeding the preset threshold value as a high-brightness area.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 8.
11. A terminal device, comprising:
a processor;
a memory for storing executable instructions of the processor; and
a quad Bayer image sensor for acquiring a raw Bayer image;
wherein the processor is configured to perform the method of any one of claims 1 to 8 via execution of the executable instructions, so as to process the raw Bayer image to obtain a corresponding target image.
CN202010135338.3A 2020-03-02 2020-03-02 Image processing method, image processing apparatus, storage medium, and terminal device Active CN113364964B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010135338.3A CN113364964B (en) 2020-03-02 2020-03-02 Image processing method, image processing apparatus, storage medium, and terminal device


Publications (2)

Publication Number Publication Date
CN113364964A CN113364964A (en) 2021-09-07
CN113364964B (en) 2023-04-07

Family

ID=77523125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010135338.3A Active CN113364964B (en) 2020-03-02 2020-03-02 Image processing method, image processing apparatus, storage medium, and terminal device

Country Status (1)

Country Link
CN (1) CN113364964B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115696063A (en) * 2022-09-13 2023-02-03 荣耀终端有限公司 Photographing method and electronic equipment


Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US20110149111A1 (en) * 2009-12-22 2011-06-23 Prentice Wayne E Creating an image using still and preview
US8885978B2 (en) * 2010-07-05 2014-11-11 Apple Inc. Operating a device to capture high dynamic range images
CN104917950B (en) * 2014-03-10 2018-10-12 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN108989700B (en) * 2018-08-13 2020-05-15 Oppo广东移动通信有限公司 Imaging control method, imaging control device, electronic device, and computer-readable storage medium
CN110619593B (en) * 2019-07-30 2023-07-04 西安电子科技大学 Double-exposure video imaging system based on dynamic scene

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN107566739A (en) * 2017-10-18 2018-01-09 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN108337445A (en) * 2018-03-26 2018-07-27 华为技术有限公司 Photographic method, relevant device and computer storage media

Non-Patent Citations (2)

Title
Xiaodong Tang et al., "A high-dynamic range CMOS camera based on dual-gain channels", Journal of Real-Time Image Processing, 2019, pp. 1-10. *
Xu Qian et al., "High dynamic range image deblurring based on bidirectional optical flow estimation", Chinese Journal of Stereology and Image Analysis, 2019, pp. 342-351. *

Also Published As

Publication number Publication date
CN113364964A (en) 2021-09-07

Similar Documents

Publication Publication Date Title
CN107395898B (en) Shooting method and mobile terminal
CN111179282B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN112954251B (en) Video processing method, video processing device, storage medium and electronic equipment
CN113810601B (en) Terminal image processing method and device and terminal equipment
CN111598776A (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN110602403A (en) Method for taking pictures under dark light and electronic equipment
CN111741303B (en) Deep video processing method and device, storage medium and electronic equipment
CN115526787B (en) Video processing method and device
CN112700368A (en) Image processing method and device and electronic equipment
CN111696039B (en) Image processing method and device, storage medium and electronic equipment
CN111343356A (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN115314617A (en) Image processing system and method, computer readable medium, and electronic device
CN114466134A (en) Method and electronic device for generating HDR image
CN113364964B (en) Image processing method, image processing apparatus, storage medium, and terminal device
US11521305B2 (en) Image processing method and device, mobile terminal, and storage medium
CN113096022B (en) Image blurring processing method and device, storage medium and electronic device
WO2023160295A1 (en) Video processing method and apparatus
CN111626931B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN115529411A (en) Video blurring method and device
CN115706869A (en) Terminal image processing method and device and terminal equipment
CN114793283A (en) Image encoding method, image decoding method, terminal device, and readable storage medium
CN111294905B (en) Image processing method, image processing device, storage medium and electronic apparatus
CN115460343B (en) Image processing method, device and storage medium
CN116095509B (en) Method, device, electronic equipment and storage medium for generating video frame
CN116095512B (en) Photographing method of terminal equipment and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant