CN113347368B - Video acquisition method and device based on exposure control


Info

Publication number
CN113347368B
CN113347368B (application CN202010140270.8A)
Authority
CN
China
Prior art keywords
video
parameters
exposure parameters
video stream
exposure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010140270.8A
Other languages
Chinese (zh)
Other versions
CN113347368A (en)
Inventor
胡彬林
刘俊
唐娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010140270.8A priority Critical patent/CN113347368B/en
Priority to PCT/CN2021/078677 priority patent/WO2021175211A1/en
Publication of CN113347368A publication Critical patent/CN113347368A/en
Application granted granted Critical
Publication of CN113347368B publication Critical patent/CN113347368B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B7/00Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/75Circuitry for compensating brightness variation in the scene by influencing optical camera components
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The application provides a video acquisition method and device based on exposure control, relating to the technical field of image acquisition. In the method, a single camera module alternates automatic exposure and manual exposure for the captured image frames within the same time period, where the manual exposure parameters are set according to the automatic exposure parameters and vary periodically. Because the automatic exposure parameters change slowly with the ambient brightness, the automatically exposed video frames can be used for video display. The manual exposure parameters, by contrast, are freely adjustable and can respond quickly to brightness changes in the scene; they take several values within one period, and a video frame is captured for each value, so that a well-exposed frame can be found for every region of a scene whose brightness changes gradually. Using the manually exposed video frames for video analysis therefore improves the accuracy of the video analysis.

Description

Video acquisition method and device based on exposure control
Technical Field
The application relates to the technical field of image acquisition, in particular to a video acquisition method and a video acquisition device based on exposure control.
Background
With the rapid development of science and technology in recent years, intelligent security has gradually become a key direction for the transformation and upgrading of the security industry. In a security system, the surveillance camera, as the front-most image acquisition device, plays a critical role.
When today's surveillance cameras work at night, the light is weak and it is difficult to capture high-quality color images, so supplementary lighting is used to raise the brightness of the captured images; however, an external fill light cannot illuminate the scene as uniformly as sunlight. In such a case, when shooting with the automatic exposure method, the exposure parameters are calculated from the brightness of the entire scene, so parts of the scene inevitably become overexposed or underexposed, which reduces the accuracy of video analysis (also called intelligent video analysis or video content analysis, e.g. the detection, identification and/or analysis of objects in the video). When shooting with the manual exposure method instead, the exposure parameters do not change continuously and slowly, so such video frames cannot be used for video display in the way automatically exposed frames can.
Therefore, how to acquire a video stream through exposure control so that it both meets the viewing requirements of users and improves the accuracy of video analysis is a problem that urgently needs to be solved.
Disclosure of Invention
The application provides a video acquisition method and a video acquisition device based on exposure control that alternate automatic exposure and manual exposure during video frame acquisition, so that the resulting video stream meets the viewing requirements of users while ensuring that properly exposed video frames can be found in the subsequent video analysis.
A first aspect of the application provides a video acquisition method based on exposure control. The method comprises: determining automatic exposure parameters of a camera module according to environmental data, and then generating manual exposure parameters from the automatic exposure parameters. The camera module acquires video frames based on the automatic exposure parameters to obtain a first video stream, and acquires video frames based on the manual exposure parameters to obtain a second video stream. The video frames forming the first video stream and the video frames forming the second video stream are acquired by the camera module alternately within the same time period. Illustratively, the first video stream consists of the odd frames of all video frames captured by the camera module and the second video stream consists of the even frames, or vice versa. Note that, besides the odd/even split in which the streams alternate every frame, the alternation may also skip two or even three frames. For example, the first video stream may consist of frames 1, 2, 4, 5, 7 and 8 of all captured video frames (taking the first nine frames as an example) and the second video stream of frames 3, 6 and 9, i.e. the frames of the second video stream are two frames apart, and so on. With this video acquisition method based on exposure control, two video streams exposed by different methods are obtained. Because the automatic exposure parameters change slowly, the frames acquired with automatic exposure can be used for video display; because the manual exposure parameters are freely adjustable, the frames acquired with manual exposure can respond promptly to brightness changes in the scene being shot, which improves the quality of the acquired video.
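For illustration only, the following Python sketch shows one way the alternating capture described above could be demultiplexed into the two streams; the function name, the `manual_every` parameter and the 1-based frame indexing are our own assumptions, not part of the claimed method.

```python
def split_streams(frames, manual_every=2):
    """Route captured frames into an auto-exposure stream and a manual-exposure
    stream. With manual_every=2 the split is odd/even; with manual_every=3
    every third frame goes to the manual-exposure stream."""
    first_stream, second_stream = [], []
    for idx, frame in enumerate(frames, start=1):
        if idx % manual_every == 0:
            second_stream.append(frame)   # manual-exposure frames
        else:
            first_stream.append(frame)    # auto-exposure frames
    return first_stream, second_stream

# Example: nine frames, manual exposure every third frame.
frames = [f"frame{i}" for i in range(1, 10)]
display_stream, analysis_stream = split_streams(frames, manual_every=3)
# display_stream  -> frame1, frame2, frame4, frame5, frame7, frame8
# analysis_stream -> frame3, frame6, frame9
```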
In another possible implementation, the automatic exposure parameters include the shutter time, the aperture size and the gain, and obtaining the manual exposure parameters from the automatic exposure parameters comprises: for each parameter in a first part of the automatic exposure parameters, generating a group of sequences from that parameter and expanding it periodically to obtain a parameter expansion sequence, which is used as the corresponding parameter in the first part of the manual exposure parameters; and keeping a second part of the automatic exposure parameters unchanged as the second part of the manual exposure parameters. In this video acquisition method based on exposure control, a sequence of exposure parameter values is generated for each parameter in the first part of the automatic exposure parameters, and each value in the sequence is a suitable exposure value for some region of the scene to be shot. Under this exposure mode, a video frame is captured for each value of the manual exposure parameters, so that for every region at least one manually exposed frame with proper exposure of that region is obtained, which improves the adaptability of the video acquisition device to regions of uneven brightness in the scene. Optionally, in a specific implementation, the first part of the parameters includes the shutter time and/or the gain of the automatic exposure parameters, and the second part includes the aperture size.
In another possible implementation, the first part of the automatic exposure parameters includes the shutter time. Generating a group of sequences from each parameter of the first part and expanding it periodically to obtain the parameter expansion sequence, used as the corresponding parameter in the first part of the manual exposure parameters, comprises: obtaining the shutter time from the automatic exposure parameters, generating a shutter time sequence from it, and then cyclically repeating the shutter time sequence to obtain a shutter time expansion sequence used as the shutter time of the manual exposure parameters. In this implementation, two of the manual exposure parameters, the aperture size and the gain, remain consistent with the corresponding automatic exposure parameters, while the shutter time of the manual exposure parameters is expanded periodically from the shutter time contained in the automatic exposure parameters. As a result, when capturing video frames based on the manual exposure parameters, the video acquisition device can adapt to a multi-level scene with uneven brightness, not merely a scene with an obvious bright/dark contrast. Optionally, the shutter time sequence contained in the manual exposure parameters consists of at least three shutter time values. For example, if the shutter time of the automatic exposure parameters is 1/500 second, then, based on the light distribution of the scene to be shot, the shutter time sequence contained in the manual exposure parameters may be [1/500 s, 1/250 s, 1/125 s, 1/250 s, 1/500 s], and the shutter time expansion sequence obtained by periodic expansion is [1/500 s, 1/250 s, 1/125 s, 1/250 s, 1/500 s, 1/500 s, 1/250 s, 1/125 s, …].
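As a hedged illustration of the periodic expansion described above (a sketch, not an implementation prescribed by the application), the following Python snippet repeats the finite shutter-time sequence cyclically:

```python
from itertools import cycle, islice

# Shutter-time sequence (in seconds) derived from the auto-exposure shutter
# time, taken from the example in the text.
shutter_sequence = [1/500, 1/250, 1/125, 1/250, 1/500]

# First ten values of the periodically expanded sequence that would drive
# the manual-exposure frames.
expanded = list(islice(cycle(shutter_sequence), 10))
# -> [1/500, 1/250, 1/125, 1/250, 1/500, 1/500, 1/250, 1/125, 1/250, 1/500]
```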
In another possible implementation, the first part of the automatic exposure parameters includes the shutter time and the gain. For each parameter of the first part, generating a group of sequences from the parameter and expanding it periodically to obtain the parameter expansion sequence, used as the corresponding parameter in the first part of the manual exposure parameters, further comprises: obtaining the gain from the automatic exposure parameters, generating a gain sequence from it, and cyclically repeating the gain sequence to obtain a gain expansion sequence used as the gain of the manual exposure parameters. Linking both the shutter time and the gain of the manual exposure to the automatic exposure improves the camera's adaptability to scenes with gradually changing brightness and also improves the signal-to-noise ratio of the second video stream, and hence the image quality. Similarly to the example in the previous implementation, the gain of the manual exposure parameters can be expanded into a gain sequence derived from the gain of the automatic exposure parameters and then expanded periodically.
In another possible implementation, the shutter time and/or the gain in the manual exposure parameters change periodically, and one period contains both a rising and a falling phase. This variation pattern of the manual exposure parameters matches the brightness distribution of the scene shot by the video acquisition device.
In another possible implementation, the second part of the automatic exposure parameters includes the aperture size. To keep the camera stable and avoid affecting the video stream of the automatic exposure branch, the aperture size of the manual exposure parameters should remain consistent with that of the automatic exposure.
In another possible implementation, the first video stream is used for video display and the second video stream is used for video analysis. The first video stream uses automatic exposure, whose parameters may only change slowly to suit the visual perception of the human eye, so it can be used for video display. The second video stream uses manual exposure; to give every bright or dark region of the scene a well-exposed frame, the manual exposure parameters must change substantially, which makes the stream unsuitable for viewing but well suited to an intelligent analysis module that identifies and analyzes the bright and dark regions of the scene.
In another possible implementation manner, the video frames constituting the first video stream and the video frames constituting the second video stream are subjected to a fusion process to obtain a third video stream. That is, the third video stream contains video frames obtained by both the automatic exposure and the manual exposure.
In another possible implementation, the third video stream is used for video analysis. Compared with analyzing only the manually exposed video stream, this provides more video frames for the analysis and greatly improves the accuracy of recognition and analysis.
In a second aspect, the present application provides a video capture device comprising a camera module, a first image signal processing module and a second image signal processing module. The first image signal processing module is configured to determine the automatic exposure parameters of the camera module according to environmental data; the second image signal processing module is configured to obtain the manual exposure parameters from the automatic exposure parameters; the camera module comprises a lens and an image sensor and is configured to acquire video frames to form a first video stream based on the automatic exposure parameters, and to acquire video frames to form a second video stream based on the manual exposure parameters, where the video frames forming the first video stream and the video frames forming the second video stream are acquired by the camera module alternately.
In another possible implementation, the automatic exposure parameters include the shutter time, the aperture size and the gain, and the second image signal processing module is configured to: for each parameter in a first part of the automatic exposure parameters, generate a group of sequences from that parameter and expand it periodically to obtain a parameter expansion sequence used as the corresponding parameter in the first part of the manual exposure parameters; and keep a second part of the automatic exposure parameters unchanged as the second part of the manual exposure parameters.
In another possible implementation, the first part of the automatic exposure parameters includes the shutter time, and the second image signal processing module is configured to: obtain the shutter time from the automatic exposure parameters; generate a shutter time sequence from it; and cyclically repeat the shutter time sequence to obtain a shutter time expansion sequence used as the shutter time of the manual exposure parameters.
In another possible implementation, the first part of the automatic exposure parameters further includes the gain, and the second image signal processing module is further configured to: obtain the gain from the automatic exposure parameters; generate a gain sequence from it; and cyclically repeat the gain sequence to obtain a gain expansion sequence used as the gain of the manual exposure parameters.
In another possible implementation, the shutter time and the gain in the manual exposure parameters change periodically, with one period containing both a rising and a falling phase.
In another possible implementation, the second part of the automatic exposure parameters includes the aperture size.
In another possible implementation, the first video stream is used for video display and the second video stream is used for video analysis.
In another possible implementation manner, the apparatus further includes a fusion module, configured to perform fusion processing on video frames that form the first video stream and video frames that form the second video stream, to obtain a third video stream.
In another possible implementation, the third video stream is used for video analysis.
The technical effects that can be achieved by the video capture device and the possible implementation manner provided in the second aspect are the same as those that can be achieved by the video capture method based on exposure control and the possible implementation manner according to the first aspect, and are not described herein again.
In a third aspect, the present application provides a video surveillance system, comprising the video capture device of the second aspect; a video display module for displaying a first video stream; and the video analysis module is used for analyzing the second and/or third video stream.
In a fourth aspect, the present application provides a computer program product comprising computer readable code instructions which, when executed by a computer, enable the computer to configure a video acquisition apparatus such that the apparatus is capable of performing the method of the first aspect as well as any one of its various possible implementations.
In a fifth aspect, the present application provides a non-transitory computer-readable storage medium containing computer program code instructions which, when executed by a computer, enable the computer to configure a video acquisition device so that the device can perform the method of the first aspect and any one of its possible implementations. The non-transitory computer-readable storage medium includes one or more of read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), flash memory, electrically erasable PROM (EEPROM) and a hard drive.
Drawings
Fig. 1 is a schematic view of an application architecture of a video capture method based on exposure control according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a scene with uneven brightness distribution according to an embodiment of the present application.
Fig. 3 is a schematic diagram of shutter times required for different luminance regions in a scene according to an embodiment of the present application.
Fig. 4 is a schematic flow chart of an automatic exposure method according to an embodiment of the present application.
Fig. 5 is a flowchart illustrating steps of a video capture method based on exposure control according to an embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of a video capture method based on exposure control according to an embodiment of the present application.
Fig. 7 is a schematic flowchart of generating manual exposure parameters according to automatic exposure parameters according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a shutter cycle period according to an embodiment of the present application.
Fig. 9 is a schematic diagram of a parity frame fusion structure provided in the embodiment of the present application.
Fig. 10 is a schematic structural diagram of a video capture device according to an embodiment of the present application.
Detailed Description
For ease of understanding, technical terms related to the embodiments of the present application will be first described.
1) Exposure parameters: the exposure parameters at the time of image capturing determine the brightness of the image. The exposure parameters include one or more of shutter time, gain size, or aperture size.
The shutter time (ST), also called exposure time, controls how long light is admitted. The longer the shutter time, the more light enters and the brighter the captured image; the shorter the shutter time, the less light enters and the darker the captured image.
The gain refers to amplifying or attenuating the electrical or digital signal obtained after photoelectric conversion. Increasing the gain makes the captured image brighter than the actual scene; decreasing it makes the image darker.
The aperture (diaphragm) controls how much light enters the camera body through the lens. Enlarging the aperture admits more light and makes the captured image brighter; reducing it admits less light and makes the image darker.
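Purely as an illustration of how these three parameters travel together in the rest of the description (the field names and units below are our own assumptions, not notation from the patent), a minimal container might look like this:

```python
from dataclasses import dataclass

@dataclass
class ExposureParams:
    shutter_time_s: float   # longer shutter -> more light -> brighter image
    gain_db: float          # higher gain -> brighter (and noisier) image
    aperture: float         # larger aperture -> more light -> brighter image

# Example set of auto-exposure values (illustrative only).
auto_params = ExposureParams(shutter_time_s=1/500, gain_db=6.0, aperture=2.8)
```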
2) Automatic exposure (AE): the camera automatically adjusts the exposure parameters (shutter time, gain and aperture) through an algorithm based on ambient brightness data, so that the subject is exposed at a normal brightness.
3) Manual exposure (ME): an exposure method in which the exposure parameters of the camera (shutter time, gain and aperture) are set manually.
4) Image signal processor (ISP): its main function is to process the signal output by the front-end image sensor; typical functions include white balance adjustment and automatic exposure control.
A schematic system application architecture of the exposure control method provided in the present application will be described below with reference to fig. 1. As shown in fig. 1, the camera system 12 includes a camera module 13, an Image Signal Processor (ISP) 14, and an encoder 15.
The camera module 13 includes a lens module 131, an image sensor 132, and a gain control circuit 133 integrated on the image sensor. The lens module 131 includes a stop in front of the lens and an optical lens assembly, and is mainly used to collect light from the subject 11. The optical lens assembly is generally composed of one or more pieces of optical glass (or plastic), for example a concave lens, a convex lens, an M-type lens, or a combination of such lenses. The image sensor 132 may be a charge-coupled device (CCD) image sensor, a complementary metal-oxide-semiconductor (CMOS) image sensor, a contact image sensor (CIS), or the like. The image sensor receives the optical signal passed on by the lens module and performs photoelectric conversion, turning it into an electrical signal. The gain control circuit 133 is typically integrated into the image sensor and mainly ensures that a standard video signal can be output for images taken under different scene illumination conditions.
The image signal processor (ISP) 14 is a special-purpose digital signal processor (DSP) whose main function is to post-process the signal output by the front-end image sensor. Different ISPs are used to match image sensors from different vendors. The quality of the ISP matters greatly in the overall camera product, since it directly affects the image quality presented to the user. The ISP is connected to the camera module by a dedicated circuit and can control the camera module 13 to use different capture parameters, i.e. it implements what is commonly called 2A control (automatic white balance/automatic exposure) or 3A control (automatic white balance/automatic exposure/automatic focus).
The encoder 15 compresses and encodes the signal data into a standard format to facilitate transmission of the video signal.
In video surveillance applications such as warehouse management, unattended operation and safe-city projects, images and videos often have to be captured in low-illumination scenes, and with insufficient ambient light it is difficult for a camera to obtain high-quality color images. In such cases the imaging quality of the camera is usually improved with supplementary lighting. However, the light from a fill light differs from natural light: it is non-parallel, so its illumination is not as uniform as daylight. As shown in fig. 2, the light is sufficient in the central part of the fill-light region but weak at its edge. To obtain a properly exposed image, as shown in fig. 3, the innermost ring needs the shortest exposure time because it receives the most fill light; call this ST1. The next ring receives slightly less light and needs exposure time ST2; by analogy, the remaining two ring regions need exposure times ST3 and ST4, with ST1 < ST2 < ST3 < ST4.
With the existing automatic exposure technique, the same set of exposure parameters is applied globally, so faces are often overexposed or underexposed, giving poor face imaging quality that is difficult to recognize. The overall flow of the existing automatic exposure technique is shown in fig. 4:
In step S41, the camera module collects the ambient light, usually in combination with filters to improve the imaging quality.
In step S42, the image signal processor (ISP) calculates the image luminance and outputs a statistic M. Common luminance-calculation methods in automatic exposure include the average-luminance method, the weighted-average method and the luminance histogram, of which the average-luminance method is the most common. The average-luminance method averages the luminance of all pixels of the image and reaches the target luminance by continuously adjusting the exposure parameters. The weighted-average method assigns different weights to different regions of the image when computing its luminance. The luminance-histogram method computes the image luminance by assigning different weights to the peaks of the histogram.
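A minimal sketch of the two simplest statistics mentioned above, assuming a grayscale frame as a 2-D NumPy array and a user-supplied weight map of the same shape; this is illustrative only, not the ISP's actual implementation:

```python
import numpy as np

def average_luminance(image: np.ndarray) -> float:
    """Average-luminance method: mean over all pixels."""
    return float(image.mean())

def weighted_luminance(image: np.ndarray, weights: np.ndarray) -> float:
    """Weighted-average method: per-region weights emphasize e.g. the center."""
    return float((image * weights).sum() / weights.sum())
```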
In step S43, the absolute value of the difference between the luminance statistic M and a preset target luminance is calculated and compared with a threshold. If the absolute value is smaller than the threshold, step S44 is performed; otherwise, step S45 is performed.
In step S44, the exposure parameters are output and applied to the camera module to capture images.
In step S45, the exposure parameters are recalculated and the new parameters are applied to the camera module; steps S41-S43 are repeated until the absolute difference between the statistic M and the preset target luminance is smaller than the threshold, and the final exposure parameters are output.
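The loop of steps S41-S45 can be sketched as follows; `capture` and the proportional shutter update are placeholders chosen for illustration, since a real AE controller also adjusts gain and aperture with separate strategies and step sizes:

```python
import numpy as np

def auto_exposure_loop(capture, shutter_s=1/100, target=118.0,
                       threshold=4.0, max_iters=50):
    for _ in range(max_iters):
        frame = capture(shutter_s)              # S41: grab a frame
        m = float(np.mean(frame))               # S42: luminance statistic M
        if abs(m - target) < threshold:         # S43: |M - target| < threshold?
            break                               # S44: keep these parameters
        # S45: crude proportional update of the shutter time, then retry.
        shutter_s *= target / max(m, 1e-6)
    return shutter_s
```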
When this automatic exposure method is used to shoot objects or faces, only one set of exposure parameters is used globally, which makes it hard to adapt to scenes with uneven ambient brightness. In addition, to keep the picture brightness changing smoothly, the exposure parameters of the automatic exposure method are adjusted slowly; when a pedestrian walks from a dark area into the center of the fill-light region, the face quickly goes from underexposed to overexposed, so it is difficult to obtain an image in a normally exposed state and the accuracy of face recognition drops sharply.
Therefore, to solve the above problem, embodiments of the present application provide an exposure method with periodically varying exposure parameters, which improves the imaging quality of faces in regions of different brightness within the picture and thus improves the face recognition rate. The same camera module alternates automatic exposure and manual exposure within the same time period, where the manual exposure parameters are set according to the automatic exposure parameters and vary periodically; the video stream obtained with the automatic exposure method is then used for video display and the video stream obtained with the manual exposure method is used for object recognition/analysis. By alternating manual and automatic exposure during image capture, the method responds quickly to changes in scene brightness and improves the imaging quality of regions of different brightness within the picture. Moreover, because the periodically varying manual exposure parameters take more than two kinds of values, the camera can adapt to multi-level scenes with unevenly distributed brightness rather than only scenes with an obvious bright/dark contrast, which greatly improves its adaptability to scenes with uneven brightness.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Fig. 5 is an overall solution flow of the present invention, and in conjunction with the schematic structure of fig. 6, the method of the present invention includes the following basic steps:
step S51: a first image signal processing device (ISP 1) acquires brightness data of a current shooting scene and generates automatic exposure parameters according to the brightness data.
As a specific implementation of step S51, the brightness data of the current shooting scene is collected by the camera through the lens module and measured by the image sensor.
Step S52: the second image signal processing device (ISP 2) generates manual exposure parameters according to the automatic exposure parameters generated by the ISP 1. ISP2 may directly retrieve the values of the automatic exposure parameters from ISP1 and calculate the corresponding manual exposure parameters accordingly.
Step S53: ISP1 applies the automatic exposure parameters to the camera module and obtains the odd-frame video stream (the first video stream) from all video frames captured by the camera. That is, when the camera module shoots the odd frames, the exposure parameters (shutter time, aperture size and gain) take the automatic exposure values calculated by ISP1.
Step S54: ISP2 applies the manual exposure parameters to the camera module and obtains the even-frame video stream (the second video stream) from all video frames captured by the camera. That is, when the camera module shoots the even frames, the exposure parameters (shutter time, aperture size and gain) are switched to the manual exposure values calculated by ISP2.
Step S55: the odd-frame video stream is used for video display, and the even-frame video stream is transmitted to a target detection and recognition module for target analysis.
Because the odd-frame video stream uses the automatic exposure method, whose parameters change slowly and continuously, it can be transmitted to the back end for video display. The manual exposure parameters used for the even-frame video stream do not change continuously and human eyes cannot adapt to them, so that stream is mainly transmitted to an analysis module for face recognition, target detection and the like.
It should be noted that "odd" and "even" frames serve only to distinguish the two video streams; in practice the odd frames could be acquired with the manual exposure parameters and the even frames with the automatic exposure parameters, or the first two of every three frames could be acquired with the automatic exposure parameters and the third with the manual exposure parameters. This is not specifically limited in this application. For convenience of describing the technical scheme, in all of the following embodiments the odd frames of all video frames captured by the camera are shot with the automatic exposure parameters and the even frames with the manual exposure parameters.
Illustratively, building on the above embodiment, in step S52 ISP2 generates the manual exposure parameters from the automatic exposure parameters produced by ISP1 as follows: for each parameter in a first part of the automatic exposure parameters, a group of sequences is generated from that parameter and expanded periodically to obtain a parameter expansion sequence, which is used as the corresponding parameter in the first part of the manual exposure parameters; a second part of the automatic exposure parameters is kept unchanged as the second part of the manual exposure parameters. Two schemes are possible: in one, the first part comprises the shutter time and the second part comprises the aperture size and the gain; in the other, the first part comprises the shutter time and the gain and the second part comprises the aperture size. Taking the first scheme as an example, as shown in fig. 7, step S52 specifically comprises:
step S71: the ISP2 acquires the automatic exposure parameters acquired by the current ISP 1: shutter time ST AE Aperture size, gain size. ISP1 generates automatic exposure parameters according to the environment brightness data, and ISP2 can directly obtain the shutter time ST from ISP1 AE
Step S72: ISP2 generates a shutter sequence that rises and falls: [K1*ST_AE, K2*ST_AE, K3*ST_AE, …, KN*ST_AE], where K1, K2, …, KN are values set manually by the user according to the situation, for example 1, 3, 7, 10. The generated shutter sequence is then [ST_AE, 3*ST_AE, 7*ST_AE, 10*ST_AE].
Step S73: a shutter period that rises and falls is generated automatically from the shutter sequence. As shown in fig. 8, the even-frame shutter time varies periodically as 1, 3, 7, 10, 7, 3 and 1 times the automatic exposure shutter time ST_AE. Note that the target analysis module sometimes cannot receive every data frame because of limited computing power and only processes every other frame; in that case the parameters of every two consecutive even frames need to be kept identical, i.e. each multiplier in the cycle is applied to two consecutive even frames (1, 1, 3, 3, 7, 7, 10, 10, 7, 7, 3, 3, 1, 1 times ST_AE).
Step S74: the generated shutter period is applied cyclically to the camera module to acquire the even-frame video stream. The aperture size and gain of the manual exposure parameters are kept consistent with those of the automatic exposure.
Here, steps S72 and S73 can be combined into one step: a shutter period can be generated directly when the shutter sequence is generated, for example [ST_AE, 3*ST_AE, 7*ST_AE, 10*ST_AE, 7*ST_AE, 3*ST_AE, ST_AE], and step S74 is then executed to apply this shutter sequence cyclically to the camera module.
It should be noted that the steps above only link the shutter time of the manual exposure parameters to the shutter time of the automatic exposure parameters; to obtain better-exposed images, the gain can be linked in the same way. Illustratively, the gain of the manual exposure may vary periodically by -2 dB, -4 dB, -6 dB, -8 dB, -6 dB, -4 dB, -2 dB relative to the gain of the automatic exposure: assuming the automatic exposure gain is A, the gain sequence is [A-2, A-4, A-6, A-8, A-6, A-4, A-2], and this expansion sequence is applied cyclically to the camera module. Further, the parameters within a period do not necessarily have to be completely symmetrical; this depends on the situation. Furthermore, because the manual exposure parameters are generated from the automatic exposure parameters, whenever the automatic exposure parameters change, the current cycle of the manual exposure parameter sequence is stopped immediately, the sequence is recalculated from the new automatic exposure parameters, and the new sequence is applied cyclically to the camera module to acquire the even-frame data. Through this alternation of manual and automatic exposure, the technical scheme responds quickly to the brightness change of an object moving through the scene and improves the camera's adaptability to scenes containing multiple brightness levels.
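The derivation of one rising-and-falling manual-exposure period from the current automatic exposure values might be sketched as follows; the multipliers, gain offsets and function name are example choices in the spirit of the text, not values fixed by the application:

```python
def manual_exposure_cycle(st_ae_s, gain_ae_db,
                          shutter_multipliers=(1, 3, 7, 10),
                          gain_offsets_db=(-2, -4, -6, -8)):
    """Return one rising-and-falling period of (shutter time, gain) pairs."""
    ks = list(shutter_multipliers) + list(shutter_multipliers[-2::-1])
    gs = list(gain_offsets_db) + list(gain_offsets_db[-2::-1])
    return [(k * st_ae_s, gain_ae_db + g) for k, g in zip(ks, gs)]

cycle_params = manual_exposure_cycle(st_ae_s=0.005, gain_ae_db=12.0)
# Shutter times: 5, 15, 35, 50, 35, 15, 5 ms; gains: 10, 8, 6, 4, 6, 8, 10 dB.
# If the auto-exposure values change, the current cycle is abandoned and
# manual_exposure_cycle() is called again with the new ST_AE and gain.
```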
To aid understanding, the technical scheme is applied below to a mixed pedestrian-vehicle traffic scene, and the implementation details are described. Under the illumination of the current scene shot by the camera, the license plate requires a shutter time of 5 ms, a light-colored car body 5-10 ms, a dark-colored car body 10-20 ms, a nearby face 5-10 ms and a distant face 10-20 ms. As before, the odd frames use automatic exposure, the even frames use manual exposure, and only the shutter time is linked; the specific steps are as follows:
Step 1: to ensure that the license plate and the nearby face are not overexposed, assume the shutter time of the odd-frame automatic exposure path is ST_AE = 5 ms;
Step 2: ISP2 obtains the aperture, gain and shutter time ST_AE of the automatic exposure path. The aperture and gain of the manual exposure are kept consistent with those of the automatic exposure, and the shutter time sequence can be designed as [K1*ST_AE, K2*ST_AE, K3*ST_AE, K4*ST_AE], where K1, K2, K3 and K4 are 1, 2, 3 and 4 respectively;
Step 3: from the shutter sequence, a cyclic shutter period for the even frames is generated. To satisfy the exposure requirements of the license plate, the car bodies and the near and far faces at the same time, the even-frame shutter time can cycle through the rising-and-falling pattern ST_AE, 2*ST_AE, 3*ST_AE, 4*ST_AE, 3*ST_AE, 2*ST_AE, ST_AE;
Step 4: if the shutter time of the odd frames has changed because the ambient light changed, say ST_AE is updated to 4 ms, the even-frame shutter times of the next period are updated synchronously, i.e. they are recalculated according to Step 2 and Step 3. ISP2 then applies the new manual exposure parameters to the camera module and reacquires the even-frame data.
The method provided by the invention can accommodate scenes with multiple brightness levels rather than only scenes with an obvious bright/dark contrast, which greatly increases the probability of obtaining a properly exposed image and noticeably improves the accuracy of target analysis such as face analysis or license plate recognition, helping to safeguard social stability.
In another embodiment, to further improve the imaging effect of manual exposure, a parity-frame fusion module may be added before ISP2. As shown in fig. 9, the odd-frame video stream obtained with the automatic exposure parameters and the even-frame video stream obtained with the manual exposure parameters are fused and then transmitted to ISP2 for subsequent target analysis, further improving the accuracy of the analysis.
Assume the depth of the images captured by the camera is 8 bits, so each pixel value ranges from 0 to 255. (Image depth is the number of bits used to store each pixel and measures the color resolution of the image: it determines how many colors each pixel of a color image can take, or how many gray levels each pixel of a grayscale image can take, and thus the maximum number of colors in a color image or the maximum gray level in a grayscale image.) Illustratively, thresholds are applied to the pixel values of the odd frame: for pixels whose value is below 50 the odd-frame data is used, for pixels above 150 the even-frame data is used, and for pixels between 50 and 150 a weighted average of the parity frames is used. Each pair of parity frames is fused in shooting order and then transmitted to ISP2 for subsequent video analysis. Note that this fusion scheme is only an illustrative example, and the fusion mode must be adjusted when another acquisition pattern is used. For instance, when the alternation captures the first two of every three frames with automatic exposure and the third with manual exposure, the frames are grouped in threes in shooting order, the first (automatically exposed) frame in each group serves as the reference image, and the other two frames are fused into it in the same way. Fusing the groups in sequence then yields the fused video stream. In short, the fusion method must match the alternation pattern, and the specific fusion rule can be determined by the user's experience or the actual situation.
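The fusion rule in the example above could be sketched for 8-bit frames as follows; the 50/150 thresholds come from the example, while the equal-weight blend, the function name and the NumPy formulation are our own assumptions:

```python
import numpy as np

def fuse_parity_frames(odd_frame: np.ndarray, even_frame: np.ndarray,
                       low=50, high=150) -> np.ndarray:
    """Dark pixels (judged on the odd frame) keep the odd-frame data, bright
    pixels take the even-frame data, and mid-range pixels are blended."""
    odd = odd_frame.astype(np.float32)
    even = even_frame.astype(np.float32)
    blended = 0.5 * odd + 0.5 * even
    fused = np.where(odd < low, odd, np.where(odd > high, even, blended))
    return np.clip(fused, 0, 255).astype(np.uint8)
```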
The video acquisition method based on exposure control splits the video stream between manual exposure and automatic exposure, which satisfies the user's need to watch video in real time while overcoming the slow response of existing exposure methods to changes in scene brightness. Moreover, because the manual exposure parameters cover a wider, layered range, the method adapts to multi-level scenes whose brightness changes gradually and ensures that each region has a well-exposed frame, greatly improving the accuracy of video analysis.
The video capture method based on exposure control provided in the embodiment of the present application is described in detail above with reference to fig. 1 to 9, and the video capture apparatus of the present application will be described below with reference to fig. 10.
Fig. 10 is a video capturing apparatus according to an embodiment of the present disclosure, where the apparatus 100 may include a camera module 101, a first image signal processing module 102, and a second image signal processing module 103.
The first image signal processing module 102 is configured to determine an automatic exposure parameter of the camera module according to the environmental data.
And the second image signal processing module 103 is configured to obtain a manual exposure parameter according to the automatic exposure parameter.
The camera module 101 comprises a lens 1011 and an image sensor 1012, and is configured to acquire video frames to form a first video stream based on the automatic exposure parameters, and to acquire video frames to form a second video stream based on the manual exposure parameters.
The video frames forming the first video stream and the video frames forming the second video stream are acquired by the camera module alternately. Optionally, the automatic exposure parameters include the shutter time, the aperture size and the gain, and the second image signal processing module 103 is specifically configured to: for each parameter in a first part of the automatic exposure parameters, generate a group of sequences from that parameter and expand it periodically to obtain a parameter expansion sequence used as the corresponding parameter in the first part of the manual exposure parameters; and keep a second part of the automatic exposure parameters unchanged as the second part of the manual exposure parameters.
Optionally, the first part of the automatic exposure parameters includes the shutter time, and the second image signal processing module 103 is specifically configured to: obtain the shutter time from the automatic exposure parameters; generate a shutter time sequence from it; and cyclically repeat the shutter time sequence to obtain a shutter time expansion sequence used as the shutter time of the manual exposure parameters.
Optionally, the first part of the automatic exposure parameters further includes the gain, and the second image signal processing module 103 is further configured to: obtain the gain from the automatic exposure parameters; generate a gain sequence from it; and cyclically repeat the gain sequence to obtain a gain expansion sequence used as the gain of the manual exposure parameters.
Optionally, the shutter time in the manual exposure parameter and the gain magnitude in the manual exposure parameter are changed periodically, and include a rising and falling change in one period.
Optionally, the second part of the parameters of the automatic exposure parameters includes: aperture size.
Optionally, the first video stream is used for video display, and the second video stream is used for detection, identification and/or analysis of video objects.
Optionally, the apparatus further includes a fusion module, configured to perform fusion processing on the video frames constituting the first video stream and the video frames constituting the second video stream to obtain a third video stream.
As another possible embodiment, the present application further provides a video monitoring system, including: the video acquisition device provided by the device embodiment above, which acquires the first video stream with the automatic exposure method and the second video stream with the manual exposure method, part of the manual exposure parameters being linked to the automatic exposure parameters; a video display module for displaying the first video stream; and a video analysis module for analyzing the second video stream and/or the third video stream.
It should be noted that the division of the above apparatus into modules is only a logical division; in an actual implementation the modules may be wholly or partially integrated into one physical entity or physically separated. The modules may be realized as software invoked by a processing element, entirely in hardware, or with some modules realized as software invoked by the processing element and others in hardware.
The present invention also provides a computer program product embodiment comprising computer-readable code instructions which, when executed by a computer, enable the computer to configure a video acquisition device so that the device can perform the method of the method embodiment in any of its possible implementations.
The present invention also provides a non-transitory computer-readable storage medium embodiment containing computer program code instructions which, when executed by a computer, enable the computer to configure a video acquisition device so that the device can perform the method in any of its possible implementations. The non-transitory computer-readable storage medium includes one or more of read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), flash memory, electrically erasable PROM (EEPROM) and a hard drive.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention and not for limiting the same; although the present invention has been described in detail with reference to the foregoing examples, it should be noted that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (13)

1. A video acquisition method based on exposure control is characterized by comprising the following steps:
determining automatic exposure parameters of the camera module according to the environmental data;
obtaining manual exposure parameters according to the automatic exposure parameters, comprising: generating a group of sequences according to the shutter time in a first part of parameters of the automatic exposure parameters and expanding it periodically to obtain a parameter expansion sequence, and generating, according to the parameter expansion sequence, a group of shutter sequences that rise and fall: [K1*ST_AE, K2*ST_AE, K3*ST_AE, …, KN*ST_AE], wherein K1, K2, …, KN are values manually set by a user according to the situation and ST_AE is the shutter time in the automatic exposure parameters; using the shutter sequence as the corresponding parameter in the first part of parameters of the manual exposure parameters, wherein the shutter time in the manual exposure parameters changes periodically and includes rising and falling changes within one period; and keeping a second part of parameters of the automatic exposure parameters unchanged as the second part of parameters of the manual exposure parameters;
based on the automatic exposure parameters, the video frames acquired by the camera module form a first video stream;
based on the manual exposure parameters, the video frames acquired by the camera module form a second video stream;
and the video frames forming the first video stream and the video frames forming the second video stream are acquired by the camera module alternately.
2. The method of claim 1, wherein using the shutter sequence as the corresponding parameter in the first part of parameters of the manual exposure parameters comprises: cyclically repeating the shutter sequence to obtain a shutter time expansion sequence serving as the shutter time in the manual exposure parameters.
3. The method of claim 2, wherein the first portion of the auto-exposure parameters further comprises a gain magnitude, and wherein obtaining manual exposure parameters from the auto-exposure parameters further comprises:
obtaining the gain in the automatic exposure parameters;
generating a group of gain size sequences according to the gain sizes;
and circularly repeating the gain size sequence to obtain a gain size expansion sequence as the gain size in the manual exposure parameters.
4. The method of any of claims 2-3, wherein the second partial parameter of the auto-exposure parameters comprises: aperture size.
5. The method of claim 4, wherein the first video stream is used for video display and the second video stream is used for video analysis.
6. The method of claim 5, further comprising:
and carrying out fusion processing on the video frames forming the first video stream and the video frames forming the second video stream to obtain a third video stream.
7. A video capture device, comprising: a camera module, a first image signal processing module and a second image signal processing module;
the first image signal processing module is used for determining the automatic exposure parameters of the camera module according to the environmental data;
the second image signal processing module is configured to obtain manual exposure parameters according to the automatic exposure parameters, specifically including: generating a group of sequences according to the shutter time in a first part of parameters of the automatic exposure parameters and expanding it periodically to obtain a parameter expansion sequence, and generating, according to the parameter expansion sequence, a group of shutter sequences that rise and fall: [K1*ST_AE, K2*ST_AE, K3*ST_AE, …, KN*ST_AE], wherein K1, K2, …, KN are values manually set by a user according to the situation and ST_AE is the shutter time in the automatic exposure parameters; using the shutter sequence as the corresponding parameter in the first part of parameters of the manual exposure parameters, wherein the shutter time in the manual exposure parameters changes periodically and includes rising and falling changes within one period, and keeping a second part of parameters of the automatic exposure parameters unchanged as the second part of parameters of the manual exposure parameters;
the camera module comprises a lens and an image sensor, and is used for acquiring video frames to form a first video stream based on the automatic exposure parameters and acquiring video frames to form a second video stream based on the manual exposure parameters;
and the video frames forming the first video stream and the video frames forming the second video stream are acquired by the camera module alternately.
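An illustrative object layout for the capture device of claim 7, showing the split between the two image signal processing modules. Class and method names, and the toy brightness-to-shutter rule, are assumptions made for the sketch, not the device's actual implementation:

```python
class FirstISP:
    # Derives auto-exposure parameters from environment data
    # (toy rule: brighter scenes get a shorter shutter).
    def auto_exposure(self, env):
        shutter = max(1 / 8000, min(1 / 30, 10.0 / max(env["luminance"], 1.0)))
        return {"shutter": shutter, "gain": 2.0, "aperture": 2.8}

class SecondISP:
    # Derives manual exposure parameters from the auto-exposure parameters:
    # the shutter (first-part parameter) varies periodically, while the
    # aperture (second-part parameter) is copied unchanged.
    def __init__(self, K):
        self.K = K  # user-set scaling factors K1..KN

    def manual_exposure(self, auto_params):
        return {"shutters": [k * auto_params["shutter"] for k in self.K],
                "gain": auto_params["gain"],
                "aperture": auto_params["aperture"]}

class VideoCaptureDevice:
    def __init__(self):
        self.isp1 = FirstISP()
        self.isp2 = SecondISP(K=[0.5, 1.0, 2.0, 1.0])

    def configure(self, env):
        auto = self.isp1.auto_exposure(env)
        return auto, self.isp2.manual_exposure(auto)

auto_params, manual_params = VideoCaptureDevice().configure({"luminance": 500.0})
```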
8. The apparatus of claim 7, wherein the first part of the automatic exposure parameters comprises a shutter time, and the second image signal processing module is configured to:
acquire the shutter time in the automatic exposure parameters;
generate a group of shutter sequences according to the shutter time;
and cyclically repeat the shutter sequence to obtain a shutter time expansion sequence, which serves as the shutter time in the manual exposure parameters.
9. The apparatus of claim 8, wherein the first part of the automatic exposure parameters further comprises a gain, and the second image signal processing module is further configured to:
obtain the gain in the automatic exposure parameters;
generate a group of gain sequences according to the gain;
and cyclically repeat the gain sequence to obtain a gain expansion sequence, which serves as the gain in the manual exposure parameters.
10. The apparatus of any one of claims 7-9, wherein the second part of the automatic exposure parameters comprises: an aperture size.
11. The apparatus of claim 10, wherein the first video stream is used for video display and the second video stream is used for video analysis.
12. The apparatus according to claim 11, further comprising a fusion module configured to perform fusion processing on the video frames constituting the first video stream and the video frames constituting the second video stream to obtain a third video stream.
13. A video surveillance system, comprising:
the video capture device of any of claims 7-12;
a video display module for displaying the first video stream;
and a video analysis module for analyzing the second video stream and/or a third video stream, wherein the third video stream is obtained by fusing the video frames forming the first video stream with the video frames forming the second video stream.
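A brief sketch of the surveillance-system wiring in claim 13: the first (auto-exposed) stream goes to display and the second (or fused third) stream goes to analysis. Module names and the dummy frames are assumptions; the actual display and analysis behavior is not specified by the claims:

```python
import numpy as np

class VideoDisplayModule:
    def show(self, first_stream):
        # Display-side stand-in: report how many auto-exposed frames would be shown.
        return f"displaying {len(first_stream)} frames"

class VideoAnalysisModule:
    def analyze(self, stream):
        # Analysis-side stand-in: return the mean brightness of each frame.
        return [float(frame.mean()) for frame in stream]

first_stream = [np.full((4, 4), 180, np.uint8)]   # auto-exposed frames -> display
second_stream = [np.full((4, 4), 60, np.uint8)]   # manually exposed frames -> analysis
print(VideoDisplayModule().show(first_stream))
print(VideoAnalysisModule().analyze(second_stream))
```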
CN202010140270.8A 2020-03-03 2020-03-03 Video acquisition method and device based on exposure control Active CN113347368B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010140270.8A CN113347368B (en) 2020-03-03 2020-03-03 Video acquisition method and device based on exposure control
PCT/CN2021/078677 WO2021175211A1 (en) 2020-03-03 2021-03-02 Exposure control-based video capture method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010140270.8A CN113347368B (en) 2020-03-03 2020-03-03 Video acquisition method and device based on exposure control

Publications (2)

Publication Number Publication Date
CN113347368A CN113347368A (en) 2021-09-03
CN113347368B (en) 2023-04-18

Family

ID=77467381

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010140270.8A Active CN113347368B (en) 2020-03-03 2020-03-03 Video acquisition method and device based on exposure control

Country Status (2)

Country Link
CN (1) CN113347368B (en)
WO (1) WO2021175211A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117176922B (en) * 2023-10-31 2023-12-29 安徽三禾一信息科技有限公司 Intelligent thermal power plant monitoring method and system based on Internet of things

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101137013A (en) * 2006-09-01 2008-03-05 索尼株式会社 Shooting device, method and program
CN101893804A (en) * 2010-05-13 2010-11-24 杭州海康威视软件有限公司 Exposure control method and device
CN104243834A (en) * 2013-06-08 2014-12-24 杭州海康威视数字技术股份有限公司 Image streaming dividing control method and device of high-definition camera
CN106254782A (en) * 2016-09-28 2016-12-21 北京旷视科技有限公司 Image processing method and device and camera
CN107896316A (en) * 2017-11-29 2018-04-10 合肥寰景信息技术有限公司 Digital video intelligent monitoring system based on dual camera
CN110619593A (en) * 2019-07-30 2019-12-27 西安电子科技大学 Double-exposure video imaging system based on dynamic scene

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9729782B2 (en) * 2015-06-05 2017-08-08 Digital Signal Corporation System and method for intelligent camera control
CN110636227B (en) * 2019-09-24 2021-09-10 合肥富煌君达高科信息技术有限公司 High dynamic range HDR image synthesis method and high-speed camera integrating same

Also Published As

Publication number Publication date
CN113347368A (en) 2021-09-03
WO2021175211A1 (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN107948519B (en) Image processing method, device and equipment
CN109040609B (en) Exposure control method, exposure control device, electronic equipment and computer-readable storage medium
CN108055452B (en) Image processing method, device and equipment
CN108712608B (en) Terminal equipment shooting method and device
CN108024054B (en) Image processing method, device, equipment and storage medium
US8106965B2 (en) Image capturing device which corrects a target luminance, based on which an exposure condition is determined
US8937677B2 (en) Digital photographing apparatus, method of controlling the same, and computer-readable medium
CN107948538B (en) Imaging method, imaging device, mobile terminal and storage medium
US9596400B2 (en) Image pickup apparatus that periodically changes exposure condition, a method of controlling image pickup apparatus, and storage medium
CN108111749B (en) Image processing method and device
US20100328498A1 (en) Shooting parameter adjustment method for face detection and image capturing device for face detection
CN107846556B (en) Imaging method, imaging device, mobile terminal and storage medium
JP4600684B2 (en) Imaging apparatus and imaging method
CN102892008A (en) Dual image capture processing
CN108616689B (en) Portrait-based high dynamic range image acquisition method, device and equipment
CN108156369B (en) Image processing method and device
CN110072052A (en) Image processing method, device, electronic equipment based on multiple image
JP5728498B2 (en) Imaging apparatus and light emission amount control method thereof
JP2014033276A (en) Image processing device, image processing method and program
JP5898509B2 (en) Imaging apparatus, control method therefor, program, and storage medium
JP6467190B2 (en) EXPOSURE CONTROL DEVICE AND ITS CONTROL METHOD, IMAGING DEVICE, PROGRAM, AND STORAGE MEDIUM
CN113347368B (en) Video acquisition method and device based on exposure control
CN110266967B (en) Image processing method, image processing device, storage medium and electronic equipment
JP2010011153A (en) Imaging apparatus, imaging method and program
JP4553570B2 (en) Auto focus camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant