CN117201930B - Photographing method and electronic equipment - Google Patents

Photographing method and electronic equipment

Info

Publication number
CN117201930B
Authority
CN
China
Prior art keywords
image
electronic equipment
preview
electronic device
exposure time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311481102.5A
Other languages
Chinese (zh)
Other versions
CN117201930A (en)
Inventor
董小京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202311481102.5A
Publication of CN117201930A
Application granted
Publication of CN117201930B
Legal status: Active
Anticipated expiration


Abstract

The application discloses a photographing method and an electronic device, and relates to the field of image processing. The electronic device comprises a camera and a display screen. The photographing method comprises the following steps: the electronic device displays a preview interface on the display screen, where the preview interface includes a preview image of the shooting picture captured by the camera. The electronic device identifies, based on the preview image, whether a local motion region exists in the shooting picture. When the electronic device receives a shooting operation from the user, if a local motion region exists in the shooting picture, it captures a first image of the picture with a short exposure duration and a second image with a normal exposure duration. The local motion region in the first image is sharper, while the overall image quality of the second image is higher; a target image is obtained based on the local motion region of the first image and on the second image, so that the target image achieves both a sharp local motion region and high overall image quality.

Description

Photographing method and electronic equipment
Technical Field
Embodiments of the application relate to the field of image processing, and in particular to a photographing method and an electronic device.
Background
As users come to rely more heavily on the photographing function of electronic devices, their demand for high image quality and high resolution grows. Lenses in electronic devices have evolved from millions of pixels to tens of millions, from fixed-focus to zoom, and from manual to automatic focusing, meeting users' shooting demands for higher resolution, higher image quality and automatic zooming.
Current electronic devices can capture globally static or globally dynamic elements based on high pixel counts and capabilities such as automatic zooming. However, for a scene whose shooting picture contains both static and dynamic elements, the image captured by an existing electronic device is of low quality and suffers from local blurring.
Disclosure of Invention
Embodiments of the application provide a photographing method and an electronic device, so that the captured target image achieves both a sharp local motion region and high overall image quality.
In order to achieve the above object, the following technical solution is adopted in the embodiments of the present application.
In a first aspect, a photographing method is provided, applied to an electronic device that includes a camera and a display screen. The method includes:
The electronic device displays a preview interface on the display screen, where the preview interface includes a preview image of the shooting picture captured by the camera.
The electronic device determines, based on the preview image, whether a local motion region exists in the shooting picture.
When the electronic device receives a shooting operation from the user, if a local motion region exists in the shooting picture, the electronic device captures a first image of the shooting picture with a first exposure duration and captures a second image of the shooting picture with a second exposure duration.
The electronic device obtains a target image of the shooting picture based on the second image and a first local image corresponding to the local motion region in the first image.
The first exposure duration is shorter than the second exposure duration. Illustratively, the first image is captured with a short exposure duration and the second image with a normal exposure duration. The second exposure duration may be a default normal exposure duration preset by the electronic device; a normal exposure duration is typically used for shooting still pictures.
In this application, once the electronic device determines that a local motion region exists in the shooting picture, it can capture the first image with a short exposure duration and the second image with a normal exposure duration. The local motion region in the first image is sharper, while the overall image quality of the second image is higher; the target image, obtained based on the local motion region of the first image and on the second image, therefore achieves both a sharp local motion region and high overall image quality.
In a possible implementation manner of the first aspect, the determining, by the electronic device based on the preview image, whether a local motion region exists in the shooting picture includes:
The electronic device acquires a first preview image and a second preview image of the shooting picture. The first preview image and the second preview image are adjacent image frames.
The electronic device obtains a difference image based on the gray value of each first pixel in the first preview image and the gray value of the corresponding second pixel in the second preview image. If the number of pixels whose gray value in the difference image exceeds a preset threshold falls within a preset numerical range, the electronic device determines that a local motion region exists in the shooting picture.
In this application, the processor can obtain the difference image of adjacent preview frames of the shooting picture and distinguish moving pixels from background pixels via the binarized difference image, and can thus effectively determine whether a local motion region exists in the shooting picture. When a local motion region is determined to exist, multi-frame variable-exposure shooting can be performed to obtain the first image and the second image.
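For illustration, this frame-differencing check can be sketched as follows. It is a minimal sketch assuming OpenCV/NumPy and 8-bit BGR preview frames; the threshold names and values (DIFF_THRESH, MIN_PIXELS, MAX_PIXELS) are assumptions standing in for the "preset threshold" and "preset numerical range", not values fixed by the application.

```python
import cv2
import numpy as np

DIFF_THRESH = 25        # per-pixel gray-difference threshold (assumed value)
MIN_PIXELS = 500        # lower bound of the "preset numerical range" (assumed)
MAX_PIXELS = 200_000    # upper bound of the "preset numerical range" (assumed)

def has_local_motion(prev_frame: np.ndarray, curr_frame: np.ndarray) -> bool:
    """Check two adjacent BGR preview frames for a local motion region."""
    g1 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g2)                          # difference image
    _, binary = cv2.threshold(diff, DIFF_THRESH, 255, cv2.THRESH_BINARY)
    moving = int(np.count_nonzero(binary))              # moving-pixel count
    # Too few changed pixels suggests a static scene; too many suggests
    # global motion (e.g., a moving camera) rather than local motion.
    return MIN_PIXELS <= moving <= MAX_PIXELS
```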
In another possible implementation manner of the first aspect, the method further includes:
The electronic device performs connected-domain analysis on the region formed by the pixels whose gray difference exceeds the preset threshold, obtains at least one connected region formed by adjacent pixels with the same binarized value, and obtains at least one local motion region corresponding to the connected region.
In this application, the processor can obtain the difference image of adjacent preview frames, distinguish moving pixels from background pixels via the binarized difference image, and perform connected-domain analysis on the moving pixels, thereby effectively determining whether a local motion region exists in the shooting picture.
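A sketch of the connected-domain step, under the same OpenCV assumption; `binary` is the binarized difference image from the previous sketch, and the minimum-area filter is an added assumption to suppress isolated noise pixels.

```python
import cv2
import numpy as np

def motion_regions(binary: np.ndarray, min_area: int = 200):
    """Return bounding boxes (x, y, w, h) of connected moving regions
    in a binarized difference image."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    boxes = []
    for i in range(1, n):                  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:               # drop isolated noise pixels (assumed filter)
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes
```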
In another possible implementation manner of the first aspect, the method further includes:
the electronic device obtains a target area of the local motion region.
The target area is larger than the local motion area.
Then, the acquiring, by the electronic device, a first image of the shooting picture with a first exposure duration and a second image of the shooting picture with a second exposure duration includes:
If the target area is smaller than a preset area threshold, the electronic device captures the first image with the first exposure duration and the second image with the second exposure duration.
In this method, after obtaining the motion region, the electronic device can further judge, based on the target area of the motion region, what proportion of the whole image the motion region occupies. The shooting strategy of this embodiment, capturing the first image with the first exposure duration and the second image with the second exposure duration, is thus executed only when a local motion region rather than a large-area motion region is determined to exist, so that a sharp, high-quality image is obtained. For a scene with a large-area motion region in the image, the electronic device directly captures the target image with a short exposure duration so as to retain more motion detail.
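The strategy decision can be expressed as below. The 50% ratio threshold is purely an assumed stand-in for the application's "preset area threshold", and `boxes` are taken to be the enlarged target boxes, each larger than the motion region it encloses.

```python
AREA_RATIO_THRESH = 0.5   # stand-in for the "preset area threshold" (assumed)

def choose_strategy(boxes, frame_shape) -> str:
    """Pick a shooting strategy from the total target area of the motion boxes."""
    frame_area = frame_shape[0] * frame_shape[1]
    target_area = sum(w * h for _, _, w, h in boxes)
    if target_area / frame_area < AREA_RATIO_THRESH:
        return "multi_frame_variable_exposure"   # local motion: short + normal frame
    return "short_exposure_only"                 # large-area motion: keep motion detail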
In another possible implementation manner of the first aspect, the method further includes:
The electronic device acquires a first preview local image corresponding to the local motion region in the first preview image and a second preview local image corresponding to the local motion region in the second preview image.
The electronic device determines a motion estimation value of the local motion region according to the coordinate difference between a third pixel in the first preview local image and the corresponding fourth pixel in the second preview local image, and determines the first exposure duration according to the motion estimation value.
The larger the motion estimation value, the shorter the first exposure duration. The motion estimation value indicates, to some extent, the amplitude and speed of the object's movement in the shooting picture; the greater the amplitude and speed of that movement, the shorter the exposure duration should be, so that the motion details of the moving object can be captured more accurately.
In this application, the motion estimation value is determined from the coordinate difference of the motion region between adjacent image frames, so the first exposure duration corresponding to the short exposure can be determined from the motion estimation value, and the image of the local motion region in the first image obtained with that duration is sharper.
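One concrete way to obtain such a "coordinate difference" motion estimate is to track feature points between the two preview crops and average their displacements. The application does not fix a tracker; sparse Lucas-Kanade optical flow is an assumed choice here, shown only as a sketch.

```python
import cv2
import numpy as np

def motion_estimate(prev_crop: np.ndarray, curr_crop: np.ndarray) -> float:
    """Mean displacement (in pixels) of tracked points between the two
    preview crops of the motion region."""
    g1 = cv2.cvtColor(prev_crop, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(curr_crop, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(g1, maxCorners=100, qualityLevel=0.01,
                                  minDistance=5)
    if pts is None:
        return 0.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(g1, g2, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return 0.0
    disp = np.linalg.norm((nxt - pts).reshape(-1, 2)[good], axis=1)
    return float(disp.mean())
```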
In another possible implementation manner of the first aspect, the first exposure duration is at least 1/64 of the second exposure duration.
In the present application, in order to secure the image quality of the short frame, in some embodiments the first exposure duration is at shortest 1/64 of the second exposure duration; that is, where the first exposure duration is the second exposure duration divided by N, the maximum value of N is 64.
In another possible implementation manner of the first aspect, the determining, by the electronic device, the first exposure duration according to the motion estimation value includes:
The electronic device determines a first amplitude value and a second amplitude value of the local motion region based on the diagonal length of the field of view of the shooting picture, the second amplitude value being smaller than the first amplitude value.
The electronic device takes the product of the motion estimation value and a preset exposure interval as a first parameter.
If the first parameter is greater than or equal to the first amplitude value, the electronic device determines that the first exposure duration is a first duration. If the first parameter is less than or equal to the second amplitude value, the electronic device determines that the first exposure duration is a second duration. If the first parameter is greater than the second amplitude value and less than the first amplitude value, the electronic device determines that the first exposure duration is a third duration.
The third duration is longer than the first duration and shorter than the second duration; the first duration, the second duration and the third duration are determined as different multiples of the second exposure duration.
In this application, the electronic device selects the first exposure duration according to the motion estimation value, so that the first image captured with the short exposure better retains the motion details of the motion region.
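Putting the mapping together as a sketch: the two amplitude values are written as assumed fractions of the field-of-view diagonal, and the first, third and second durations are assumed to be 1/64, 1/16 and 1/8 of the normal exposure; the application fixes only their ordering and the 1/64 lower bound.

```python
def pick_short_exposure(motion_est: float, exposure_interval: float,
                        diag_len_px: float, normal_exposure_s: float) -> float:
    """Map the motion estimate to the first (short) exposure duration."""
    amp_high = 0.05 * diag_len_px   # first amplitude value (assumed 5% of diagonal)
    amp_low = 0.01 * diag_len_px    # second amplitude value (assumed 1% of diagonal)
    p = motion_est * exposure_interval          # the "first parameter"
    if p >= amp_high:
        return normal_exposure_s / 64           # first duration, at the 1/64 floor
    if p <= amp_low:
        return normal_exposure_s / 8            # second duration (assumed multiple)
    return normal_exposure_s / 16               # third duration, between the two
```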
In another possible implementation manner of the first aspect, the acquiring, by the electronic device, a target image of the shooting picture based on the second image and a first local image corresponding to the local motion region of the first image includes:
The electronic device obtains a first gray image of the first image and a second gray image of the second image.
The electronic device performs brightness alignment on the first gray image and the second gray image to obtain a brightness-aligned first gray image.
The electronic device performs image registration on the brightness-aligned first gray image and the second gray image to obtain a homography matrix; the homography matrix represents the correspondence between each pixel of the brightness-aligned first gray image and each pixel of the second gray image.
The electronic device performs affine transformation on the first image based on the homography matrix to obtain the first local image of the first image, and acquires the target image based on the first local image and the second image.
In this application, for the local motion region, the brightness of the short-exposure first image is aligned to the normal-exposure second image, the first local image is cropped according to the coordinates of the local motion region, and the first local image replaces the second local image in the second image. This guarantees the sharpness and dynamic range of the whole image, while the short exposure avoids motion blur. The photographing method of this embodiment can effectively reduce the low yield of usable images caused by local motion and preserve the overall image quality of the algorithm, solving the problems of motion blur or degraded image quality caused by local motion in the picture during photographing and improving the user's shooting experience.
In another possible implementation manner of the first aspect, the electronic device performs brightness alignment on the first gray-scale image and the second gray-scale image to obtain a first gray-scale image with aligned brightness, including:
The electronic device obtains the ratio of the first exposure duration to the second exposure duration, multiplies the gray value of each pixel in the first gray image by the ratio to obtain the target gray value of each pixel, and obtains the brightness-aligned first gray image based on the target gray values.
In this application, the electronic device performs brightness alignment of the first gray image based on the ratio, so that after alignment the brightness values of the first gray image and the second gray image lie in the same range, allowing a more accurate image registration operation.
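A sketch of this brightness alignment. Note that the scale factor is written here as the normal-to-short exposure ratio, since that is the direction that raises the short frame's brightness into the normal frame's range; which way the ratio is taken is an implementation assumption, not fixed by the text above.

```python
import numpy as np

def align_brightness(short_gray: np.ndarray, t_short: float,
                     t_normal: float) -> np.ndarray:
    """Scale the short-exposure gray image into the brightness range of the
    normal-exposure gray image."""
    scale = t_normal / t_short           # direction of the ratio is assumed
    scaled = short_gray.astype(np.float32) * scale
    return np.clip(scaled, 0, 255).astype(np.uint8)
```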
In another possible implementation manner of the first aspect, the performing, by the electronic device, image registration on the brightness-aligned first gray image and the second gray image to obtain a homography matrix includes:
The electronic device performs histogram equalization on the brightness-aligned first gray image and on the second gray image, obtaining a first equalized image corresponding to the brightness-aligned first gray image and a second equalized image corresponding to the second gray image. The electronic device then performs image registration on the first equalized image and the second equalized image to obtain the homography matrix.
In this application, histogram equalization stretches the gray values of the two gray images back across the full gray range, so that image registration can be carried out more accurately and the resulting homography matrix is more accurate.
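A sketch of equalization followed by registration. ORB feature matching with RANSAC homography estimation is an assumed concrete realization; the application itself only names histogram equalization and image registration.

```python
import cv2
import numpy as np

def register(gray_a: np.ndarray, gray_b: np.ndarray) -> np.ndarray:
    """Estimate the homography mapping gray_a onto gray_b."""
    eq_a, eq_b = cv2.equalizeHist(gray_a), cv2.equalizeHist(gray_b)
    orb = cv2.ORB_create(2000)
    ka, da = orb.detectAndCompute(eq_a, None)
    kb, db = orb.detectAndCompute(eq_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(da, db), key=lambda m: m.distance)[:200]
    src = np.float32([ka[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # RANSAC rejects outliers
    return H
```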
In another possible implementation manner of the first aspect, the electronic device performs affine transformation on the first image based on the homography matrix, and obtains a first local image of the first image, including:
The electronic device performs brightness alignment on the first image and the second image to obtain a brightness-aligned first image.
The electronic device performs affine transformation on the brightness-aligned first image based on the homography matrix to obtain the first local image.
The affine transformation comprises at least one of rotation, translation, scaling and flipping.
In this application, brightness alignment of the first image and the second image puts their brightness values in the same range, making the affine transformation operation more accurate.
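The warp itself, again assuming OpenCV: the homography maps the brightness-aligned first image into the second image's coordinates. A perspective warp is used in this sketch as the general case covering the affine operations named above.

```python
import cv2
import numpy as np

def warp_to_reference(short_img: np.ndarray, H: np.ndarray,
                      ref_shape: tuple) -> np.ndarray:
    """Warp the brightness-aligned short-exposure image into the coordinate
    frame of the normal-exposure image."""
    h, w = ref_shape[:2]
    return cv2.warpPerspective(short_img, H, (w, h))
```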
In another possible implementation manner of the first aspect, the electronic device obtains the target image based on the first partial image and the second image, including:
The electronic device replaces the second local image of the second image with the first local image to obtain the target image.
In this application, the processor crops the first local image from the first image and substitutes it for the second local image in the second image to obtain the target image. The target image both keeps the overall image quality at the level achieved without reducing exposure and, by fusing in the short-exposure local motion region, solves the image blurring of that region.
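Finally, the replacement step as a sketch. A hard paste of the motion box is shown; a real pipeline would likely also blend or feather the seam, which the application does not detail.

```python
import numpy as np

def fuse(normal_img: np.ndarray, warped_short: np.ndarray,
         box: tuple) -> np.ndarray:
    """Paste the motion box from the warped short-exposure image over the
    same box in the normal-exposure image."""
    x, y, w, h = box
    target = normal_img.copy()
    target[y:y + h, x:x + w] = warped_short[y:y + h, x:x + w]
    return target
```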
In another possible implementation manner of the first aspect, the photographing mode of the electronic device includes a multi-frame variable exposure photographing mode, and the method further includes:
If a local motion region exists in the shooting picture, the electronic device displays a guide control on the preview interface.
The guide control is used to indicate whether to enable the multi-frame variable-exposure shooting mode.
Then, the acquiring, by the electronic device, a first image of the shooting picture with a first exposure duration and a second image of the shooting picture with a second exposure duration includes:
In response to the user's touch operation on the target control, the electronic device captures the first image with the first exposure duration and the second image with the second exposure duration.
In this application, displaying the guide information increases interaction with the user, and the multi-frame variable-exposure shooting mode is enabled for shooting only upon the user's confirming touch operation, so that the captured target image better meets the user's needs and the user's shooting experience is improved.
In a second aspect, an electronic device is provided that includes a memory, a camera, a display screen, and one or more processors; the camera, the memory and the display screen are coupled with the processor; the memory has stored therein computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any of the first aspects described above.
In a third aspect, there is provided a computer readable storage medium having instructions stored therein which, when run on an electronic device, cause the electronic device to perform the method of any of the first aspects described above.
In a fourth aspect, there is provided a computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the method of any of the first aspects above.
In a fifth aspect, an embodiment of the application provides a chip comprising a processor for invoking a computer program in memory to perform a method as in the first aspect.
It will be appreciated that, for the advantageous effects achieved by the electronic device of the second aspect, the computer-readable storage medium of the third aspect, the computer program product of the fourth aspect and the chip of the fifth aspect provided above, reference may be made to the advantageous effects in any possible design of the first aspect; details are not repeated here.
Drawings
Fig. 1 is a schematic diagram of a shooting picture including static elements and dynamic elements according to an embodiment of the present application;
Fig. 2 is a schematic illustration of a captured image according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 4 is a software structure block diagram of an electronic device according to an embodiment of the present application;
Fig. 5 is a schematic flowchart of a photographing method according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a preview interface of a camera application of a mobile phone according to an embodiment of the present application;
Fig. 7 is an interaction schematic diagram of each module of an electronic device in a photographing method according to an embodiment of the present application;
Fig. 8 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 9 is a schematic diagram of connected domains of a binarized image according to an embodiment of the present application;
Fig. 10 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 11 is a schematic diagram of acquiring a target area of a motion region according to an embodiment of the present application;
Fig. 12 is a schematic diagram of calculating a motion estimation value of a motion region according to an embodiment of the present application;
Fig. 13 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 14 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 15 is a flowchart of another photographing method according to an embodiment of the present application;
Fig. 16 is a schematic diagram of a preview interface of a mobile phone camera application including a guide control according to an embodiment of the present application;
Fig. 17 is a schematic diagram of a preview interface of a mobile phone camera application including a prompt control according to an embodiment of the present application;
Fig. 18 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
In the description of embodiments of the present application, the terminology used in the embodiments below is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include expressions such as "one or more," unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of the present application, "at least one" and "one or more" mean one, two, or more than two. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise. The term "coupled" includes both direct and indirect connections, unless stated otherwise. The terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described as "exemplary" or "such as" in an embodiment should not be construed as preferred or more advantageous than other embodiments or designs. Rather, use of these words is intended to present related concepts in a concrete fashion.
Lenses in electronic devices have evolved from millions of pixels to tens of millions, from fixed-focus to zoom, and from manual to automatic focusing, meeting users' shooting demands for higher resolution, higher image quality and automatic zooming. Current electronic devices can capture globally static or globally dynamic elements based on high pixel counts and capabilities such as automatic zooming.
When shooting a globally static element or a globally dynamic element, the electronic device can use different exposure durations. The exposure duration is the time interval from shutter opening to shutter closing; it can also be regarded as the light-gathering time. To a certain extent, the longer the exposure duration (the longer light is gathered), the larger the amount of incoming light, the brighter the captured image, and the higher the image quality of the final image after image-quality and noise-reduction processing.
There is no particular requirement on the exposure duration when shooting globally static elements. It is generally a default exposure duration, which is relatively long and yields an image of higher quality for static content. The exposure duration can also be determined by the lighting of the shooting scene: in a cloudy scene, the default exposure duration can be extended appropriately to increase the incoming light and improve image quality; in a brightly sunlit scene, it can be shortened appropriately to reduce the incoming light and preserve image quality.
When shooting a globally dynamic element, the exposure duration needs to be set shorter; a short exposure can freeze the motion of the dynamic element and retain its details. Existing electronic devices shoot by default with an exposure duration suited to globally static elements. If a user needs to shoot a dynamic element, the user may, for example, enable a motion mode of the camera, in which a short motion exposure produces a sharper image.
In an actual scene, however, the shooting picture often includes both static and dynamic elements. For example, the picture may include a person standing in front of a building and waving a hand: the person's torso and the building can be regarded as static elements, while the waving hand is a dynamic element. Referring to fig. 1 (a), the shooting picture includes a person standing behind a table and in front of a window, waving a hand; the person's body, the window behind and the table in front are all static elements, and the waving hand is a dynamic element. Referring to fig. 1 (b), the shooting picture includes a cat moving on a lawn; the lawn is a static element and the moving cat is a dynamic element.
If such a scene is shot without short motion exposure, i.e. with the default exposure duration suited to globally static elements, the relatively long default exposure fails to capture the motion details of the dynamic elements, and the dynamic-element portion of the final image is blurred. Referring to fig. 2, which gives a schematic illustration of a captured image: a camera in an existing electronic device shoots the picture of fig. 1 (a) with the default exposure duration, and the local motion region (the waving hand) in the resulting image is blurred.
If, instead, short motion exposure is applied whenever dynamic elements exist in the shooting picture, automatic exposure (AE) is lowered, and the overall quality of the final image is reduced by factors such as the small amount of light gathered during the short exposure duration. In high dynamic range imaging (HDR) scenes, the image may even lose dynamic range.
The embodiment of the application provides a photographing method that solves the prior-art problem that a sharp, high-quality image cannot be obtained when the shooting picture includes both static and dynamic elements. In the embodiment, after the electronic device determines that a local motion region exists in the shooting picture, it can capture a first image and a second image of the picture with different exposure durations upon receiving the user's shooting operation: for example, the first image with a short exposure duration and the second image with a normal exposure duration. A target image of the shooting picture is then obtained based on the second image and a first local image corresponding to the local motion region of the first image. The short exposure keeps the image of the local motion region sharp, the normal exposure keeps the overall image quality high, and the target image obtained from the first local image and the second image therefore achieves both a sharp local motion region and high overall image quality, improving the user's shooting experience.
The photographing method provided by the embodiment of the application can be applied to an electronic device that includes a camera and a display screen. By way of example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a wearable electronic device (e.g., a smart watch), an augmented reality (AR)/virtual reality (VR) device, etc.; the following embodiments do not particularly limit the specific form of the electronic device.
Fig. 3 shows a schematic structural diagram of the electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a sensor module 180, a camera 193, a display 194, and the like. Wherein the sensor module 180 may include a pressure sensor 180A, a distance sensor 180B, an ambient light sensor 180C, a touch sensor 180D, etc.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the electronic device 100. The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, which avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180D, the charger, the flash, the camera 193, etc., through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180D through an I2C interface, so that the processor 110 communicates with the touch sensor 180D through the I2C bus interface to implement the touch function of the electronic device 100.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as the display 194 and the camera 193. MIPI interfaces include the camera serial interface (CSI), the display serial interface (DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through a CSI interface to implement the photographing functions of the electronic device 100, and the processor 110 and the display 194 communicate via a DSI interface to implement the display functions of the electronic device 100.
The GPIO interface may be configured by software and may be configured to carry control signals or data signals. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
It should be understood that the interfacing relationships between the modules illustrated in the embodiments of the present application are only illustrative and do not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ an interfacing manner different from that in the above embodiment, or a combination of multiple interfacing manners.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
In some embodiments, the electronic device includes an application for taking photographs, such as a camera application built into the system or a third-party photographing application. The electronic device displays a preview interface on the display screen 194 in response to a user operation that opens such an application.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened and light is transmitted through the lens to the camera's photosensitive element, which converts the optical signal into an electrical signal and passes it to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also run algorithmic optimization on the noise, brightness and other parameters of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
In some embodiments, the electronic device can control the exposure duration by adjusting the shutter speed of the photosensitive element: the faster the shutter speed, the shorter the exposure duration. Different exposure durations are obtained by adjusting the shutter speed, so that a first image of the shooting picture can be captured with a short exposure duration and a second image with a normal exposure duration.
In some embodiments, the electronic device displays, through the display 194, a preview image corresponding to the shooting picture of the camera 193. If the processor (CPU/GPU) determines that a local motion region exists in the shooting picture, it sends a photographing policy to the ISP. Upon receiving the user's photographing operation, the ISP controls the camera 193, based on the photographing policy, to capture a first image of the shooting picture with a short exposure duration (the first exposure duration) and a second image with a normal exposure duration (the second exposure duration). The ISP passes the captured first and second images to the processor, which runs an image-processing algorithm preset in the algorithm library on the second image and the first local image corresponding to the local motion region of the first image, obtaining the target image of the shooting picture. In some embodiments, after the processor obtains the target image, the target image may also be displayed on the display screen 194 for a short period of time; for example, the processor sends the target image to the display screen for a one-second preview, after which the display screen resumes showing the preview interface. In some embodiments, the processor may also store the target image in an album of the electronic device.
The digital signal processor is used to process digital signals; besides digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to Fourier-transform the frequency bin energy, and so on.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that it can play or record video in multiple encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3 and MPEG-4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between human brain neurons, it processes input information rapidly and can also learn continuously. Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, for example image recognition, face recognition, speech recognition and text understanding.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system and application programs required by at least one function (such as a sound playing function or an image playing function), etc. The data storage area may store data created during use of the electronic device 100 (such as audio data and a phonebook), and so on. In addition, the internal memory 121 may include high-speed random access memory and may further include nonvolatile memory, such as at least one magnetic disk storage device, flash memory device, or universal flash storage (UFS).
The audio module 170 includes a speaker, a receiver, a microphone, and an earphone interface. The electronic device 100 may implement audio functions through an audio module 170, a speaker, a receiver, a microphone, a headphone interface, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The pressure sensor 180A is used to sense a pressure signal and can convert it into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors, such as resistive, inductive and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates made of conductive material; when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the touch operation via the pressure sensor 180A and can also calculate the location of the touch from its detection signal.
For example, when a touch operation acts on the camera application icon, an instruction to start the camera application is executed, and the electronic device 100 displays a preview interface on the display 194 after the camera application starts. In this embodiment, the preview interface includes a preview image of the shooting picture. A processor (CPU/GPU) can detect whether a local motion region exists in the current shooting picture; if it determines that one does, it sends a photographing policy to the ISP. When a touch operation acts on the photographing button, the camera application sends a control instruction to the ISP, which, in response, controls the camera 193 to capture a first image of the shooting picture with a short exposure duration and a second image with a normal exposure duration, based on the photographing policy. After capturing the first and second images, the ISP sends them to the processor (CPU/GPU), which performs image processing on the second image and the first local image corresponding to the local motion region in the first image, obtaining the target image of the shooting picture for the current shooting operation.
A distance sensor 180B for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, the electronic device 100 may range using the distance sensor 180B to achieve fast focus.
The ambient light sensor 180C is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180C may also be used to automatically adjust white balance when taking a photograph.
In some embodiments, the electronic device may determine the first exposure duration and/or the second exposure duration through ambient light data collected by ambient light sensor 180C to further ensure the quality of the captured image.
The touch sensor 180D is also referred to as a "touch panel". The touch sensor 180D may be disposed on the display 194; together they form a touch screen, also called a "touchscreen". The touch sensor 180D is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the touch event type, and visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180D may also be disposed on the surface of the electronic device 100 at a location different from that of the display 194.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the invention, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 4 is a software structure block diagram of the electronic device 100 according to the embodiment of the present invention. The layered architecture divides the software into several layers, each with a clear role and division of labor, and the layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into an application layer, an application framework layer, a hardware abstraction layer (HAL) and a hardware layer.
The application layer may include a series of application packages. As shown in fig. 4, the application packages may include applications that can be used for photographing, such as the camera or third-party applications, and a gallery application for storing captured images.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for the application of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 4, the application framework layer may include a camera module that provides an API for the camera and third-party applications.
In some embodiments, the application framework layer may also include modules for implementing other functions. Such as a window manager, content provider, view system, resource manager, etc.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The HAL layer includes a camera HAL and an algorithm library. The camera HAL provides an interface, which the camera module can invoke, for unified management of the camera and other underlying devices.
The algorithm library provides the CPU or GPU at the hardware layer with the algorithm logic, used in the photographing method of this embodiment, for identifying whether a local motion region exists in the shooting picture and for performing image processing based on the first image and the second image.
The hardware layer comprises ISP, camera, display screen and CPU/GPU.
In this embodiment, when the user opens the camera application, the camera application issues a control instruction to the ISP through the camera module and the camera HAL, and the ISP controls the camera to acquire a shooting picture.
The ISP may return the captured picture to the camera application, which displays the corresponding preview image on the display screen.
The ISP may also send the captured picture to the CPU/GPU.
The CPU/GPU identifies, based on an algorithm provided by the algorithm library, whether a local motion region exists in the shooting picture, and if so, sends a photographing policy to the ISP. The photographing policy includes, for example, capturing a first image with a first exposure duration and a second image with a second exposure duration.
When the ISP receives a photographing instruction issued by the camera application through the camera module and the camera HAL, i.e., when the electronic device receives the user's photographing operation, and a photographing policy exists in the ISP, the ISP controls the camera, based on the policy, to capture a first image with the first exposure duration and a second image with the second exposure duration.
After acquiring the first image and the second image, the ISP sends the first image and the second image to the CPU/GPU.
And the CPU/GPU performs image processing based on the first image and the second image based on an algorithm provided by the algorithm library, and acquires a target image of the shot picture.
In some embodiments, after the CPU/GPU acquires the target image, the target image may be returned to the camera application through the camera HAL and the camera module, and the camera application controls the display screen to display it. Alternatively, the CPU/GPU may send the target image directly to the display screen for display after acquiring it.
In some embodiments, after the CPU/GPU acquires the target image, the target image may also be stored in a gallery application.
In fig. 4, the instruction flow is indicated by the flow of the dotted arrow, and the data flow is indicated by the flow of the solid arrow.
In some embodiments, a kernel layer is further included between the HAL layer and the hardware layer, where the kernel layer includes drivers corresponding to devices of the hardware layer, and is used to enable the devices and the instruction issuing and data uploading corresponding to the devices. For example, the kernel layer includes a camera driver, a display driver, a sensor driver, and the like.
This embodiment provides a photographing method that, when a local motion region is identified in the shot picture, collects a first image and a second image of the shot picture with different exposure durations, performs image processing based on the two images, and obtains a target image. Taking a mobile phone as the electronic device, a scene of photographing through the camera application is illustrated. Referring to fig. 5, fig. 5 shows a flowchart of the photographing method, which includes:
S201, the electronic device displays a preview interface on the display screen.
In some embodiments, the photographing method further comprises the electronic device receiving an operation of a user to launch the camera application before the electronic device displays the photographing preview interface on the display screen.
That is, after the user clicks on an icon of the camera application, or activates the camera application by voice or other means, the electronic device displays an activation interface of the camera application, where the preview interface is an activation interface of the camera application.
In some embodiments, the application for capturing the image of the object may also be a third party application. The preview interface is a shooting interface of the third party application.
The preview interface includes preview images of consecutive frames of the shot picture captured by the camera at a default exposure duration. Illustratively, refer to the preview interface of the mobile phone's camera application shown in fig. 5. The preview interface 601 includes a preview image 602.
In some embodiments, the details of the preview interface may be as illustrated with reference to FIG. 6. The preview interface 601 may further include a photographing control 603, a conversion camera control 604, a gallery control 605, different function controls 606, different mode controls 607, and the like. Among other things, different functionality controls 606 include photo, video, slow motion, professional, and other functionality controls. The different mode controls 607 include flash, AI take a picture, set, etc. mode controls.
S202, the electronic device determines whether a local motion area exists in a shooting picture based on the preview image.
In this embodiment, after acquiring the preview images, the electronic device may detect whether a local motion region exists in the shot picture while displaying the preview images. For example, the actual shot picture corresponding to the preview image in fig. 5 includes a person waving a hand in front of a window. The electronic device may acquire two or more adjacent preview images and, based on the position change of each pixel between them, determine whether a region of the shot picture has changed position; if multiple pixels change position between adjacent preview images, the region in which those pixels lie may be regarded as a motion region.
In some embodiments, the position coordinates of the pixels of adjacent preview images may be input to a geometry shader to obtain a vertex image for each preview image, and whether a motion region exists is determined based on the coordinate change of the vertices between adjacent vertex images.
In some embodiments, a motion region is considered a local motion region if the ratio of its area to the total image area is smaller than a predetermined value. For example, the predetermined value is 50%. If the ratio of the motion region's area to the whole image area is less than 50%, the motion region is considered a local motion region. If the ratio is greater than or equal to 50%, the motion region is considered too large to be a local motion region.
S203, when the electronic equipment receives shooting operation of a user, if a local motion area exists in a shooting picture, the electronic equipment acquires a first image of the shooting picture with a first exposure time period and acquires a second image of the shooting picture with a second exposure time period.
The first exposure time period is smaller than the second exposure time period.
In this embodiment, the second exposure duration may be a default exposure duration preset by the electronic device, i.e., the exposure duration commonly used for shooting a static picture. Illustratively, the default exposure duration may be 3 seconds, in which case the first exposure duration is shorter than 3 seconds. The specific value of the first exposure duration may be further determined according to the motion amplitude of the local motion region: to better capture motion details, the more intense the motion in the local motion region, the shorter the first exposure duration should be.
For the first image acquired with a short exposure, the amount of incoming light is small due to the short exposure duration, so its brightness is low; refer to the illustration of the first image in fig. 5. Because a short exposure captures motion details more easily, the local image corresponding to the local motion region is clearer in the first image. Correspondingly, the second image acquired with a normal exposure has a normal amount of incoming light, normal brightness, and higher image quality; refer to the illustration of the second image in fig. 5. The local image corresponding to the local motion region in the second image is blurred.
When the electronic device receives a shooting operation from the user, that is, when the user clicks the shooting control 603 on the preview interface of the camera application or triggers shooting by voice input, if the electronic device determines that a local motion region exists in the shot picture, it considers that multi-frame variable exposure shooting is needed. That is, the first image is shot with a shorter exposure duration and the second image with a normal exposure duration. One frame of the first image may be shot with the short exposure duration, while multiple frames of the second image may be shot with the normal exposure duration.
In some embodiments, the preview image is a continuous multi-frame image acquired by the camera at a normal exposure time. The normal exposure period is the second exposure period.
In some embodiments, in order to reduce data processing on the electronic device, consider that the time interval of variable exposure may allow the local motion region in the shot picture to change, so the motion region of a separately captured second image could differ considerably from that of the first image. In this embodiment, the second image may therefore also be one frame with better sharpness selected from the last frames of the consecutive preview images (the last 1 to 3 preview frames are recommended). The second image obtained this way still corresponds to the second exposure duration, but the time needed to obtain it is reduced, which helps keep the local motion regions of the first image and the second image generally consistent or only slightly different. Image processing on the first image and the second image by the photographing method of this embodiment can then yield a more accurate target image.
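As an illustration of this frame-selection step, the following sketch picks the sharpest of the last few buffered preview frames. The variance-of-Laplacian sharpness measure and the OpenCV-based form are assumptions for illustration; the embodiment only requires selecting one frame with better sharpness from the last 1 to 3 preview frames.

```python
import cv2

def select_second_image(preview_buffer):
    """Pick the sharpest of the last preview frames as the second image.

    Sharpness is estimated as the variance of the Laplacian (an assumed
    measure; the embodiment only asks for the frame with better sharpness).
    """
    candidates = preview_buffer[-3:]  # the last 1 to 3 buffered frames

    def sharpness(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    return max(candidates, key=sharpness)
```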
S204, the electronic equipment acquires a target image of the shooting picture based on the first local image and the second image corresponding to the local motion area of the first image.
Here, referring to a region 2031 shown in fig. 5, the first partial image is an image of the first image corresponding to a partial motion region of the photographed picture.
After the electronic device obtains the first image and at least one second image: the first image is acquired with a short exposure, so the first partial image corresponding to the local motion region is clearer; the second image is acquired with a normal exposure, so its overall image quality is higher. Image fusion processing is performed on the first partial image and the second image to obtain a target image of the shot picture, in which the image corresponding to the local motion region is clear and the overall image quality is higher.
In some embodiments, the electronic device may replace the second partial image corresponding to the local motion region in the second image with the first partial image of the first image, solving the blurring of the second partial image and obtaining a target image whose local region is clear and whose overall quality is high.
In some embodiments, the electronic device may instead perform image processing on the second partial image of the second image based on the first partial image of the first image, for example image restoration with the first partial image as reference, likewise solving the blurring of the second partial image and obtaining a target image whose local region is clear and whose overall quality is high.
The embodiment of the application provides a photographing method in which, after determining that a local motion region exists in the shot picture, the electronic device acquires a first image and a second image of the shot picture with different exposure durations upon receiving the user's shooting operation: the first image with a short exposure duration and the second image with a normal exposure duration. A target image of the shot picture is then acquired based on the first partial image, corresponding to the local motion region of the first image, and the second image. The short exposure ensures that the image of the local motion region is clear, while the normal exposure ensures higher overall image quality, so the resulting target image satisfies both requirements. For scenes whose shot picture contains both dynamic and static elements, the method of this embodiment obtains a target image with clear dynamic elements and high overall quality, meeting user needs and improving the user's shooting experience.
In combination with the embodiment of fig. 5 and the modules of the electronic device shown in fig. 4 (for example, the camera application, processor, ISP, camera, and display screen), the photographing method provided in this embodiment is described below in the form of interaction between modules. Referring to fig. 7, the method includes:
S301, the camera application receives the user's camera start operation and sends a first control instruction to the ISP.
In some embodiments, the user opens a camera application that sends a first control instruction to the ISP in response to a user operation. The first control instruction is used for instructing the ISP to control the camera to collect data of the shooting picture and obtaining a preview image of the shooting picture.
S302, the ISP receives a control instruction and controls the camera to acquire data of a shot picture.
In some embodiments, the ISP receives the control instruction, and controls the camera to collect data of the shot picture, so as to obtain image data of the shot picture.
S303, the ISP acquires a preview image based on the data of the shot picture.
In some embodiments, the ISP performs conventional image processing on the image data of the shot picture to obtain a preview image of the shot picture.
S304, the ISP returns the preview image to the camera application.
The camera application acquires the preview image and then displays the preview image, namely:
S305, the camera application displays the preview interface on the display screen.
The preview interface includes a preview image of a shot picture obtained by a camera, a shooting control, a conversion camera control, a gallery control, different function controls, different mode controls, and the like, and can be shown in fig. 6.
Displaying the preview image of the shot picture on the preview interface enables the user to determine, more intuitively, information such as the composition of the picture to be shot.
In some embodiments, the ISP may also obtain the preview image and send the preview image directly to the processor.
S306, the processor determines whether a local motion area exists in the shooting picture based on the preview image.
In some embodiments, the processor may determine whether the shot frame has a local motion region based on the coordinate positions of the respective pixels in the plurality of adjacent preview images.
In some embodiments, if the processor determines that there is a local motion region in the captured picture, the processor may send a preset photographing policy to the ISP that characterizes the images of the captured picture that need to be captured at different exposure durations.
In some embodiments, if the processor determines that there is no local motion region in the captured picture, no transmission of any data or instructions may be made.
S307, the camera application receives shooting operation of the user and sends a second control instruction to the ISP.
The second control instruction is used for instructing the ISP to control the camera to acquire data of a shooting picture and acquire a target image.
S308, judging whether a preset photographing strategy exists or not by the ISP when the ISP receives the second control instruction. If the preset photographing policy exists, S309 is executed; if the preset photographing policy does not exist, S313 is performed.
Since the processor recognizes whether the shot picture has a local motion area, the processor needs to transmit the recognition result to the ISP. When the ISP receives a second control instruction of the camera application, the camera can be controlled to acquire corresponding data based on a corresponding identification result.
In some embodiments, if the ISP determines that the preset photographing policy exists, the ISP considers that the recognition result is that a local motion area exists in the photographed image, so as to control the camera to collect data with different exposures. If the ISP judges that the preset photographing strategy does not exist, the recognition result is considered to be that a local motion area does not exist in a photographing picture, so that the camera is controlled to collect data through normal exposure.
In some embodiments, the processor may also send a third control instruction to the ISP, and if the ISP receives the third control instruction, control the camera to collect data at different exposures. If the ISP does not receive the third control instruction within the preset time, the camera is controlled to acquire data through normal exposure.
In some embodiments, the processor may also send a photographing identifier to the ISP, where the photographing identifier indicates that the local motion area exists in the photographed image as a result of the identification when the photographing identifier is the first value, so that the ISP controls the camera to collect data with different exposures. And when the photographing identification is a second value, indicating that the identification result is that a local motion area does not exist in the photographing picture, and controlling the camera by the ISP to acquire data through normal exposure.
In some embodiments, the processor may also send the first exposure duration and the second exposure duration to the ISP. For example, the shooting strategy comprises a first exposure time and a second exposure time; for example, the third control instruction carries the first exposure duration and the second exposure duration, and so on.
S309, the ISP controls the camera to acquire first shooting data with a first exposure time period and acquire second shooting data with a second exposure time period.
The second shooting data may be at least one group of shooting data.
S310, the ISP acquires a first image based on the first shooting data and acquires a second image based on the second shooting data.
The ISP performs conventional image processing based on the first photographing data to acquire a first image. The ISP performs conventional image processing based on at least one set of second photographing data to acquire at least one second image.
S311, the ISP sends the first image and the second image to the processor.
S312, the processor acquires a target image of the shooting picture according to the first local image and the second image corresponding to the local motion area of the first image.
Reference is made to the above-described embodiment S204.
In some embodiments, after performing S308:
S313, the ISP controls the camera to acquire third shooting data with the second exposure duration.
The second exposure time is a default exposure time (normal exposure time) of the electronic device, and the quality of the image which can be obtained based on the default exposure time is higher.
If no local motion region exists in the shot picture, the ISP may not receive a preset photographing strategy from the processor, or may receive a fourth control instruction indicating normal shooting, or a photographing identifier indicating normal shooting. In these cases, the ISP controls the camera to acquire the third shooting data with the normal exposure duration, since there is no need to lower the exposure to capture dynamic details of a motion region.
S314, the ISP acquires a target image of the photographing screen based on the third photographing data.
The ISP performs conventional image processing on the third shooting data to acquire the target image. Since no local motion region exists in the shot picture, there is no need to reduce exposure to capture dynamic details, and the target image obtained with the normal exposure duration has higher image quality.
In some embodiments, the ISP may obtain preview images of at least two adjacent frames and send the preview images of the at least two adjacent frames to the processor. The processor determines whether a local motion area exists in the photographed picture based on the acquired preview images of at least two adjacent frames. In some embodiments, the determining, by the electronic device (processor) in step S202, whether the local motion area exists in the photographed image specifically includes, referring to fig. 8:
S2021, the processor acquires a first preview image and a second preview image of the shot picture.
The first preview image and the second preview image are adjacent image frames.
In this embodiment, the ISP controls the camera to continuously collect the data of the shot frames in normal exposure time, so as to form continuous multi-frame preview images, and the continuous multi-frame preview images are displayed on the preview interface of the camera application. At the same time, the ISP may store successive multi-frame preview images to the buffer.
The processor may obtain two consecutive frames of preview images from the buffer. For example, the first preview image may be an N-1 th frame preview image and the second preview image may be an N-th frame preview image. The first preview image and the second preview image shown in fig. 8 have a difference in hand position due to the waving of the human hand.
S2022, the processor obtains a difference image based on a gray level difference between the gray level value of the first pixel in the first preview image and the gray level value of the corresponding second pixel in the second preview image.
The processor obtains the adjacent first preview image and second preview image, and can use a frame difference method to compare the nth frame preview image with the N-1 th frame preview image to determine whether there is a shift in pixel location between adjacent image frames.
In some embodiments, when a moving object exists in consecutive preview frames, there is a gray-level difference between adjacent image frames. For example, the difference image (difference frame) may be obtained by taking the absolute value of the pixel-wise gray-value difference between the N-th and (N-1)-th preview frames; refer to the difference image in fig. 8. The difference image represents the gray-level difference between the N-th frame preview image and the (N-1)-th frame preview image. For pixels of a stationary object (background pixels), the gray-value difference on the difference image is 0, whereas at the contour of a moving object there is a gray-level change, so the gray-value difference of pixels at the contour (motion pixels) is not 0. Based on this principle, a pixel can be judged to be a motion pixel when the absolute value of its gray-value difference exceeds a certain threshold, and detection of the motion region is realized based on the motion pixels.
The difference image between the N-th frame preview image and the (N-1)-th frame preview image may be calculated as:

D_N(x, y) = | f_N(x, y) − f_(N−1)(x, y) |

where f_N(x, y) is the gray value of pixel (x, y) in the N-th frame preview image and f_(N−1)(x, y) is the gray value of the corresponding pixel in the (N-1)-th frame preview image.
In some embodiments, the processor may also perform noise reduction processing on the first preview image and the second preview image, for example applying Gaussian blur to both to eliminate noise interference, so that the difference image is more accurate.
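A minimal sketch of S2022, assuming OpenCV-style images; the 5×5 Gaussian kernel is an assumed value for the noise reduction mentioned above:

```python
import cv2

def difference_image(frame_prev, frame_curr):
    """Compute D_N(x, y) = |f_N(x, y) - f_(N-1)(x, y)| between two
    adjacent preview frames, with Gaussian blur to suppress noise."""
    gray_prev = cv2.GaussianBlur(
        cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    gray_curr = cv2.GaussianBlur(
        cv2.cvtColor(frame_curr, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    return cv2.absdiff(gray_curr, gray_prev)
```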
S2023, if the proportion of pixels in the difference image with gray level difference value larger than a preset threshold value is in a preset ratio range, determining that a local motion area exists in the shooting picture.
In this embodiment, the processor may acquire the pixels in the difference image whose gray-level difference is greater than a preset threshold; refer to the white pixels in the frame-selected region of the binarized difference image in fig. 8. For example, with a maximum gray value of 255, the preset threshold may be 55. The processor acquires the pixels whose gray-value difference exceeds 55 and determines that a local motion region exists in the shot picture if the proportion of such pixels in the whole image is within a preset ratio range. For example, if the preset ratio range is [2%, 28%] and pixels with a gray-level difference greater than 55 make up 25% of the whole image, a local motion region is considered to exist in the shot picture.
In this embodiment, in order to more accurately perform the identification determination of the local motion region, the processor may perform binarization processing on the differential image.
Illustratively, the preset threshold is set to T, where T may be determined based on the maximum gray value of the preview image's pixels. For an 8-bit image, for example, the maximum gray value is 255. T may be a set multiple of this maximum; for example, T is 0.2 times the maximum gray value (0.2 × 255). In some embodiments, the set multiple may take other values, such as 0.3 or 0.4, determined by the actual situation.
The processor binarizes each pixel in the difference image: the gray value of a pixel whose gray value is greater than the preset threshold T is set to the maximum gray value (255), and the gray value of a pixel whose gray value is less than or equal to T is set to 0. That is, the binarized difference image R can be obtained from the following formula:

R(x, y) = 255, if D_N(x, y) > T
R(x, y) = 0,   if D_N(x, y) ≤ T
In the binarized image of the differential image, the point with the gray value of 255 is a motion pixel point, and the point with the gray value of 0 is a background pixel point.
In some embodiments, the processor may count the pixels with gray value 255 (motion pixels) and calculate their proportion of the whole image. If the proportion is within a preset ratio range, for example [3%, 25%], i.e., the motion pixels make up between 3% and 25% of the whole image, it is determined that a local motion region exists in the shot picture. In some embodiments, the preset ratio range may be set to another range, for example [5%, 30%], determined by the actual situation.
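The binarization and ratio test of S2023 can be sketched as follows, using the example values above (T = 0.2 × 255, ratio range [3%, 25%]); both are tunable parameters:

```python
import cv2
import numpy as np

def detect_local_motion(diff, t_mult=0.2, ratio_range=(0.03, 0.25)):
    """Binarize the difference image with threshold T and decide whether
    the motion-pixel proportion falls in the preset ratio range."""
    T = t_mult * 255                      # e.g. 0.2 x the maximum gray value
    _, binary = cv2.threshold(diff, T, 255, cv2.THRESH_BINARY)
    ratio = np.count_nonzero(binary) / binary.size
    has_local_motion = ratio_range[0] <= ratio <= ratio_range[1]
    return has_local_motion, binary
```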
In some embodiments, the processor performs connected domain analysis on a region formed by pixels with gray differences greater than a preset threshold value, and obtains at least one connected region formed by adjacent pixels with the same gray differences, so as to obtain at least one local motion region corresponding to the connected region.
The connected domain analysis refers to that for a binarized image, adjacent pixels with the same pixel value are found out and marked to form a connected region, so that a local motion region formed by adjacent motion pixel points can be determined.
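A sketch of the connected-domain analysis, assuming the binarized image from the previous step; cv2.connectedComponentsWithStats and the small-area filter are implementation choices, not mandated by the embodiment:

```python
import cv2

def motion_regions(binary, min_area=50):
    """Label connected regions of motion pixels and return one bounding
    box (x, y, width, height) per local motion region."""
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(
        binary, connectivity=8)
    boxes = []
    for label in range(1, num_labels):    # label 0 is the background
        x, y, w, h, area = stats[label]
        if area >= min_area:              # assumed filter against noise specks
            boxes.append((x, y, w, h))
    return boxes
```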
In the embodiment of the application, the processor can acquire the difference images of the adjacent preview images of the shot picture, identify the motion pixel points and the background pixel points based on the binarized images of the difference images, and perform connected domain analysis based on the motion pixel points, so that whether a local motion area exists in the shot picture can be effectively determined. And when the local motion area is determined to exist, multi-frame variable exposure shooting can be performed, and a first image and a second image are obtained.
In some embodiments, referring to fig. 9, fig. 9 shows a schematic diagram of connected domains of a binarized image. And carrying out connected domain analysis on the binarized image to obtain a connected region formed by adjacent pixels with the same gray level difference value. As in fig. 9 (a), there is a dashed box in the binarized image. As in (b) of fig. 9, there are two dashed boxes in the binarized image.
In the binarized image shown in fig. 9, the connected region (the dotted-line frame selection region) formed by adjacent pixels of the same gradation difference value is a local motion region in the photographed picture.
In some embodiments, the processor may further mark adjacent pixels of the same gray scale difference value in the binarized image to obtain one or more pixel collection areas. The pixel collection area in the binarized image is the local motion area in the shooting picture.
In some embodiments, after the processor obtains the motion pixels, performs connected domain analysis, and obtains the region they form, the processor may additionally verify the region. Referring to fig. 10, the photographing method provided in this embodiment further includes:
S2024, the processor acquires a target area of the local motion region.
Wherein the target area is larger than the actual area of the local motion area.
In some embodiments, the processor obtains the local motion region formed by the motion pixels based on the binarized difference image; refer to fig. 11. Fig. 11 (a) shows an original frame; fig. 11 (b) shows the binarized difference image calculated from that frame and the adjacent previous or next frame. In fig. 11 (b), the region formed by the motion pixels is the white region, whose pixels have the maximum gray value (255), while the region formed by the background pixels is the black region, whose pixels have gray value 0. Fig. 11 (c) shows the frame-selected region of the local motion region; the area of the frame-selected region is the target area.
In some embodiments, the processor may construct a frame-selected region larger than the local motion region, centered on the local motion region's center coordinates or geometric center. Considering the motion amplitude of the moving object and the delay between preview frames, the frame-selected region should be larger than the local motion region, for example 1.3 times, 2 times, or even more than 2 times the area the local motion region occupies; refer to the frame-selected region corresponding to the target area marked in fig. 10. In general, the larger the motion amplitude (the larger the motion estimation value), the larger the frame-selected region. The coordinates of the frame-selected region of a local motion region may be expressed as (x, y, width, height), where x and y are the coordinates of the upper-left corner of the rectangular frame in the whole image, and width and height are the length and width of the rectangular frame, respectively.
If a plurality of local motion areas exist in the first image, the sum of the areas of the frame selection areas of all the local motion areas is the target area.
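A sketch of the target-area computation of S2024; the 1.3× growth factor follows the example multiple above, and summing over all regions follows the multi-region rule just described:

```python
def target_area(boxes, scale=1.3):
    """Sum of the enlarged frame-selection areas of all local motion
    regions; each box (x, y, w, h) is grown around its center by scale."""
    return sum((w * scale) * (h * scale) for _, _, w, h in boxes)

def needs_variable_exposure(boxes, image_area, area_threshold=0.5):
    """True when the motion really is local: the target area stays below
    e.g. 50% of the whole image (the preset area threshold)."""
    return target_area(boxes) < area_threshold * image_area
```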
S2025, if the target area is smaller than the preset area threshold, the processor determines that a local motion area exists, sends a preset photographing strategy to the ISP, and instructs the ISP to control the camera to acquire a first image with a first exposure time and acquire a second image with a second exposure time.
In this embodiment, if the target area is less than the preset area threshold, indicating that the motion area is indeed some small portion of the entire image, in which case the processor determines that there is a local motion area. And sending a preset photographing strategy to the ISP, and indicating the ISP to control the camera to acquire a first image in a first exposure time period and acquire a second image in a second exposure time period. The preset area threshold may be 50% of the entire image area.
S2026, if the target area is greater than or equal to the preset area threshold, the processor does not send a preset photographing strategy to the ISP, and the ISP controls the camera to acquire the target image in the first exposure time.
In the present embodiment, if the target area is greater than or equal to the preset area threshold, it is indicated that the motion area occupies almost the entire image. In this case, the processor may acquire the moving image directly in short exposure without executing the photographing policy corresponding to the photographing method provided in the present embodiment. That is, the processor does not send a preset photographing policy to the ISP, which controls the camera to acquire the target image with the first exposure duration. For example, the preset area threshold may be 50% of the entire image area, and since the motion area is greater than 50% of the entire image area, the ISP may acquire the target image with a short exposure time period, ensuring motion details of the moving object as much as possible.
In this embodiment, after obtaining the motion region, the processor may further determine, from the target area, the proportion of the whole image that the motion region occupies. The shooting strategy of this embodiment, namely acquiring the first image with the first exposure duration and the second image with the second exposure duration, is thus executed only in scenes where a local motion region, rather than a large-area motion region, is determined to exist, so as to capture a clear, high-quality image. For scenes with a large motion region in the picture, the processor may instruct the ISP to acquire the target image directly with a short exposure duration to preserve more motion details.
In some embodiments, after determining that a local motion region exists in the shot picture, the processor may calculate a motion estimation value of the local motion region and use it to determine the first exposure duration for shooting the first image. The determination includes:
S401, the processor acquires a first preview local image corresponding to the local motion area of the first preview image and a second preview local image corresponding to the local motion area of the second preview image.
In this embodiment, the processor may acquire the adjacent first preview image and second preview image from the buffer, and acquire the first preview partial image of the first preview image and the second preview partial image of the second preview image based on the above-described manner of detecting the partial motion area.
S402, the processor determines a motion estimation value of the local motion area according to the coordinate difference between a third pixel in the first preview local image and a corresponding fourth pixel in the second preview local image.
In this embodiment, the processor may determine the motion estimation value of the local motion area based on a coordinate difference between a third pixel in a first preview local image of the adjacent first preview image and a fourth pixel in a second preview local image of the second preview image.
For example, the third pixel of the first preview partial image may be the center coordinate of the first preview partial image, the fourth pixel of the second preview partial image may be the center coordinate of the second preview partial image, and the motion estimation value of the partial motion area may be determined based on the center coordinate of the first preview partial image and the center coordinate of the second preview partial image. The difference value between the center coordinates of the first preview partial image and the center coordinates of the second preview partial image is the motion estimation value.
Or in some embodiments, the processor may obtain at least two binarized motion regions directly based on a difference image of a plurality of adjacent first and second preview images. A motion estimation value of the local motion region is determined based on the center coordinates of the at least two binarized motion regions. The difference value of the center coordinates of the two motion areas is the motion estimation value.
Referring to fig. 12, fig. 12 shows a schematic diagram for calculating a motion estimation value of a motion region. For example, the motion estimation value of the motion region may be calculated based on the center coordinates of the motion region in the binarized image. Fig. 12 (a) gives an illustration of a binarized image including a motion region.
For a motion region, the maximum and minimum values of the ordinate, denoted y_max and y_min, and the maximum and minimum values of the abscissa, denoted x_max and x_min, are obtained. Referring to fig. 12 (b), fig. 12 (b) shows the marked points for the maximum and minimum abscissa/ordinate of a motion region. The center point coordinates (x, y) can be calculated from the following formula:

x = (x_max + x_min) / 2
y = (y_max + y_min) / 2
referring to fig. 12 (c), fig. 12 (c) shows an illustration of a mark point of a center point of a movement region.
In some embodiments, the coordinates of the center points of the same motion region in two adjacent frames are (x_1, y_1) and (x_2, y_2), respectively. Assuming the time interval between adjacent preview images is t milliseconds, the calculated motion estimation value v may be expressed as:

v = sqrt((x_2 − x_1)^2 + (y_2 − y_1)^2) / t
In some embodiments, when multiple motion regions exist in one image, the processor may, based on similarity and region area, select the motion region with the largest area to calculate the motion estimation value.
In some embodiments, if the center coordinates of the motion region are unchanged, for example a windmill rotating in the shot picture, the calculated center coordinates of the motion region are the same across image frames of different time sequences. In this case the motion estimation value cannot be calculated from the center coordinates, and the first exposure duration takes a preset fixed value, for example an exposure of -4 EV.
In some embodiments, the processor may also perform the calculation of the motion estimation value based on the contour coordinates of the motion region. For example, a difference in coordinates of left boundaries (right boundary/upper boundary/lower boundary) of two adjacent motion regions is calculated as a motion estimation value, and so on.
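A sketch of S401-S402 using the center-coordinate method above; regions are given as bounding boxes, and the Euclidean distance over the frame interval follows the formula for v:

```python
def motion_estimate(box_prev, box_curr, t_ms):
    """Motion estimation value v from the center points of the same
    motion region in two adjacent frames, t_ms milliseconds apart."""
    def center(box):
        x, y, w, h = box
        # equivalent to ((x_min + x_max) / 2, (y_min + y_max) / 2)
        return (x + w / 2.0, y + h / 2.0)

    (x1, y1), (x2, y2) = center(box_prev), center(box_curr)
    return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 / t_ms
```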
S403, the processor determines a first exposure time according to the motion estimation value.
The larger the motion estimation value is, the shorter the first exposure time is.
The motion estimation value may indicate to some extent the movement amplitude and movement speed of the object in the photographed picture, and the greater the movement amplitude and movement speed of the object in the photographed picture, the shorter the exposure time should be, so that the details of the motion of the moving object can be more accurately obtained. The first exposure time period may be 1/N of the second exposure time period, and the larger the motion estimation value is, the larger the N value is.
In some embodiments, the first exposure duration is at least 1/64 of the second exposure duration.
To ensure image quality for short frames, in some embodiments, the first exposure duration is at least 1/64 of the second exposure duration. I.e. the maximum value of N is 64.
In other embodiments, the processor may preset the correspondence of the motion estimation value to the N value, thereby determining the value of N based on the motion estimation value.
In some embodiments, the processor determines the first exposure duration from the motion estimation value, including:
S4031, the processor determines a first amplitude value of the local motion region based on a diagonal length of a field angle of the photographed picture.
In this embodiment, the processor acquires the diagonal length of the field angle fov of the photographed picture. For example, a distance of 1/q of the diagonal length fov may be set to a first amplitude value of the local motion region, which is also the maximum amplitude value Lmax of the local motion region. Where, for example, q may be 5. The processor may also set the 1/p distance of the diagonal length to a second amplitude value Ln of the local motion zone, the second amplitude value being the median amplitude value of the local motion zone. Where, for example, p may be 10.
S4032, the processor takes the product of the motion estimation value and the preset exposure interval as a first parameter.
The preset exposure interval refers to a time interval between changing exposure durations, which is a preset value t.
S4033, if the first parameter is greater than or equal to the first amplitude value, the processor determines that the first exposure duration is the first duration. If the first parameter is less than or equal to the second amplitude value, the processor determines that the first exposure duration is the second duration. Wherein the second amplitude value is smaller than the first amplitude value. If the first parameter is greater than the second amplitude value and less than the first amplitude value, the processor determines that the first exposure time period is a third time period.
The third time length is longer than the first time length and shorter than the second time length; the first time period, the second time period and the third time period are determined according to different multiples of the second exposure time period.
In this embodiment, with motion estimation value v and preset exposure interval t, the product v×t is calculated as the first parameter, which represents the motion amplitude of the motion region within the duration corresponding to the preset exposure interval. The processor then determines the first exposure duration in the following cases:

The first case: if v×t ≥ Lmax, the processor sets the first exposure duration to the first duration. The motion estimation value is large and the product reaches or exceeds the maximum amplitude value, so the first exposure duration takes the shortest exposure duration. Illustratively, the first duration may be 1/64 of the second exposure duration.

The second case: if v×t ≤ Ln, the processor sets the first exposure duration to the second duration. The motion estimation value is small and the product is at most the median amplitude value, so the first exposure duration can be appropriately longer. Illustratively, the second duration may be 1/16 of the second exposure duration.

The third case: if Ln < v×t < Lmax, the processor sets the first exposure duration to the third duration. The motion estimation value is moderate and the product lies between the median amplitude value and the maximum amplitude value, so the third duration can be greater than 1/64 and less than 1/16 of the second exposure duration.
In some embodiments, the third duration is interpolated between 1/16 and 1/64 of the second exposure duration according to the motion amplitude. With the third duration set to 1/E of the second exposure duration, the relationship between E, the median amplitude value Ln, and the maximum amplitude value Lmax may be expressed as:

(E − 16) / (64 − 16) = (v×t − Ln) / (Lmax − Ln)

Then, E can be expressed as:

E = 16 + 48 × (v×t − Ln) / (Lmax − Ln)
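Putting S4031-S4033 together, a sketch of the exposure selection; q = 5, p = 10, and the 1/16 to 1/64 bounds follow the example values above, and the linear interpolation of E matches the reconstructed formula:

```python
def first_exposure_duration(v, t, fov_diagonal, second_exposure, q=5, p=10):
    """Choose the first (short) exposure duration from the motion
    estimate v and the preset exposure interval t."""
    l_max = fov_diagonal / q        # first (maximum) amplitude value, Lmax
    l_n = fov_diagonal / p          # second (median) amplitude value, Ln
    amplitude = v * t               # motion within one exposure interval

    if amplitude >= l_max:
        E = 64                      # first duration: shortest exposure
    elif amplitude <= l_n:
        E = 16                      # second duration
    else:                           # third duration, linearly interpolated
        E = 16 + 48 * (amplitude - l_n) / (l_max - l_n)
    return second_exposure / E
```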
in some embodiments, the processor obtains a target image of a photographed picture based on a first partial image and a second image corresponding to a partial motion area of the first image, referring to fig. 13, including:
S2041, the processor acquires a first gray-scale image of the first image and a second gray-scale image of the second image.
The processor may sum up the red, green, and blue components of each pixel in the first image and then average the sum to obtain a first gray scale image of the first image. The process is similar for the second image.
In some embodiments, the processor may instead input the first image to a preset image grayscale processing model to obtain the first gray-scale image of the first image. The process is similar for the second image.
S2042, the processor performs brightness alignment on the first gray level image and the second gray level image to obtain a first gray level image with the brightness aligned.
In some embodiments, since the brightness of the first image acquired with the short exposure is low, brightness alignment may be performed on the first image before image processing of the first image and the second image. After the first gray-scale image of the first image is obtained, a brightness alignment operation is performed on it with the second image as the reference image, yielding the brightness-aligned first gray-scale image.
Referring to fig. 13, the luminance value ranges of the first gray-scale image and the second gray-scale image before the luminance alignment are greatly different, and the luminance value ranges of the first gray-scale image and the second gray-scale image after the luminance alignment process are less different, and may be in the same luminance value range.
In some embodiments, the processor may obtain the ratio between the first and second exposure durations. For example, the first exposure duration is 1/G of the second exposure duration. The processor multiplies the gray value of each pixel in the first gray-scale image by G to obtain the target gray value of each pixel, i.e., target gray value = gray value × G. The processor then obtains the brightness-aligned first gray-scale image from the target gray values.
In this embodiment, since the second image acquired with the second exposure time period has a larger light incoming amount, higher brightness and higher image quality, the brightness alignment is performed on the first image, so that the brightness difference between the first image and the second image is further eliminated, and the processing error of the subsequent image processing based on the first image and the second image is reduced.
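A sketch of the brightness alignment of S2042, assuming 8-bit gray images; clipping to 255 is an assumed guard against overflow:

```python
import numpy as np

def align_brightness(gray_first, exposure_ratio_g):
    """Multiply each gray value of the short-exposure image by
    G = second exposure / first exposure, then clip to the 8-bit range."""
    aligned = gray_first.astype(np.float32) * exposure_ratio_g
    return np.clip(aligned, 0, 255).astype(np.uint8)
```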
S2043, the processor performs image registration on the first gray level image and the second gray level image with the brightness aligned, and a homography matrix is obtained.
The homography matrix is used for representing the corresponding relation between each pixel of the first gray level image and each pixel of the second gray level image after brightness alignment.
In this embodiment, the processor performs feature point matching on each pixel point in the first gray-scale image and each pixel point in the second gray-scale image after brightness alignment, so as to realize image registration of the first gray-scale image and the second gray-scale image. Illustratively, the image registration algorithm includes a gray scale and template based registration algorithm, a feature based matching algorithm, and the like.
In this embodiment, the processor performs registration of the second gray level image with the first gray level image as a matching frame, and obtains a homography matrix representing a correspondence between each pixel of the first gray level image and each pixel of the second gray level image.
In some embodiments, the image registration of the first gray-scale image and the second gray-scale image performed by the processor may be registration based on the whole image, or registration based on partial images, for example based on the partial images corresponding to the motion regions of the two gray-scale images.
In some embodiments, the processor may perform image registration between the brightness-aligned first gray-scale image and one or more second gray-scale images; registering against multiple second gray-scale images makes the homography matrix more accurate.
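A sketch of S2043 using a feature-based matcher; ORB with a brute-force Hamming matcher and RANSAC is one concrete choice among the gray-scale/template-based and feature-based algorithms the embodiment allows:

```python
import cv2
import numpy as np

def estimate_homography(gray_ref, gray_mov):
    """Register gray_mov (the brightness-aligned first gray image) against
    gray_ref (the second gray image) and return the homography matrix."""
    orb = cv2.ORB_create(2000)
    kp_ref, des_ref = orb.detectAndCompute(gray_ref, None)
    kp_mov, des_mov = orb.detectAndCompute(gray_mov, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_mov),
                     key=lambda m: m.distance)[:500]

    pts_mov = np.float32([kp_mov[m.trainIdx].pt for m in matches])
    pts_ref = np.float32([kp_ref[m.queryIdx].pt for m in matches])
    H, _ = cv2.findHomography(pts_mov.reshape(-1, 1, 2),
                              pts_ref.reshape(-1, 1, 2),
                              cv2.RANSAC, 5.0)
    return H    # maps pixels of the first image into the second image
```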
S2044, the processor performs affine transformation on the first image based on the homography matrix and acquires a first partial image of the first image.
In this embodiment, after the image registration is completed, the processor performs affine transformation on the first image according to the calculated homography matrix. Affine transformation covers two-dimensional plane image processing such as image rotation, translation, scaling, and flipping. The processor performs rotation, translation, scaling, and similar operations on the first image to obtain a transformed first image, and then acquires the first partial image corresponding to the local motion region based on the transformed first image and the local motion region determined in the shot picture.
In some embodiments, the quality of the first image obtained by short exposure may be poor, and after obtaining the transformed first partial image, the processor may further perform image post-processing such as noise reduction processing and super-division processing on the first partial image, so as to improve the image quality of the first partial image.
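A sketch of S2044: warping the first image with the homography and cropping the local motion region; cv2.warpPerspective is the homography warp corresponding to the transformation described above:

```python
import cv2

def extract_first_partial_image(first_image, homography, box):
    """Warp the first image into the second image's coordinates, then
    crop the frame-selected local motion region (x, y, w, h)."""
    h_img, w_img = first_image.shape[:2]
    warped = cv2.warpPerspective(first_image, homography, (w_img, h_img))
    x, y, w, h = box
    return warped[y:y + h, x:x + w]   # denoising/super-resolution could follow
```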
S2045, the processor acquires the target image based on the first partial image and the second image.
In some embodiments, the processor may replace the second partial image in the second image with the first partial image to obtain the target image.
Or, in some embodiments, the processor may extract the second partial image from the second image, remove it, and fuse and splice the remaining area of the second image with the first partial image to obtain the target image.
In this embodiment, the processor performs brightness alignment on the first gray-scale image and the second gray-scale image, which further reduces the brightness difference to a certain extent, and performs image registration on the brightness-aligned gray-scale images, which makes the calculated homography matrix more accurate. Performing affine transformation of the first image based on this homography matrix makes the obtained target image more accurate and of higher image quality.
In some embodiments, the processor, after obtaining the brightness aligned first gray scale image, obtains a homography matrix, referring to fig. 14, comprising:
S20431, the processor performs histogram equalization on the brightness-aligned first gray-scale image and the second gray-scale image to obtain a first equalized image corresponding to the first gray-scale image and a second equalized image corresponding to the second gray-scale image.
In this embodiment, before image registration is performed, the gray-scale images may be subjected to histogram equalization, changing their gray values from a relatively concentrated gray-scale interval to a uniform distribution over the entire gray-scale range. Histogram equalization algorithms include, for example, global histogram equalization, local histogram equalization, and histogram equalization based on frequency division and fusion.
The global histogram equalization algorithm maps the gray levels of the input image through a transformation function so that every gray level of the output image is relatively uniformly distributed, enhancing the image contrast. Compared with the global approach, the local histogram equalization algorithm can better enhance local image detail. The algorithm based on frequency division and fusion separates the image's high-frequency and low-frequency components, performs histogram equalization on the low-frequency component, applies linear weighted enhancement to the high-frequency component, and then fuses the two, which avoids the loss of image detail and the noise amplification that plain histogram equalization can cause. This embodiment does not limit the specific histogram equalization algorithm.
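For illustration, the global and local variants can be invoked as follows; CLAHE stands in for the local algorithm, and the clip-limit/tile parameters are assumed defaults:

```python
import cv2

def equalize_gray(gray, use_local=False):
    """Histogram-equalize a gray image: global equalizeHist, or CLAHE
    as a local histogram equalization variant."""
    if use_local:
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return clahe.apply(gray)
    return cv2.equalizeHist(gray)
```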
S20432, the processor performs image registration on the first equalized image and the second equalized image to obtain the homography matrix.
In this embodiment, the processor obtains the first balanced image and the second balanced image with gray values in all gray ranges, and then performs image registration to obtain the homography matrix. Reference is made to S2043, which is not described here in detail.
In some embodiments, after obtaining the homography matrix, the processor may also perform brightness alignment on the first image and the second image themselves, and acquire the target image based on the brightness-aligned first image, the second image, and the homography matrix. Building on the homography matrix obtained in the embodiment of fig. 13 or fig. 14, and referring to fig. 15, the method includes:
S2046, the processor performs brightness alignment on the first image and the second image to obtain a first image with the brightness aligned.
The first image is an image obtained by short exposure, and has a characteristic of low light input and low brightness. Referring to the first image and the second image shown in fig. 15, it is apparent that the luminance value range of the first image is greatly different from the luminance value range of the second image, and the luminance of the first image is relatively low. Therefore, before performing affine transformation of the first image and the second image based on the homography matrix, it is also necessary to perform luminance alignment on the first image and the second image.
Referring to the brightness-aligned first image in fig. 15, the difference between the brightness value ranges of the first image and the second image is smaller, and the brightness of the aligned first image is higher than that of the original first image. For example, as in S2042, the pixel value of each pixel in the first image may be multiplied by the exposure ratio G (the multiple by which the second exposure duration exceeds the first) to obtain the target pixel value of each pixel. The pixel value may be the average of the pixel's values in the red, blue, and green channels.
Referring to fig. 15, the brightness of the first image and the second image after brightness alignment are within the same brightness range.
S2047, the processor performs affine transformation on the brightness-aligned first image based on the homography matrix and acquires the first partial image.
After performing brightness alignment on the original first image, the processor performs affine transformation on the brightness-aligned first image, for example image rotation, translation, scaling, and flipping, to obtain a transformed first image. The first partial image corresponding to the local motion region is then acquired based on the transformed first image and the local motion region determined in the shot picture; for example, the transformed first image may be cropped to the local motion region. Refer to S2044, not repeated here.
S20451, the processor replaces the second partial image of the second image with the first partial image to acquire the target image.
In this embodiment, the processor may directly replace the second partial image of the second image with the first partial image to obtain the target image. Referring to fig. 15, the processor crops the first partial image out of the first image and pastes it over the second partial image in the second image, thereby obtaining the target image. Since the first partial image, obtained by short exposure, shows the hand clearly while the corresponding second partial image is blurred, replacing the blurred second partial image with the clear first partial image yields a target image in which the hand is clear and the overall image quality is high. The target image thus keeps the overall image quality at the level of a capture without exposure reduction, while the fused short-exposure region solves the image blurring in the local motion area.
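In sketch form, the direct replacement of S20451 looks as follows; the optional feathered blend is an addition of this sketch (to hide the seam between the two exposures), not a step of the embodiment:

```python
import cv2
import numpy as np

def replace_region(normal_img, short_patch, motion_box, feather=0):
    """Paste the clear short-exposure patch over the blurred region
    of the normal-exposure image to form the target image."""
    x, y, bw, bh = motion_box
    target = normal_img.copy()
    if feather <= 0:
        # Direct replacement, exactly as described in S20451.
        target[y:y + bh, x:x + bw] = short_patch
        return target
    # Optional: blend a feathered border so the seam is less visible.
    mask = np.zeros((bh, bw), np.float32)
    mask[feather:-feather, feather:-feather] = 1.0
    mask = cv2.GaussianBlur(mask, (2 * feather + 1, 2 * feather + 1), 0)[..., None]
    roi = target[y:y + bh, x:x + bw].astype(np.float32)
    blended = mask * short_patch.astype(np.float32) + (1.0 - mask) * roi
    target[y:y + bh, x:x + bw] = blended.astype(np.uint8)
    return target
```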
In this embodiment, when a local motion area is detected in the shooting picture based on the preview image and the user taps to shoot, multi-frame variable exposure shooting is adopted: the first image is acquired with the first exposure duration and the second image with the second exposure duration, and multi-frame registration of the first image and the second image is performed. For the local motion area, the brightness of the first image obtained by short exposure is aligned to the second image obtained by normal exposure, the first partial image is cropped according to the coordinates of the local motion area, and the first partial image replaces the second partial image in the second image. This preserves the sharpness and dynamic range of the whole image, while the short exposure avoids motion blur.
Based on the photographing method provided by this embodiment, the loss of usable images caused by local motion can be effectively reduced while the overall image quality is preserved. The problems of motion blur and image-quality degradation caused by local motion in the picture during photographing are thus solved, improving the user's photographing experience.
In some embodiments, the electronic device may prompt the user to start the multi-frame variable exposure shooting mode to take a picture by displaying a corresponding control on the display screen.
In some embodiments, the photographing mode of the camera application of the electronic device includes a multi-frame variable exposure photographing mode, and the photographing method further includes:
If the local motion area exists in the shooting picture, the electronic device displays a guide control on the preview interface.
The guide control is used to indicate whether to start the multi-frame variable exposure shooting mode.
In some embodiments, the electronic device may display the guide control on the preview interface of the camera upon determining that a local motion area exists in the shooting picture. Referring to fig. 16, the electronic device displays a guide control 1601 in the form of a floating frame at the top of the preview interface.
In response to the user's touch operation on the guide control, the electronic device acquires the first image with the first exposure duration and the second image with the second exposure duration.
In some embodiments, if the user turns on the multi-frame variable exposure shooting mode through the guide control, the electronic device may display a prompt 1602; referring to fig. 16, the prompt 1602 is used to inform the user that the multi-frame variable exposure shooting mode has been turned on. When the electronic device then receives a photographing operation, it acquires the first image with the first exposure duration and the second image with the second exposure duration.
In this embodiment, displaying the guide information increases interaction with the user, and the user's touch operation further confirms that the multi-frame variable exposure shooting mode should be used, so that the resulting target image better meets the user's expectations and the user's photographing experience is improved.
In some embodiments, the electronic device may instead display a prompt control on the preview interface of the camera after determining that a local motion area exists in the shooting picture. Referring to fig. 17, a prompt control 1701 may be displayed in alignment with the mode control of the preview area.
In this embodiment, displaying the prompt control lets the user perceive that the multi-frame variable exposure shooting mode has been started, so that the resulting target image meets the user's expectations and the user's photographing experience is improved.
Embodiments of the present application also provide a system-on-a-chip (SoC), as shown in fig. 18, including at least one processor 701 and at least one interface circuit 702. The processor 701 and the interface circuit 702 may be interconnected by wires. For example, the interface circuit 702 may be used to receive signals from other devices (e.g., a memory of an electronic device), or to send signals to other devices (e.g., to the processor 701 or a camera of an electronic device). The interface circuit 702 may, for example, read instructions stored in a memory and send the instructions to the processor 701. The instructions, when executed by the processor 701, may cause the electronic device to perform the steps of the embodiments described above. Of course, the SoC may also include other discrete devices, which is not specifically limited in the embodiments of the present application.
Embodiments of the present application also provide a computer-readable storage medium including computer instructions that, when executed on an electronic device described above, cause the electronic device to perform the functions or steps performed by the electronic device 100 in the method embodiments described above.
Embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the functions or steps performed by the electronic device 100 in the method embodiments described above. For example, the computer may be the electronic device 100 described above.
Those skilled in the art can clearly understand from the above description that, for convenience and brevity, only the division into the functional modules described above is illustrated by example; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely illustrative of specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A photographing method, characterized in that the method is applied to electronic equipment, the electronic equipment comprises a camera and a display screen, and the method comprises the following steps:
The electronic equipment displays a preview interface through the display screen; the preview interface comprises a preview image of the shooting picture obtained by the camera;
the electronic equipment determines, based on the preview image, whether a local motion area exists in the shooting picture;
The electronic equipment acquires a target area of the local motion area; the target area is larger than the local motion area;
When the electronic equipment receives a shooting operation of a user, if the local motion area exists in the shooting picture and the target area is smaller than a preset area threshold, the electronic equipment acquires a first image of the shooting picture with a first exposure duration and acquires a second image of the shooting picture with a second exposure duration, wherein the first exposure duration is shorter than the second exposure duration;
The electronic equipment acquires a target image of the shooting picture based on a first partial image corresponding to the local motion area in the first image and a second partial image corresponding to the local motion area in the second image, wherein: the electronic equipment acquires a first grayscale image of the first image and a second grayscale image of the second image; the electronic equipment performs brightness alignment on the first grayscale image and the second grayscale image to obtain a brightness-aligned first grayscale image; the electronic equipment performs image registration on the brightness-aligned first grayscale image and the second grayscale image to obtain a homography matrix, the homography matrix being used to represent the correspondence between each pixel of the brightness-aligned first grayscale image and each pixel of the second grayscale image; the electronic equipment performs affine transformation on the first image based on the homography matrix to obtain the first partial image of the first image; and the electronic equipment acquires the target image based on the first partial image and the second image.
2. The method of claim 1, wherein the electronic equipment determining, based on the preview image, whether a local motion area exists in the shooting picture comprises:
the electronic equipment acquires a first preview image and a second preview image of the shooting picture; the first preview image and the second preview image are adjacent image frames;
the electronic equipment acquires a difference image based on a gray difference value between the gray value of a first pixel in the first preview image and the gray value of a corresponding second pixel in the second preview image;
and if the number of pixels in the difference image whose gray difference value is larger than a preset threshold value is within a preset numerical range, determining that the local motion area exists in the shooting picture.
3. The method according to claim 2, wherein the method further comprises:
and the electronic equipment performs connected-domain analysis on the region formed by the pixels whose gray difference value is larger than the preset threshold value, obtaining at least one connected region formed by the adjacent such pixels, and thereby obtaining at least one local motion area corresponding to the connected region.
4. The method of claim 2 or 3, wherein the method further comprises:
the electronic equipment acquires a first preview partial image corresponding to the local motion area in the first preview image and a second preview partial image corresponding to the local motion area in the second preview image;
the electronic equipment determines a motion estimation value of the local motion area according to the coordinate difference between a third pixel in the first preview partial image and a corresponding fourth pixel in the second preview partial image;
the electronic equipment determines the first exposure duration according to the motion estimation value; the larger the motion estimation value, the shorter the first exposure duration.
5. The method of claim 1 or 2, wherein the first exposure duration is longer than 1/64 of the second exposure duration.
6. The method of claim 4, wherein the determining, by the electronic device, the first exposure duration based on the motion estimation value comprises:
the electronic equipment determines a first amplitude value and a second amplitude value of the local motion area based on the diagonal length of the view angle of the shot picture, wherein the second amplitude value is smaller than the first amplitude value;
the electronic equipment takes the product of the motion estimation value and a preset exposure interval as a first parameter;
If the first parameter is greater than or equal to the first amplitude value, the electronic device determines that the first exposure duration is a first duration;
If the first parameter is smaller than or equal to the second amplitude value, the electronic device determines that the first exposure duration is a second duration;
If the first parameter is greater than the second amplitude value and less than the first amplitude value, the electronic device determines that the first exposure duration is a third duration;
wherein the third duration is longer than the first duration and shorter than the second duration; the first duration, the second duration, and the third duration are determined according to different multiples of the second exposure duration.
7. The method of claim 1, wherein the electronic equipment performing brightness alignment on the first grayscale image and the second grayscale image to obtain the brightness-aligned first grayscale image comprises:
the electronic equipment obtains the ratio of the second exposure duration to the first exposure duration;
the electronic equipment multiplies the gray value of each pixel in the first grayscale image by the ratio to obtain a target gray value of each pixel;
and the brightness-aligned first grayscale image is obtained based on the target gray value of each pixel.
8. The method of claim 7, wherein the electronic equipment performing image registration on the brightness-aligned first grayscale image and the second grayscale image to obtain the homography matrix comprises:
the electronic equipment performs histogram equalization on the brightness-aligned first grayscale image and the second grayscale image to obtain a first equalized image corresponding to the brightness-aligned first grayscale image and a second equalized image corresponding to the second grayscale image;
and the electronic equipment performs image registration on the first equalized image and the second equalized image to acquire the homography matrix.
9. The method of claim 1, wherein the electronic equipment performing affine transformation on the first image based on the homography matrix to obtain the first partial image of the first image comprises:
the electronic equipment performs brightness alignment on the first image and the second image to obtain a brightness-aligned first image;
the electronic equipment performs affine transformation on the brightness-aligned first image based on the homography matrix to obtain the first partial image; the affine transformation includes at least one operation of rotation, translation, scaling, and flipping.
10. The method of claim 1, wherein the electronic equipment acquiring the target image based on the first partial image and the second image comprises:
the electronic equipment replaces the second partial image of the second image with the first partial image to acquire the target image.
11. The method of claim 1 or 2, wherein the shooting mode of the electronic equipment comprises a multi-frame variable exposure shooting mode, and the method further comprises:
if the local motion area exists in the shooting picture, the electronic equipment displays a guide control on the preview interface; the guide control is used for indicating whether the multi-frame variable exposure shooting mode is started;
wherein the electronic equipment acquiring the first image of the shooting picture with the first exposure duration and the second image of the shooting picture with the second exposure duration comprises:
the electronic equipment, in response to a touch operation of the user on the guide control, acquires the first image with the first exposure duration and acquires the second image with the second exposure duration.
12. An electronic device comprising a camera, a display screen, a memory, and one or more processors; the camera, the display screen and the memory are coupled with the processor; the memory has stored therein computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any of claims 1-11.
13. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-11.
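For readers tracing claims 4 and 6, the exposure-duration selection they define can be sketched as follows. Here amp_high and amp_low stand for the first and second amplitude values of claim 6, and the divisors 16, 8, and 4 are hypothetical multiples chosen only to respect the ordering the claims require (first duration shorter than third, third shorter than second) and the 1/64 floor of claim 5; the claims themselves fix no specific values.

```python
def pick_first_exposure(motion_estimate, t_normal,
                        exposure_interval, amp_high, amp_low):
    """Choose the first (short) exposure duration from the motion
    estimate of the local motion area, per claims 4 and 6."""
    p = motion_estimate * exposure_interval   # the "first parameter"
    if p >= amp_high:                         # fast motion
        return t_normal / 16                  # first duration (shortest)
    if p <= amp_low:                          # slow motion
        return t_normal / 4                   # second duration (longest)
    return t_normal / 8                       # third duration (in between)
```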
CN202311481102.5A 2023-11-08 2023-11-08 Photographing method and electronic equipment Active CN117201930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311481102.5A CN117201930B (en) 2023-11-08 2023-11-08 Photographing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN117201930A (en) 2023-12-08
CN117201930B (en) 2024-04-16

Family

ID=88990995

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311481102.5A Active CN117201930B (en) 2023-11-08 2023-11-08 Photographing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN117201930B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107395997A (en) * 2017-08-18 2017-11-24 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
AU2018282254A1 (en) * 2018-12-17 2020-07-02 Canon Kabushiki Kaisha System and method for determining a three-dimensional position of a person
CN113630545A (en) * 2020-05-07 2021-11-09 华为技术有限公司 Shooting method and equipment
CN116582741A (en) * 2020-05-07 2023-08-11 华为技术有限公司 Shooting method and equipment
CN113592887A (en) * 2021-06-25 2021-11-02 荣耀终端有限公司 Video shooting method, electronic device and computer-readable storage medium
CN116744120A (en) * 2022-09-15 2023-09-12 荣耀终端有限公司 Image processing method and electronic device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant