CN114531551B - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN114531551B
Authority
CN
China
Prior art keywords
image
area
acquired
region
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111673229.8A
Other languages
Chinese (zh)
Other versions
CN114531551A (en)
Inventor
杨双新
徐卓然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202111673229.8A priority Critical patent/CN114531551B/en
Publication of CN114531551A publication Critical patent/CN114531551A/en
Application granted granted Critical
Publication of CN114531551B publication Critical patent/CN114531551B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/75 Circuitry for compensating brightness variation in the scene by influencing optical camera components

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application discloses an image processing method, an image processing device, electronic equipment and a storage medium, including the following steps: obtaining a first acquired image through a camera, the first acquired image being obtained when the shooting parameters of the camera are first shooting parameters; determining the first acquired image as a preview image and displaying the preview image, the preview image including a first image area and a second image area different from the first image area; obtaining an input operation for determining the first image area, the input operation being used for changing the display effect of the second image area; obtaining a second shooting parameter in response to the input operation; and obtaining a second acquired image through the camera, the second acquired image being obtained when the shooting parameters of the camera are the second shooting parameters, where the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the technical field of electronic devices, and relates to, but is not limited to, image processing methods and apparatuses, electronic devices, and storage media.
Background
A captured image often contains highlight areas or dark areas, and the difference between these areas and the rest of the picture seriously affects the display effect of the entire image when it is displayed. Highlight and/or dark areas prevent the user from clearly viewing the photographed subject in the corresponding region.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, electronic equipment and a storage medium.
The technical scheme of the embodiment of the application is realized as follows:
in one aspect, an embodiment of the present application provides an image processing method, including:
obtaining a first acquired image through a camera; the first acquired image is obtained when the shooting parameters of the camera are first shooting parameters;
determining the first acquired image as a preview image, and displaying the preview image; the preview image includes a first image region and a second image region different from the first image region;
obtaining an input operation for determining the first image region; the input operation is used for changing the display effect of the second image area;
obtaining a second shooting parameter in response to the input operation;
obtaining a second acquired image through the camera; the second acquired image is obtained when the shooting parameters of the camera are the second shooting parameters; the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image.
In yet another aspect, an embodiment of the present application provides an electronic device, including:
the camera is used for obtaining a first acquired image and a second acquired image; the first acquired image is obtained when the shooting parameters of the camera are first shooting parameters; the second acquired image is obtained when the shooting parameters of the camera are second shooting parameters; the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image;
the display screen is used for displaying the preview image and obtaining input operation for determining a first image area in the preview image; the preview image includes the first image region and a second image region different from the first image region; the input operation is used for changing the display effect of the second image area;
a processor for determining the first captured image as the preview image; and responding to the input operation to obtain the second shooting parameters.
In still another aspect, an embodiment of the present application provides an image processing apparatus, including:
the acquisition module is used for acquiring a first acquisition image through the camera; the first acquired image is obtained when the shooting parameters of the camera are first shooting parameters; obtaining a second acquired image through the camera; the second acquired image is obtained when the shooting parameters of the camera are the second shooting parameters; the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image;
The processing module is used for determining the first acquired image as a preview image and displaying the preview image; the preview image includes a first image region and a second image region different from the first image region;
an acquisition module for acquiring an input operation for determining the first image area; the input operation is used for changing the display effect of the second image area; and responding to the input operation to obtain a second shooting parameter.
In yet another aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs steps in the above-described method.
The technical solutions provided in the embodiments of the present application have at least the following beneficial effects:
in an embodiment of the present application, an input operation for determining the first image area is obtained; the input operation is used for changing the display effect of the second image area; a second shooting parameter is obtained in response to the input operation; a second acquired image is obtained through the camera; the second acquired image is obtained when the shooting parameters of the camera are the second shooting parameters; the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image. In this way, the image can be partitioned through the input operation and the display effect of different image areas can be changed, thereby controlling the effect that different image areas present in the whole image.
Drawings
For a clearer description of the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from them by a person skilled in the art without inventive effort, wherein:
FIG. 1A is a schematic diagram of a matrix photometry method in the related art;
FIG. 1B is a schematic diagram of a central weighted average photometry in the related art;
FIG. 1C is a schematic diagram of a related art spot photometry method;
FIG. 2A is a schematic diagram of an application scenario of the center-weighted photometry method;
fig. 2B is a schematic diagram of an application scenario of a photometry mode selection interface in the related art;
FIG. 2C is a schematic view of an application scenario of a composite image in the related art;
fig. 3 is a schematic hardware entity diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 5 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 6A is a schematic flow chart of an image processing method according to an embodiment of the present application;
Fig. 6B is an application scenario schematic diagram of an image processing method according to an embodiment of the present application;
fig. 6C is an application scenario schematic diagram of an image processing method according to an embodiment of the present application;
fig. 7 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 8 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 9 is a schematic diagram of a composition structure of an image processing apparatus according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. The following examples are illustrative of the present application, but are not intended to limit the scope of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
It should be noted that the term "first/second/third" in the embodiments of the present application merely distinguishes similar objects and does not represent a specific ordering of the objects; it should be understood that "first/second/third" may be interchanged in a specific order or sequence where allowed, so that the embodiments of the present application described herein can be practiced in an order other than that illustrated or described herein.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which embodiments of this application belong unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In the related art, photometry (light metering) is the method by which an electronic device that captures images determines the exposure time. Common photometry methods include average photometry, i.e. matrix photometry, center-weighted average photometry, and spot photometry. FIG. 1A is a schematic diagram of a matrix photometry method in the related art; as shown in fig. 1A, in matrix photometry the image is divided into several areas vertically and horizontally, the central area 11 serves as the main photometry basis, and the result is then averaged with the remaining areas. FIG. 1B is a schematic diagram of center-weighted average photometry in the related art; as shown in fig. 1B, in center-weighted average photometry the image is metered according to different weighting coefficients: the center 12 has the largest weight, the weight decreases toward the edge of the picture, and the weighted result is the final photometry value. FIG. 1C is a schematic diagram of a related art spot photometry method; as shown in fig. 1C, in spot photometry, photometry is performed on a single spot 13 without being affected by other areas.
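For orientation, the three modes can be viewed as the same weighted average applied with different weight maps. The following Python sketch is ours, with illustrative weights; the patent does not prescribe these values:

```python
import numpy as np

def weight_map(mode: str, n: int = 8) -> np.ndarray:
    """Illustrative weight maps for matrix, center-weighted and spot metering.
    The fall-off constants are assumptions for demonstration only."""
    y, x = np.mgrid[0:n, 0:n]
    center = (n - 1) / 2.0
    dist = np.hypot(y - center, x - center)
    if mode == "matrix":
        w = np.ones((n, n))                 # every sub-region counts equally
    elif mode == "center_weighted":
        w = 1.0 / (1.0 + dist)              # weight shrinks toward the picture edge
    elif mode == "spot":
        w = (dist <= 1.0).astype(float)     # only the central spot is metered
    else:
        raise ValueError(f"unknown mode: {mode}")
    return w / w.sum()                      # normalize so the weights sum to 1

def metered_luminance(luma: np.ndarray, mode: str) -> float:
    """Weighted average luminance of an n x n grid of sub-region luminances."""
    return float((luma * weight_map(mode, luma.shape[0])).sum())
```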
Because the dynamic range of the electronic device capturing the image is far lower than that of the actual scene, whichever photometry method is adopted, highlight or dark areas can contribute too heavily to photometry and degrade how the shooting subject is presented in the whole image. For example, fig. 2A is a schematic diagram of an application scenario of the center-weighted photometry method; as shown in fig. 2A, the highlight region 21 outside the portrait has a great influence on photometry, shortening the exposure time and leaving the portrait 22 underexposed.
To solve the above problems, two solutions are provided in the related art: 1) providing a selection menu of photometry modes. The electronic device that captures the image offers a photometry selection menu so that the user can choose a photometry mode according to actual needs. Fig. 2B is a schematic view of an application scenario of a related art photometry mode selection interface; as shown in fig. 2B, the photometry mode selection menu 23 includes center-weighted average photometry, matrix photometry, and spot photometry, and the user may tap the menu to select a mode.
2) Using high-dynamic-range (HDR) image synthesis to suppress the brightness of highlight areas and raise the brightness of dark areas.
The selection-menu approach has the following problems: many users do not understand what photometry means, so to provide the best default experience the electronic device usually adopts center-weighted photometry by default and automatically switches to face-weighted photometry once a portrait is detected; only in the professional mode is control of the photometry mode handed entirely to the user. Moreover, even if the user selects a photometry mode, when the brightness of a non-subject area of the picture sits at either extreme, photometry is still strongly affected despite that area's small weight, so the shooting subject is not highlighted.
The HDR approach has the following problems. First, the synthesized image can be inconsistent: HDR synthesizes one image from three images, and because elements may differ across the three images in unknown ways, the synthesized image may contain artifacts; for example, when the scene has moving elements, a ghosting (dragging) effect may occur. Second, HDR does not actually increase the dynamic range of the image, so suppressing highlight brightness and raising dark-area brightness squeezes the contrast between the highlight and dark areas. As shown in fig. 2C, relative to normal photography, HDR photography does suppress the highlight region 24 and improve the brightness of the human body, but at the cost of reduced contrast in region 25.
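As a rough illustration of where these artifacts come from (our sketch, not the patent's or any specific camera's method), per-pixel exposure fusion blends frames with weights that favor mid-tones, so anything that moved between frames ghosts, and extreme tones are pulled toward the mid-range:

```python
import numpy as np

def naive_hdr_fuse(frames: list) -> np.ndarray:
    """Blend differently exposed frames (float arrays scaled to [0, 1]) with
    per-pixel weights favoring mid-tone values. Since blending is per pixel,
    anything that moved between frames is averaged into a ghost, and extreme
    tones are compressed toward the mid-tones, reducing local contrast."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-6 for f in frames]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, frames)) / total
```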
In order to solve the above problems, the present application provides an image processing method applied to an electronic device. Before introducing the image processing method provided in the embodiment of the present application, the hardware of the electronic device is introduced. Fig. 3 is a schematic hardware entity diagram of the electronic device provided in the embodiment of the present application; as shown in fig. 3, the electronic device 300 includes a camera 301, a display screen 302, and a processor 303:
the camera 301 is configured to obtain a first acquired image and a second acquired image; the first acquired image is obtained when the shooting parameters of the camera are first shooting parameters; the second acquired image is obtained when the shooting parameters of the camera are second shooting parameters; the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image;
a display screen 302 for displaying a preview image, and obtaining an input operation for determining a first image area in the preview image; the preview image includes the first image region and a second image region different from the first image region; the input operation is used for changing the display effect of the second image area;
A processor 303 for determining the first captured image as the preview image; and responding to the input operation to obtain the second shooting parameters.
In one implementation, the second image region is determined based on the first image region.
In one implementation, the processor 303 is further configured to: obtaining a position of a display object layered on the preview image; the position of the display object is changed based on an input operation; and determining the area of the preview image covered by the display object as a first image area based on the position of the display object.
In one implementation, the display object may be adjustable in area and/or shape; the display object is a window, and the window is used for displaying an image acquired by another camera of the electronic equipment; or the window is used for displaying continuous video frame images sent by the video call counterpart.
In one implementation, the processor 303 is further configured to: obtaining an input track; and determining a partial region of the preview image corresponding to the input track as a first image region based on the input track.
In one implementation, the processor 303 is further configured to: processing the first output image based on an identification algorithm to obtain at least two image areas in the first input image; displaying the indication items of the at least two image areas in the preview image; determining a target image area based on a selection operation for the indication item; wherein the target image area is a first image area of the preview image.
In one implementation, the processor 303 is further configured to: determine the second image area and, from it, a target calculation area participating in photometry; obtain brightness based on the content of each sub-region in the target calculation region; and obtain an exposure time based on the weight of each sub-region in the target calculation region; the exposure time is the second shooting parameter.
Based on the electronic device shown in fig. 3, the present application provides an image processing method, and fig. 4 is a schematic flow chart of the image processing method provided in the embodiment of the present application, applied to the electronic device, as shown in fig. 4, where the method at least includes the following steps:
step S401, a first acquired image is obtained through a camera; the first acquired image is obtained when the shooting parameters of the camera are first shooting parameters;
Here, the camera may include a lens and a photosensitive element. The photosensitive element is adjusted according to the shooting parameters; for example, it adjusts the exposure duration when capturing an image according to the exposure parameter.

Here, the first acquired image is an image acquired by the camera in real time. The first shooting parameters may include aperture and exposure time: the aperture controls the depth of field of the captured image, and the exposure time controls the brightness of the image.
Step S402, determining the first acquired image as a preview image, and displaying the preview image; the preview image includes a first image region and a second image region different from the first image region;
here, the process of determining the first captured image as a preview image and displaying the preview image is real-time.
Here, the first image region may be a region selected by a user in a preview. In one implementation, a user selects a first image region in a preview image, and after the first image region is determined, a portion of the preview image other than the first image region is determined as a second image region.
Illustratively, as shown in fig. 2C, the user determines the region 24 as a first image region, and then, the region other than the region 24 in the preview image is determined as a second image region.
Step S403 of obtaining an input operation for determining the first image area; the input operation is used for changing the display effect of the second image area;
the input operation is used for determining the first image area, and the input operation includes various modes, which can be manually input for a user or can be used for adjusting a preset area for the user. The preset area may include multiple types, and may be an area of a video window laminated in a preset image, an area of an AR sticker, or an area of a watermark.
In one possible manner, changing the display effect of the second image area through the input operation may work as follows: the input operation adjusts the region of the preview image that participates in photometry, which adjusts the shooting parameters and in turn the display effect of the second image area.
In one implementation, the display effect of the second image area may be its brightness; before the change, the brightness of the second image area may be greater or less than the average brightness of the preview image. By changing its brightness, the second image area can be made brighter or darker.
In one possible implementation, photometry is performed on the preview image with the first image area determined as a region removed from photometry; after the first image area is removed, the preview image may become brighter or darker, and the direction of the brightness change is related to the brightness of the first image area.
Illustratively, if the brightness of the first image area is high, the exposure time of the preview image becomes longer after the first image area is removed, and the entire preview image becomes brighter.

Illustratively, if the brightness of the first image area is low, the exposure time of the preview image becomes shorter after the first image area is removed, and the entire preview image becomes darker.
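A small numeric sketch of this effect (all numbers invented for illustration, not values from the patent):

```python
# Hypothetical numbers for illustration only.
bright_patch = 200.0   # luminance of the first image area (a bright window, say)
subject      = 40.0    # luminance of the rest of the preview image
target       = 50.0    # target brightness T_L

# Metering the whole frame (equal weights for simplicity):
r_full = 0.5 * bright_patch + 0.5 * subject   # weighted luminance = 120.0
e_full = target / r_full                      # relative exposure ~ 0.42

# Metering with the bright first image area removed:
r_cut = subject                               # weighted luminance = 40.0
e_cut = target / r_cut                        # relative exposure = 1.25

# e_cut > e_full: removing the bright area lengthens the exposure time,
# so the remaining (second) image area is rendered brighter.
```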
Step S404, responding to the input operation to obtain a second shooting parameter;
here, the second shooting parameter may be an exposure time, and the photosensitive component in the camera adjusts the exposure time when the image is acquired based on the exposure time.
Step S405, obtaining a second acquired image through the camera; the second acquired image is obtained when the shooting parameters of the camera are the second shooting parameters; the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image.
Here, "first" and "second" in the first acquired image and the second acquired image are used to distinguish acquired images obtained by the cameras under different photographing parameters, and are not used to define a temporal relationship when different acquired images are obtained.
Here, a frame image of an arbitrary frame may be included between the first acquired image and the second acquired image. Illustratively, the camera acquires N frame images in real time, the first acquired image may be a 1 st frame image of the N frame images, the second acquired image may be an N-1 st frame image of the N frame images, and the frame images including N-2 frames between the first acquired image and the second acquired image, where N may be any positive integer greater than or equal to 3.
Here, the display effect may be brightness of an image, and after the photographing parameter is adjusted from the first photographing parameter to the second photographing parameter, brightness of the second image area in the first captured image is different from brightness of the second image area in the second captured image.
Here, the second acquired image may be a preview image or an image stored in a specified format. In one implementation, while browsing the preview image the user performs a photographing action, and the electronic device stores the second acquired image in memory, where the storage format may be JPG or RAW. In one implementation, the second shooting parameter takes effect for both the preview image and the photographed image obtained after the photographing operation is performed.
In the above embodiment, an input operation for determining the first image area is obtained; the input operation is used for changing the display effect of the second image area; a second shooting parameter is obtained in response to the input operation; a second acquired image is obtained through the camera; the second acquired image is obtained when the shooting parameters of the camera are the second shooting parameters; and the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image. In this way, the image can be partitioned through the input operation and the display effect of different image areas can be changed, thereby controlling the effect that different image areas present in the whole image.
Based on the electronic device shown in fig. 3, the present application provides an image processing method, and fig. 5 is a schematic flow chart of the image processing method provided in the embodiment of the present application, as shown in fig. 5, where the method at least includes the following steps:
step S501, a first acquired image is obtained through a camera; the first acquired image is obtained when the shooting parameters of the camera are first shooting parameters;
step S502, determining the first acquired image as a preview image, and displaying the preview image; the preview image includes a first image region and a second image region different from the first image region;
Step S503, obtaining the position of the display object overlapped on the preview image; the position of the display object is changed based on an input operation; determining the area of the preview image covered by the display object as a first image area based on the position of the display object; the input operation is used for changing the display effect of the second image area;
here, the display object may be a video window layered on the preview image. The position of the display object is changed based on the input operation; that is, the display object can be dragged, and after dragging it may be located at any position in the preview image.

In one implementation, the display object may be used to cover an area of the preview image whose display effect does not satisfy the effect desired by the user. For example, the user may cover an excessively bright region or an excessively dark region in the preview image.
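A minimal sketch of how the covered area can be derived from the display object's position (class and function names are ours, not the patent's):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class WindowRect:
    """Position/size of a display object (e.g. a picture-in-picture window)
    layered on the preview image; names are illustrative assumptions."""
    x: int
    y: int
    w: int
    h: int

def first_image_area_mask(rect: WindowRect, height: int, width: int) -> np.ndarray:
    """True where the preview image is covered by the display object
    (the first image area); everything else is the second image area."""
    mask = np.zeros((height, width), dtype=bool)
    mask[rect.y:rect.y + rect.h, rect.x:rect.x + rect.w] = True
    return mask

# Dragging the window simply updates rect.x / rect.y, and the mask is recomputed.
```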
Step S504, responding to the input operation to obtain a second shooting parameter;
step S505, a second acquired image is obtained through the camera; the second acquired image is obtained when the shooting parameters of the camera are the second shooting parameters; the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image.
In one implementation, the second image region is determined based on the first image region.
Here, the first image area is dynamically variable, and the second image area is determined according to a change in the first image area.
In one implementation, the display object may be adjustable in area and/or shape; the display object is a window, and the window is used for displaying an image acquired by another camera of the electronic equipment; or the window is used for displaying continuous video frame images sent by the video call counterpart.
Illustratively, user a calls user B for a video call, the video frame image of user a is of a large window, the video frame image of user B is of a small window, and adjustment of the display effect, for example, adjustment of brightness, is performed by dragging the position of the small window in the large window.
In an exemplary embodiment, user A calls user B for a video call; the video frame image of user A is shown in a small window and that of user B in a large window. User A determines that the brightness of the portrait displayed in user B's large window is too low, and adjusts the brightness of the preview image in the large window by dragging the small window, so that the brightness of the portrait in user B's large window is improved. When the brightness is determined to have reached a target value, the position of the small window within the large window and the size of the small window are sent to user B, so that the electronic device used by user B adjusts the brightness of its preview image according to the received position and size and outputs a video frame image whose portrait meets the target value.

In the above process, the second acquired image obtained in response to the second shooting parameter is obtained not by the local electronic device but by the other device: the first acquired image is captured by the electronic device used by user B and transmitted to the electronic device used by user A; after user A performs the brightness adjustment operation, the second shooting parameter is obtained, the second acquired image is captured according to the second shooting parameter, and the second acquired image is displayed to user A. In this process, the camera module is adjusted according to the exposure parameter calculated by the counterpart device, that exposure parameter being calculated from the position of the small window within the large window and the size of the small window as determined on the counterpart electronic device.
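A minimal sketch of the data exchanged in this video-call scenario, assuming a hypothetical message format (the patent does not specify a wire format):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ExposureSyncMessage:
    """Hypothetical payload sent from user A's device to user B's device:
    where the small window sits inside the large window, and its size."""
    window_x: int
    window_y: int
    window_w: int
    window_h: int

def encode(msg: ExposureSyncMessage) -> bytes:
    return json.dumps(asdict(msg)).encode("utf-8")

def decode(raw: bytes) -> ExposureSyncMessage:
    return ExposureSyncMessage(**json.loads(raw.decode("utf-8")))

# On receipt, user B's device excludes the covered region from photometry
# and recomputes its exposure time (the second shooting parameter).
```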
In one implementation, when the window is used for displaying an image acquired by another camera of the electronic device, the two cameras may be a front camera and a rear camera, one of which is the main camera: the image of the main camera occupies the large window, the image of the other camera occupies the small window, and the small window is layered on the large window. The area, shape, and position of the small window, through which the first image area is determined, are adjustable.
In the above embodiment, in one aspect, the second image area is determined based on the first image area. Therefore, the first image area and the second image area can change dynamically, which facilitates adjustment by the user and meets the user's need to adjust the display effect of the preview image.

On the other hand, the position of a display object layered on the preview image is obtained; the position of the display object is changed based on an input operation; and the area of the preview image covered by the display object is determined as the first image area based on the position of the display object. Therefore, a partial area of the preview image can be selected and covered by adjusting the display object, meeting the user's need to adjust the display effect of the preview image.

In yet another aspect, the display object is adjustable in area and/or shape; the display object is a window, and the window is used for displaying an image acquired by another camera of the electronic equipment, or for displaying continuous video frame images sent by the video call counterpart. Therefore, the window can cover a partial area of the preview image, and adjusting the area and/or shape of the display object meets the user's need to adjust the display effect of the preview image.
Based on the electronic device shown in fig. 3, the present application provides an image processing method, and fig. 6A is a schematic flow chart of the image processing method provided in the embodiment of the present application, as shown in fig. 6A, where the method at least includes the following steps:
step S601, obtaining a first acquisition image through a camera; the first acquired image is obtained when the shooting parameters of the camera are first shooting parameters;
step S602, determining the first acquired image as a preview image, and displaying the preview image; the preview image includes a first image region and a second image region different from the first image region; the second image region is determined based on the first image region;
Step S603, obtaining an input track; determining a partial region of the preview image corresponding to the input track as a first image region based on the input track; the input operation is used for changing the display effect of the second image area;
here, the input track may be the trajectory traced when the user delineates a region in the preview image to define the first image region.
Illustratively, as shown in fig. 6B, the input trajectory may be a trajectory shown by a broken line when the user delineates the first image region 61.
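One way to turn a closed input track into the first image area is to rasterize it with even-odd ray casting; a self-contained sketch (helper names are ours, not the patent's, and a real implementation would also smooth and close the stroke):

```python
from typing import List, Tuple

Point = Tuple[float, float]

def point_in_track(p: Point, track: List[Point]) -> bool:
    """Even-odd ray casting: is pixel p inside the closed input track?"""
    x, y = p
    inside = False
    n = len(track)
    for i in range(n):
        x1, y1 = track[i]
        x2, y2 = track[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge crosses the scanline
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def track_to_region(track: List[Point], height: int, width: int) -> List[List[bool]]:
    """Rasterize the closed track into a first-image-area mask."""
    return [[point_in_track((x, y), track) for x in range(width)]
            for y in range(height)]
```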
Step S604, responding to the input operation to obtain a second shooting parameter;
step S605, obtaining a second acquired image through the camera; the second acquired image is obtained when the shooting parameters of the camera are the second shooting parameters; the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image.
In one implementation, after area 61 is selected, the portion outside area 61 is determined as the second image area of the first acquired image; a second shooting parameter is obtained based on this second image area, and a second acquired image is obtained according to the second shooting parameter. The resulting second acquired image is shown in fig. 6C: the portion outside area 61 is the second image area of the second acquired image, and the brightness of the second image area of the second acquired image is higher than the brightness of the second image area of the first acquired image.
In the above embodiment, an input track is obtained, and a partial region of the preview image corresponding to the input track is determined as the first image region based on the input track. Therefore, the first image area can be customized, its edge comes closer to the user's expectation, a region shape meeting the user's needs is obtained, and the user's need to adjust the display effect of the preview image is met.
Based on the electronic device shown in fig. 3, the present application provides an image processing method, and fig. 7 is a schematic flow chart of the image processing method provided in the embodiment of the present application, as shown in fig. 7, where the method at least includes the following steps:
step S701, obtaining a first acquired image through a camera; the first acquired image is obtained when the shooting parameters of the camera are first shooting parameters;
step S702, determining the first collected image as a preview image, and displaying the preview image; the preview image includes a first image region and a second image region different from the first image region; the second image region is determined based on the first image region;
step S703, processing the first acquired image based on a recognition algorithm to obtain at least two image areas in the first acquired image;
Here, the recognition algorithm may include a face recognition algorithm or a depth-of-field recognition algorithm. The face recognition algorithm is used for recognizing the face in the preview image; in one implementation, a non-face region may be determined as the first image region. The depth-of-field recognition algorithm is used to identify the foreground region in the preview image; in one implementation, a non-foreground region may be determined as the first image region.
Step S704, displaying the indication items of the at least two image areas in the preview image;
here, the indication item may be a label identifying an area; the selection operation on an indication item may be selecting the area where the label is located, and may be a touch operation on the display screen detected by the electronic device. The at least two image areas may have a regular shape, for example rectangular, or an irregular shape, such as the outline of a portrait's edge.

Illustratively, two regions are determined according to the face recognition algorithm: a face area and a non-face area; the label of the face area is area 1, and the label of the non-face area is area 2. When area 1 is selected, area 1 is determined as the target image area, which is the first image area; at this time, area 2 is the second image area.
step S705, determining a target image area based on a selection operation on the indication item; wherein the target image area is the first image area of the preview image; the input operation is used for changing the display effect of the second image area;
illustratively, when the non-face region is determined as the target image region, the non-face region is determined as the first image region of the preview image.
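A sketch of how such indication items could be produced, assuming OpenCV's bundled Haar face detector purely for illustration (the patent does not mandate a specific recognition algorithm):

```python
import cv2
import numpy as np

def propose_regions(preview_bgr: np.ndarray) -> dict:
    """Split the preview into face / non-face regions and return labeled
    indication items the UI can offer for selection."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    face_mask = np.zeros(gray.shape, dtype=bool)
    for (x, y, w, h) in faces:
        face_mask[y:y + h, x:x + w] = True

    # Indication items: "area 1" = face region, "area 2" = everything else.
    return {"area 1": face_mask, "area 2": ~face_mask}

# Selecting "area 2" (the non-face region) makes it the first image area,
# i.e. the region excluded from photometry.
```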
Step S706, responding to the input operation to obtain a second shooting parameter;
step S707, obtaining a second acquired image through the camera; the second acquired image is obtained when the shooting parameters of the camera are the second shooting parameters; the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image.
In the above embodiment, the first acquired image is processed based on a recognition algorithm, and at least two image areas in the first acquired image are obtained; indication items of the at least two image areas are displayed in the preview image; and a target image area is determined based on a selection operation on an indication item, the target image area being the first image area of the preview image. Therefore, the electronic device can offer region options to the user through automatic recognition, reducing the time needed to determine the first image area, improving the convenience of the operation that changes the display effect, and meeting user needs.
Based on the electronic device shown in fig. 3, the present application provides an image processing method, and fig. 8 is a schematic flow chart of the image processing method provided in the embodiment of the present application, as shown in fig. 8, the method at least includes the following steps:
step S801, a first acquired image is obtained through a camera; the first acquired image is obtained when the shooting parameters of the camera are first shooting parameters;
step S802, determining the first acquired image as a preview image, and displaying the preview image; the preview image includes a first image region and a second image region different from the first image region; the second image region is determined based on the first image region;
step S803 of obtaining an input operation for determining the first image area; the input operation is used for changing the display effect of the second image area;
step S804, determining the second image area and, from it, a target calculation area participating in photometry;
illustratively, according to the photometry principle, the picture may be divided into several sub-areas, as shown in Table 1; the sub-areas belonging to the target calculation area are shown in Table 2.
TABLE 1 brightness of different sub-regions
L11 L12 L13 L14 L15 L16 L17 L18
L21 L22 L23 L24 L25 L26 L27 L28
L31 L32 L33 L34 L35 L36 L37 L38
L41 L42 L43 L44 L45 L46 L47 L48
L51 L52 L53 L54 L55 L56 L57 L58
L61 L62 L63 L64 L65 L66 L67 L68
L71 L72 L73 L74 L75 L76 L77 L78
L81 L82 L83 L84 L85 L86 L87 L88
TABLE 2 luminance of the sub-regions in the target calculation region (sub-regions outside the target calculation region are left blank)

-   -   -   -   -   -   -   -
-   -   -   -   -   -   -   -
-   L32 L33 L34 L35 L36 L37 L38
L41 L42 L43 L44 L45 L46 L47 L48
L51 L52 L53 L54 L55 L56 L57 L58
L61 L62 L63 L64 L65 L66 L67 L68
L71 L72 L73 L74 L75 L76 L77 L78
L81 L82 L83 L84 L85 L86 L87 L88
Step S805 of obtaining brightness based on the content of each sub-region in the target calculation region;
illustratively, each sub-region's luminance is calculated from its image content; as shown in Table 2, L denotes the luminance value of a sub-region.
Step S806, obtaining exposure time based on the weight of each sub-area in the target calculation area; the exposure time is the second shooting parameter;
illustratively, the luminance values of the different sub-regions in Table 1 differ, but the weight value of each sub-region is fixed; as shown in Table 3, W denotes the weight of a sub-region.
Table 3 weights of different sub-regions

W11 W12 W13 W14 W15 W16 W17 W18
W21 W22 W23 W24 W25 W26 W27 W28
W31 W32 W33 W34 W35 W36 W37 W38
W41 W42 W43 W44 W45 W46 W47 W48
W51 W52 W53 W54 W55 W56 W57 W58
W61 W62 W63 W64 W65 W66 W67 W68
W71 W72 W73 W74 W75 W76 W77 W78
W81 W82 W83 W84 W85 W86 W87 W88

The weights of the sub-regions listed in Table 2 are taken from Table 3.
In one implementation, obtaining the exposure time based on the weight of each sub-region in the target calculation region includes: determining the brightness of the target calculation region based on the weight and brightness of each sub-region in it, and re-obtaining the exposure time based on the brightness of the target calculation region.
Illustratively, the luminance R'_L of the target calculation region is determined by formula (1):

R'_L = L32·W32 + L33·W33 + … + L88·W88    (1)
With the target brightness denoted T_L, the re-obtained exposure time E' can be calculated by formula (2):

E' = T_L / R'_L    (2)
In one implementation, the brightness R_L of the entire preview image can be calculated by formula (3):

R_L = L11·W11 + L12·W12 + … + L88·W88    (3)
In a given scene, the target brightness is a constant value, denoted T_L. When no target calculation region has been set, the exposure time E of the preview image can be calculated by formula (4):

E = T_L / R_L    (4)
In the above process, after the target calculation region is set, the exposure time of the image changes. When the non-target calculation region is the portion other than the photographed subject, the exposure time changes in the direction that helps the brightness of the target calculation region reach the desired brightness.
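A minimal runnable sketch of formulas (1) through (4), assuming an 8×8 grid of per-sub-region luminances and weights as in Tables 1 and 3 (variable names are ours, not the patent's):

```python
from typing import Optional
import numpy as np

def exposure_time(luma: np.ndarray, weights: np.ndarray, target: float,
                  region_mask: Optional[np.ndarray] = None) -> float:
    """Weighted luminance over the metered sub-regions, then E = T_L / R_L.
    With region_mask given, only sub-regions inside the target calculation
    region participate (formulas (1)-(2)); otherwise the whole grid does
    (formulas (3)-(4))."""
    if region_mask is None:
        r = float((luma * weights).sum())               # R_L, formula (3)
    else:
        r = float((luma * weights)[region_mask].sum())  # R'_L, formula (1)
    return target / r                                   # E = T_L / R, (2)/(4)

# Example: exclude the first image area (rows of L11..L28 plus L31) from metering.
luma = np.random.uniform(20.0, 220.0, size=(8, 8))  # stand-in luminances
weights = np.full((8, 8), 1.0 / 64.0)               # placeholder weights
mask = np.ones((8, 8), dtype=bool)
mask[0:2, :] = False                                 # rows of L11..L28 excluded
mask[2, 0] = False                                   # L31 excluded
e_before = exposure_time(luma, weights, target=50.0)
e_after = exposure_time(luma, weights, target=50.0, region_mask=mask)
```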
Step S807, obtaining a second acquired image through the camera; the second acquired image is obtained when the shooting parameters of the camera are the second shooting parameters; the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image.
In the above embodiment, the second image area is determined and, from it, a target calculation area participating in photometry; brightness is obtained based on the content of each sub-region in the target calculation region; and the exposure time is obtained based on the weight of each sub-region in the target calculation region, the exposure time being the second shooting parameter. Therefore, the exposure time can be calculated over a selected local area by determining the target calculation area, avoiding the influence of overexposed areas on the brightness of the acquired image and meeting the user's need to adjust the display effect of the preview image.
The image processing method provided by the application enables the image to reach the set target brightness T_L for any position and size of the first image area and under any photometry mode. When the area participating in the exposure-time calculation is smaller than the preview image area, the weighted brightness value R_L computed over the sub-areas that do participate, multiplied by the exposure time E, equals the target brightness. Before the preview image is divided into a first image area, the exposure time required to reach the target brightness is determined from the weights of all sub-areas; after the first image area is divided off, the exposure time required to reach the target brightness is determined from the weights of the sub-areas within the target calculation region.
The image processing method provided by the application includes the following application scenarios: 1) Professional shooting: combined with different photometry modes, it provides a highly customizable photometry experience. 2) AR sticker: a sticker masks part of the shooting scene; an artificial intelligence (AI) algorithm places the sticker over the first image area that interferes most with overall photometry, and that area is simultaneously removed from the exposure-time calculation, so that the exposure of the preview image improves in a direction favorable to the whole image. 3) Picture-in-picture: the sub-window of the picture-in-picture covers part of the shooting scene during a video call; an AI algorithm places the sub-window over the first image area that interferes most with overall photometry, and that area is simultaneously removed from the exposure-time calculation, so that the exposure of the preview image improves in a direction favorable to the whole image. 4) Watermark: a watermark covers part of the shooting scene; the watermark is placed over the first image area that interferes most with overall photometry, and that area is removed from photometry so that the exposure is more faithful.
Based on the foregoing embodiments, the embodiments of the present application further provide an image processing apparatus. The units included in each module of the apparatus may be implemented by a processor in an electronic device, or, of course, by specific logic circuits. In practice, the processor may be a central processing unit (Central Processing Unit, CPU), a microprocessor (Micro Processing Unit, MPU), a digital signal processor (Digital Signal Processor, DSP), a field programmable gate array (Field Programmable Gate Array, FPGA), or the like.
Fig. 9 is a schematic structural diagram of an image processing apparatus provided in the embodiment of the present application. As shown in fig. 9, the apparatus 900 includes an acquisition module 901, a processing module 902, and an obtaining module 903, where:
the acquisition module 901 is used for obtaining a first acquired image through the camera; the first acquired image is obtained when the shooting parameters of the camera are first shooting parameters; and for obtaining a second acquired image through the camera; the second acquired image is obtained when the shooting parameters of the camera are the second shooting parameters; the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image;
The processing module 902 is configured to determine the first collected image as a preview image, and display the preview image; the preview image includes a first image region and a second image region different from the first image region;
the obtaining module 903 is configured to obtain an input operation for determining the first image area; the input operation is used for changing the display effect of the second image area; and to obtain a second shooting parameter in response to the input operation.
In some possible embodiments, the second image region is determined based on the first image region.
In some possible embodiments, the obtaining module 903 is further configured to: obtaining a position of a display object layered on the preview image; the position of the display object is changed based on an input operation; and determining the area of the preview image covered by the display object as a first image area based on the position of the display object.
In some possible embodiments, the display object is adjustable in area and/or shape; the display object is a window, and the window is used for displaying an image acquired by another camera of the electronic equipment; or the window is used for displaying continuous video frame images sent by the video call counterpart.
In some possible embodiments, the obtaining module 903 is further configured to: obtaining an input track; and determining a partial region of the preview image corresponding to the input track as a first image region based on the input track.
In some possible embodiments, the obtaining module 903 is further configured to: process the first acquired image based on a recognition algorithm to obtain at least two image areas in the first acquired image; display indication items of the at least two image areas in the preview image; and determine a target image area based on a selection operation on an indication item, wherein the target image area is the first image area of the preview image.
In some possible embodiments, the obtaining module 903 is further configured to: determine the second image area and, from it, a target calculation area participating in photometry; obtain brightness based on the content of each sub-region in the target calculation region; and obtain the exposure time based on the weight of each sub-region in the target calculation region, the exposure time being the second shooting parameter.
It should be noted here that: the description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the device embodiments of the present application, please refer to the description of the method embodiments of the present application for understanding.
In the embodiment of the present application, if the image processing method is implemented in the form of a software functional module and sold or used as a separate product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application, in essence or in the part contributing to the related art, may be embodied in the form of a computer software product stored in a storage medium, which includes several instructions causing a terminal (which may be a smart phone with a camera, a tablet computer, etc.) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (Read Only Memory, ROM), a magnetic disk, an optical disk, or other media capable of storing program code. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Accordingly, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the image processing methods described in the above embodiments.
Correspondingly, in the embodiment of the application, a chip is also provided, and the chip includes a programmable logic circuit and/or program instructions, and when the chip runs, the chip is used for implementing the steps in the image processing method in any of the above embodiments.
Correspondingly, an embodiment of the present application further provides a computer program product which, when executed by a processor of a terminal, implements the steps of the image processing method in any of the above embodiments.
In the above embodiments, the processor in the electronic device may be at least one of an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a digital signal processor (Digital Signal Processor, DSP), a digital signal processing device (Digital Signal Processing Device, DSPD), a programmable logic device (Programmable Logic Device, PLD), a field programmable gate array (Field Programmable Gate Array, FPGA), a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, and a microprocessor. It will be appreciated that other electronic components may also implement the above processor function; embodiments of the present application are not specifically limited in this regard.
The computer storage medium/memory may be a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable Read-Only Memory, PROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), a ferroelectric random access memory (Ferroelectric Random Access Memory, FRAM), a flash memory (Flash Memory), a magnetic surface memory, an optical disc, or a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM); it may also be any of various terminals including one or any combination of the above memories, such as mobile phones, computers, tablet devices, and personal digital assistants.
It should be noted here that the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments above, with advantageous effects similar to those of the method embodiments. For technical details not disclosed in the storage medium and apparatus embodiments of the present application, please refer to the description of the method embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification do not necessarily all refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the foregoing processes do not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application. The foregoing embodiment numbers of the present application are merely for description and do not represent the superiority or inferiority of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only one kind of logical function division, and there may be other divisions in actual implementation, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the embodiments of the present application.
In addition, the functional units in the embodiments of the present application may all be integrated in one processing unit, or each unit may exist separately as one unit, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Alternatively, the integrated units described above may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. Based on this understanding, the technical solutions of the embodiments of the present application, or the part thereof contributing to the related art, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a device to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc.
The methods disclosed in the several method embodiments provided in the present application may be arbitrarily combined without conflict to obtain new method embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present application may be arbitrarily combined without conflict to obtain new method embodiments or apparatus embodiments.
The foregoing is merely an embodiment of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art could readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes and substitutions are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (6)

1. An image processing method, applied to an electronic device, comprising:
acquiring a first acquired image through a camera; the first acquired image is obtained when the shooting parameters of the camera are first shooting parameters;
determining the first acquired image as a preview image, and displaying the preview image; the preview image includes a first image region and a second image region different from the first image region;
obtaining an input operation for determining the first image region; the input operation is used for changing the display effect of the second image area;
obtaining a second shooting parameter in response to the input operation;
obtaining a second acquired image through the camera; the second acquired image is obtained when the shooting parameters of the camera are the second shooting parameters; the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image;
the obtaining an input operation for determining the first image area includes:
obtaining a position of a display object layered on the preview image; the position of the display object is changed based on an input operation; determining the area of the preview image covered by the display object as a first image area based on the position of the display object;
the area and/or shape of the display object is adjustable; the display object is a window, and the window is used for displaying an image acquired by another camera of the electronic equipment; or the window is used for displaying continuous video frame images sent by the video call counterpart.
2. The method of claim 1, wherein the second image region is determined based on the first image region.
3. The method of claim 2, wherein said obtaining a second photographing parameter in response to said input operation comprises:
determining the second image area and determining a target calculation area participating in photometry;
obtaining brightness based on the content of each sub-region in the target calculation region;
obtaining exposure time based on the weight of each sub-region in the target calculation region; the exposure time is the second shooting parameter.
4. An electronic device, the electronic device comprising:
the camera is used for obtaining a first acquired image and a second acquired image; the first acquired image is obtained when the shooting parameters of the camera are first shooting parameters; the second acquired image is obtained when the shooting parameters of the camera are second shooting parameters; the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image;
the display screen is used for displaying the preview image and obtaining an input operation for determining a first image area in the preview image; the preview image includes the first image region and a second image region different from the first image region; the input operation is used for changing the display effect of the second image area; the input operation includes obtaining a position of a display object layered on the preview image; the position of the display object is changed based on an input operation; determining the area of the preview image covered by the display object as a first image area based on the position of the display object; the area and/or shape of the display object is adjustable; the display object is a window, and the window is used for displaying an image acquired by another camera of the electronic equipment; or the window is used for displaying continuous video frame images sent by the video call counterpart;
a processor for determining the first acquired image as the preview image, and for obtaining the second shooting parameters in response to the input operation.
5. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a first acquired image through the camera; the first acquired image is obtained when the shooting parameters of the camera are first shooting parameters; obtaining a second acquired image through the camera; the second acquired image is obtained when the shooting parameters of the camera are second shooting parameters; the display effect of the second image area in the second acquired image is different from the display effect of the second image area in the first acquired image;
the processing module is used for determining the first acquired image as a preview image and displaying the preview image; the preview image includes a first image region and a second image region different from the first image region;
an obtaining module for obtaining an input operation for determining the first image area; the input operation is used for changing the display effect of the second image area; obtaining a second shooting parameter in response to the input operation;
the obtaining module is further used for obtaining the position of the display object layered on the preview image; the position of the display object is changed based on an input operation; determining the area of the preview image covered by the display object as a first image area based on the position of the display object; the area and/or shape of the display object is adjustable; the display object is a window, and the window is used for displaying an image acquired by another camera of the electronic equipment; or the window is used for displaying continuous video frame images sent by the video call counterpart.
6. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, realizes the steps in the method of any one of claims 1 to 3.
CN202111673229.8A 2021-12-31 2021-12-31 Image processing method and device, electronic equipment and storage medium Active CN114531551B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111673229.8A CN114531551B (en) 2021-12-31 2021-12-31 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111673229.8A CN114531551B (en) 2021-12-31 2021-12-31 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114531551A CN114531551A (en) 2022-05-24
CN114531551B true CN114531551B (en) 2023-12-26

Family

ID=81621036

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111673229.8A Active CN114531551B (en) 2021-12-31 2021-12-31 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114531551B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114979487B (en) * 2022-05-27 2024-06-18 联想(北京)有限公司 Image processing method and device, electronic equipment and storage medium
CN117560576B (en) * 2023-11-13 2024-05-07 四川新视创伟超高清科技有限公司 Exposure method and exposure system for focal plane area

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190017303A (en) * 2017-08-10 2019-02-20 엘지전자 주식회사 Mobile terminal

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007124365A (en) * 2005-10-28 2007-05-17 Ricoh Co Ltd Imaging apparatus
CN103905709A (en) * 2012-12-25 2014-07-02 联想(北京)有限公司 Electronic device control method and electronic device
CN105991915A (en) * 2015-02-03 2016-10-05 中兴通讯股份有限公司 Shooting method and apparatus, and terminal
CN106993139A (en) * 2017-04-28 2017-07-28 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN109697814A (en) * 2017-10-20 2019-04-30 佳能株式会社 Equipment, control method and medium are set
CN108377341A (en) * 2018-05-14 2018-08-07 Oppo广东移动通信有限公司 Photographic method, device, terminal and storage medium
CN110493538A (en) * 2019-08-16 2019-11-22 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110691199A (en) * 2019-10-10 2020-01-14 厦门美图之家科技有限公司 Face automatic exposure method and device, shooting equipment and storage medium
CN111277760A (en) * 2020-02-28 2020-06-12 Oppo广东移动通信有限公司 Shooting composition method, terminal and storage medium
CN112637515A (en) * 2020-12-22 2021-04-09 维沃软件技术有限公司 Shooting method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhuang Kaige (庄开歌), "Metering Modes" (《测光模式》), in Portrait Photography in Progress: Sculpting Light (《人像摄影进行时 雕琢光线》), 2016. *

Also Published As

Publication number Publication date
CN114531551A (en) 2022-05-24

Similar Documents

Publication Publication Date Title
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
CN104301624B (en) A kind of image taking brightness control method and device
CN110445988B (en) Image processing method, image processing device, storage medium and electronic equipment
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
JP6066536B2 (en) Generation of high dynamic range images without ghosting
CN114531551B (en) Image processing method and device, electronic equipment and storage medium
KR102638638B1 (en) Method for generating an hdr image of a scene based on a tradeoff between brightness distribution and motion
CN104349066B (en) A kind of method, apparatus for generating high dynamic range images
KR101662846B1 (en) Apparatus and method for generating bokeh in out-of-focus shooting
CN110445989B (en) Image processing method, image processing device, storage medium and electronic equipment
CN111028190A (en) Image processing method, image processing device, storage medium and electronic equipment
CN110324532B (en) Image blurring method and device, storage medium and electronic equipment
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN111327824A (en) Shooting parameter selection method and device, storage medium and electronic equipment
CN114257738B (en) Automatic exposure method, device, equipment and storage medium
CN111405185B (en) Zoom control method and device for camera, electronic equipment and storage medium
CN110392211B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110740266B (en) Image frame selection method and device, storage medium and electronic equipment
CN113298735A (en) Image processing method, image processing device, electronic equipment and storage medium
CN105556957B (en) A kind of image processing method, computer storage media, device and terminal
CN110581957A (en) image processing method, image processing device, storage medium and electronic equipment
CN114827487B (en) High dynamic range image synthesis method and electronic equipment
CN113793257A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112422837B (en) Method, device, equipment and storage medium for synthesizing high dynamic range image
WO2016202073A1 (en) Image processing method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant