CN114531551A - Image processing method and device, electronic device and storage medium


Info

Publication number: CN114531551A
Authority: CN (China)
Prior art keywords: image, area, region, shooting parameter, camera
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202111673229.8A
Other languages: Chinese (zh)
Other versions: CN114531551B
Inventors: 杨双新, 徐卓然
Current and original assignee: Lenovo Beijing Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Lenovo Beijing Ltd; priority to application CN202111673229.8A; published as CN114531551A; granted as CN114531551B.


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/71: Circuitry for evaluating the brightness variation
    • H04N23/611: Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
    • H04N23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters, for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N23/741: Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H04N23/75: Circuitry for compensating brightness variation in the scene by influencing optical camera components

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application discloses an image processing method, an image processing device, electronic equipment and a storage medium, wherein the image processing method comprises the following steps: obtaining a first collected image through a camera; the first collected image is obtained when the shooting parameter of the camera is the first shooting parameter; determining the first collected image as a preview image, and displaying the preview image; the preview image includes a first image area and a second image area different from the first image area; obtaining an input operation for determining a first image region; the input operation is used for changing the display effect of the second image area; responding to the input operation to obtain a second shooting parameter; obtaining a second collected image through the camera; the second collected image is obtained when the shooting parameter of the camera is the second shooting parameter; the display effect of the second image region in the second captured image is different from the display effect of the second image region in the first captured image.

Description

Image processing method and device, electronic device and storage medium
Technical Field
The present application relates to the field of electronic device technology, and relates to, but is not limited to, an image processing method and apparatus, an electronic device, and a storage medium.
Background
A photographed image often contains highlight areas or dark areas, and these areas can seriously affect the display effect of the whole image. Both highlight areas and dark areas can prevent the user from clearly viewing the subject or object located in the corresponding area.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, electronic equipment and a storage medium.
The technical scheme of the embodiment of the application is realized as follows:
in one aspect, an embodiment of the present application provides an image processing method, where the method includes:
obtaining a first collected image through a camera; the first collected image is obtained when the shooting parameter of the camera is a first shooting parameter;
determining the first collected image as a preview image, and displaying the preview image; the preview image includes a first image area and a second image area different from the first image area;
obtaining an input operation for determining the first image area; the input operation is used for changing the display effect of the second image area;
responding to the input operation to obtain a second shooting parameter;
obtaining a second collected image through the camera; the second collected image is obtained when the shooting parameter of the camera is the second shooting parameter; the display effect of the second image area in the second captured image is different from the display effect of the second image area in the first captured image.
In another aspect, an embodiment of the present application provides an electronic device, including:
the camera is used for obtaining a first collected image and a second collected image; the first collected image is obtained when the shooting parameter of the camera is a first shooting parameter; the second collected image is obtained when the shooting parameter of the camera is a second shooting parameter; the display effect of the second image area in the second collected image is different from the display effect of the second image area in the first collected image;
the display screen is used for displaying a preview image and obtaining input operation for determining a first image area in the preview image; the preview image includes the first image region and a second image region different from the first image region; the input operation is used for changing the display effect of the second image area;
a processor for determining the first captured image as the preview image; and responding to the input operation to obtain the second shooting parameter.
In another aspect, an embodiment of the present application provides an image processing apparatus, including:
the acquisition module is used for acquiring a first acquired image through the camera; the first collected image is obtained when the shooting parameter of the camera is a first shooting parameter; obtaining a second collected image through the camera; the second collected image is obtained when the shooting parameter of the camera is the second shooting parameter; the display effect of the second image area in the second collected image is different from the display effect of the second image area in the first collected image;
the processing module is used for determining the first collected image as a preview image and displaying the preview image; the preview image includes a first image area and a second image area different from the first image area;
an acquisition module for acquiring an input operation for determining the first image region; the input operation is used for changing the display effect of the second image area; and responding to the input operation to obtain a second shooting parameter.
In a further aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the method.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
in the embodiment of the application, an input operation for determining the first image area is obtained; the input operation is used for changing the display effect of the second image area; responding to the input operation to obtain a second shooting parameter; obtaining a second collected image through the camera; the second collected image is obtained when the shooting parameter of the camera is the second shooting parameter; the display effect of the second image area in the second captured image is different from the display effect of the second image area in the first captured image. In this way, the image can be partitioned through the input operation, the display effect of different image areas is changed, and the effect of the different image areas in the whole image is controlled.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without inventive effort, wherein:
FIG. 1A is a schematic diagram illustrating a matrix photometry method in the related art;
FIG. 1B is a schematic diagram of center weighted average light measurement in the related art;
FIG. 1C is a schematic diagram illustrating a spot metering method according to the related art;
fig. 2A is a schematic view of an application scenario of the center-weighted light metering method;
fig. 2B is a schematic view of an application scenario of a photometric mode selection interface in the related art;
FIG. 2C is a schematic diagram of an application scenario of a composite image in the related art;
fig. 3 is a hardware entity diagram of an electronic device according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 6A is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
fig. 6B is a schematic view of an application scenario of an image processing method according to an embodiment of the present application;
fig. 6C is a schematic view of an application scenario of an image processing method according to an embodiment of the present application;
fig. 7 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 8 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The following examples are intended to illustrate the present application but are not intended to limit the scope of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
It should be noted that the terms "first/second/third" in the embodiments of the present application are only used to distinguish similar objects and do not imply a specific ordering of the objects. It should be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which embodiments of the present application belong. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In the related art, photometry (light metering) is the method an electronic device uses to determine the exposure time when capturing an image. Commonly used metering methods are average (matrix) metering, center-weighted average metering, and spot metering. FIG. 1A is a schematic diagram illustrating the matrix metering method in the related art; as shown in fig. 1A, in matrix metering the image is divided into several areas vertically and horizontally, the central area 11 serves as the main metering basis, and the result is then averaged with the remaining areas. FIG. 1B is a schematic diagram of center-weighted average metering in the related art; as shown in fig. 1B, in center-weighted average metering each area is given a weighting coefficient, with the weight largest at the center 12 of the image and smaller toward the edges of the frame, and the weighted average is the metering value. FIG. 1C is a schematic diagram illustrating the spot metering method in the related art; as shown in fig. 1C, in spot metering the metering is performed on a single spot 13 and is not affected by other areas.
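For illustration only (this sketch is not part of the patent disclosure), the three metering modes can be modeled as weight grids over the image; the 8x8 grid size, the Gaussian falloff, and the spot position are illustrative assumptions:

    import numpy as np

    def matrix_weights(rows=8, cols=8):
        """Matrix (average) metering: every sub-region contributes equally."""
        return np.full((rows, cols), 1.0 / (rows * cols))

    def center_weighted_weights(rows=8, cols=8, sigma=2.0):
        """Center-weighted average metering: the weight is largest at the
        center of the frame and falls off toward the edges (Gaussian
        falloff is an assumption; real cameras use tuned tables)."""
        y, x = np.mgrid[0:rows, 0:cols]
        cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
        w = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
        return w / w.sum()

    def spot_weights(rows=8, cols=8, spot=(3, 3)):
        """Spot metering: only the chosen sub-region is measured."""
        w = np.zeros((rows, cols))
        w[spot] = 1.0
        return w

    # The metered brightness is the weighted sum of the per-sub-region
    # luminances: R = (L * W).sum().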
Because the dynamic range of the electronic device capturing the image is far lower than that of the actual scene, whichever metering method is adopted, highlight or dark areas of the captured image may contribute too heavily to metering and degrade how the photographic subject appears in the whole image. Fig. 2A is a schematic view of an application scenario of the center-weighted metering method; as shown in fig. 2A, a highlight region 21 outside the portrait strongly influences metering, shortening the exposure time and underexposing the portrait 22.

To solve the above problems, two solutions are provided in the related art: 1) Providing a selection menu of metering modes. A selection menu of metering methods is provided in the electronic device, allowing the user to select a metering mode according to actual needs. Fig. 2B is a schematic view of an application scenario of a metering mode selection interface in the related art; as shown in fig. 2B, the metering mode selection menu 23 includes the center-weighted average metering method, the matrix metering method, and the spot metering method, and the user can tap the selection menu to choose a metering mode.

2) Adopting High Dynamic Range (HDR) composite imaging to suppress the brightness of highlight areas and raise the brightness of dark areas.

The selection-menu approach has the following problems: many users do not understand what metering means, so to give the best shooting experience the electronic device usually defaults to center-weighted metering and automatically switches to face-weighted metering once a portrait is detected. Only in professional mode is control of the metering mode handed entirely to the user. Even if the user selects a metering mode, when the luminance of a non-subject area of the frame is at either extreme, that area still strongly affects metering despite its small weight, so the photographic subject is not prominent.
The approach using HDR technology has the following problems. First, the images being combined may be inconsistent. The HDR technique combines three images into one, and since unknown differences may exist between elements of the three images, the combined result may show inconsistencies; for example, where the scene contains moving elements, artifacts such as ghosting may occur. Second, the HDR technique does not really increase the dynamic range of the image: while HDR suppresses the brightness of the highlight region and raises the brightness of the dark region, the contrast of the region between the highlight region and the dark region is squeezed. As shown in fig. 2C, compared with normal imaging, HDR imaging can indeed suppress the highlight region 24 and raise the brightness of the human body, but at the cost of reduced contrast in the region 25.
In order to solve the above problems, the present application provides an image processing method applied to an electronic device. Before the image processing method provided in the embodiments of the present application is introduced, the hardware of the electronic device is described. As shown in fig. 3, a hardware entity diagram of the electronic device provided in an embodiment of the present application, the electronic device 300 includes a camera 301, a display screen 302, and a processor 303:
the camera 301 is configured to obtain a first collected image and a second collected image; the first collected image is obtained when the shooting parameter of the camera is a first shooting parameter; the second collected image is obtained when the shooting parameter of the camera is a second shooting parameter; the display effect of the second image area in the second collected image is different from the display effect of the second image area in the first collected image;
a display screen 302, configured to display a preview image, and obtain an input operation for determining a first image area in the preview image; the preview image includes the first image region and a second image region different from the first image region; the input operation is used for changing the display effect of the second image area;
a processor 303 for determining the first captured image as the preview image; and responding to the input operation to obtain the second shooting parameter.
In one implementation, the second image region is determined based on the first image region.
In one implementation, the processor 303 is further configured to: obtaining the position of a display object stacked on the preview image; the position of the display object changes based on an input operation; wherein the area of the preview image covered by the display object is determined to be a first image area based on the position of the display object.
In one implementation, the area and/or shape of the display object is adjustable; the display object is a window which is used for displaying an image acquired by another camera of the electronic equipment; or the window is used for displaying continuous video frame images sent by the video call counterpart.
In one implementation, the processor 303 is further configured to: obtaining an input track; wherein the partial area of the preview image corresponding to the input trajectory is determined to be a first image area based on the input trajectory.
In one implementation, the processor 303 is further configured to: process the first captured image based on a recognition algorithm to obtain at least two image areas in the first captured image; display indication items of the at least two image areas in the preview image; and determine a target image area based on a selection operation for the indication items; wherein the target image area is the first image area of the preview image.
In one implementation, the processor 303 is further configured to: determine, from the second image area, a target calculation region that participates in photometry; obtain a brightness based on the content of each sub-region in the target calculation region; and obtain an exposure time based on the weight of each sub-region in the target calculation region; the exposure time is the second shooting parameter.
Based on the electronic device shown in fig. 3, the present application provides an image processing method, fig. 4 is a schematic flowchart of an image processing method provided in an embodiment of the present application, and is applied to an electronic device, as shown in fig. 4, the method at least includes the following steps:
step S401, a first collected image is obtained through a camera; the first collected image is obtained when the shooting parameter of the camera is a first shooting parameter;
here, the camera may include a lens and a photosensitive component, and the photosensitive component is adjusted according to the shooting parameters. For example, the photosensitive component adjusts the exposure duration used when acquiring an image according to the exposure parameter.

Here, the first captured image is an image acquired by the camera in real time, and the first shooting parameters may include an aperture and an exposure time: the aperture controls the depth of field of the shot image, and the exposure time controls the brightness of the image.
Step S402, determining the first collected image as a preview image, and displaying the preview image; the preview image includes a first image area and a second image area different from the first image area;
here, the process of determining the first captured image as a preview image and displaying the preview image is in real time.
Here, the first image region may be a region selected by a user in a preview image. In one implementation manner, a user selects a first image area in a preview image, and after the first image area is determined, the part of the preview image except the first image area is determined as a second image area.
Illustratively, as shown in fig. 2C, the user determines the region 24 as the first image region, and then the region other than the region 24 in the preview image is determined as the second image region.
Step S403, obtaining an input operation for determining the first image region; the input operation is used for changing the display effect of the second image area;
here, the input operation is used to determine the first image area. There are multiple types of input operation: the first image area may be input manually by the user, or determined by the user adjusting a preset area. The preset area may likewise be of multiple types, such as the area of a video window stacked on the preview image, the area of an AR sticker, or the area of a watermark.
In an implementation manner, the changing of the display effect of the second image area through the input operation may be: and adjusting the region participating in photometry in the preview image through input operation, so as to adjust the shooting parameters and further adjust the display effect of the second region.
In an implementation manner, the display effect of the second image region may be brightness, and before changing the brightness of the second image region, the brightness of the second image region may be greater than or less than an average brightness of the preview image. By changing the brightness of the second image area, the brightness of the second image area can be made brighter, or darker.
In one implementation, photometry is performed on the preview image with the first image area determined as an area removed from photometry. After the first image area is removed, the preview image may become brighter or darker; the change depends on the brightness of the first image area.

For example, if the brightness of the first image area is high, the exposure time of the preview image becomes longer after the first image area is removed, and the preview image becomes brighter as a whole.

For example, if the brightness of the first image area is low, the exposure time of the preview image becomes shorter after the first image area is removed, and the preview image becomes darker as a whole.
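A small numeric sketch (with illustrative values only, not taken from the patent) shows why removing a bright first image area lengthens the exposure time, following the weighted-metering relation E = T_L / R_L that the description develops below:

    import numpy as np

    # Illustrative 2x2 metering grid: one very bright sub-region (luminance
    # 400) next to three moderate ones, with equal weights. The target
    # brightness T_L = 100 is an arbitrary assumed value.
    L = np.array([[400.0, 90.0],
                  [80.0, 70.0]])
    W = np.full((2, 2), 0.25)
    T_L = 100.0

    R_all = (L * W).sum()                # 160.0 -> E = 100 / 160 = 0.625
    keep = np.array([[False, True],
                     [True, True]])      # drop the bright first image area
    R_kept = (L * np.where(keep, W, 0.0)).sum()   # 60.0 -> E ~= 1.67

    print(T_L / R_all)    # 0.625: short exposure, image darker overall
    print(T_L / R_kept)   # 1.666...: longer exposure, image brighter overall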
Step S404, responding to the input operation to obtain a second shooting parameter;
here, the second shooting parameter may be an exposure duration, and the photosensitive component in the camera adjusts the exposure length used when acquiring the image based on that exposure duration.
Step S405, obtaining a second collected image through the camera; the second collected image is obtained when the shooting parameter of the camera is the second shooting parameter; the display effect of the second image area in the second captured image is different from the display effect of the second image area in the first captured image.
Here, "first" and "second" in the first captured image and the second captured image are used to distinguish captured images obtained by a camera in the case of different shooting parameters, and are not used to define a time relationship when different captured images are obtained.
Here, any number of frames may lie between the first captured image and the second captured image. Illustratively, the camera acquires N frames of images in real time; the first captured image may be the 1st frame of the N frames, the second captured image may be the Nth frame, and N-2 frames lie between the first captured image and the second captured image, where N may be any positive integer greater than or equal to 3.
Here, the display effect may be a brightness of the image, and after the photographing parameters are adjusted from the first photographing parameters to the second photographing parameters, the brightness of the second image area in the first captured image is different from the brightness of the second image area in the second captured image.
Here, the second captured image may be a preview image or an image stored in a specified format. In an implementation manner, when the user browses the preview image, a photographing action is performed, and the electronic device stores the second captured image in the memory, where the storage format of the second captured image may be a JPG/RAW format. In an implementation manner, the second shooting parameter is effective for both the preview image and the photographed image obtained after the photographing action is performed.
In the above embodiment, the input operation for determining the first image region is obtained; the input operation is used for changing the display effect of the second image area; responding to the input operation to obtain a second shooting parameter; obtaining a second collected image through the camera; the second collected image is obtained when the shooting parameter of the camera is the second shooting parameter; the display effect of the second image area in the second captured image is different from the display effect of the second image area in the first captured image. In this way, the image can be partitioned through the input operation, the display effect of different image areas is changed, and the effect of the different image areas in the whole image is controlled.
Based on the electronic device shown in fig. 3, the present application provides an image processing method, fig. 5 is a schematic flowchart of the image processing method provided in the embodiment of the present application, and as shown in fig. 5, the method at least includes the following steps:
step S501, a first collected image is obtained through a camera; the first collected image is obtained when the shooting parameter of the camera is a first shooting parameter;
step S502, determining the first collected image as a preview image, and displaying the preview image; the preview image includes a first image area and a second image area different from the first image area;
step S503, obtaining a position of a display object layered on the preview image; the position of the display object changes based on an input operation; wherein the area of the preview image covered by the display object is determined to be a first image area based on the position of the display object; the input operation is used for changing the display effect of the second image area;
here, the display object may be a video window layered on the preview image, and the position of the display object changes based on the input operation; that is, the display object can be dragged, and after dragging it can be located at any position in the preview image.

In one implementation, the display object may be used to block an area of the preview image whose display effect does not meet the user's expectation. For example, the user may occlude areas that are too bright, or alternatively too dark, in the preview image.
Step S504, responding to the input operation to obtain a second shooting parameter;
step S505, a second collected image is obtained through the camera; the second collected image is obtained when the shooting parameter of the camera is the second shooting parameter; the display effect of the second image area in the second captured image is different from the display effect of the second image area in the first captured image.
In one implementation, the second image region is determined based on the first image region.
Here, the first image region is dynamically variable, and the second image region is determined according to a change in the first image region.
In one implementation, the area and/or shape of the display object is adjustable; the display object is a window, which is used for displaying an image collected by another camera of the electronic device, or for displaying continuous video frame images sent by the other party of a video call.

Illustratively, user A calls user B for a video call; the video frame image of user A is displayed in the large window and the video frame image of user B in the small window, and the display effect, for example the brightness, is adjusted by dragging the position of the small window within the large window.

Illustratively, user A calls user B for a video call; the video frame image of user A is displayed in the small window and the video frame image of user B in the large window. User A judges that the brightness of the portrait of user B displayed in the large window is too low, and adjusts the brightness of the preview image in the large window by dragging the small window, so that the brightness of the portrait of user B in the large window is improved. When the brightness reaches a target value, the position of the small window within the large window and the size of the small window are sent to user B, so that the electronic device used by user B adjusts the brightness of the preview image containing user B according to the received position and size and outputs video frame images in which the portrait meets the target value. In this process, the second captured image obtained in response to the second shooting parameter is no longer obtained by the local electronic device but by the peer device: the first captured image is captured by the electronic device used by user B and transmitted to the electronic device used by user A; after user A performs the brightness adjustment operation, the second shooting parameter is obtained, the second captured image is captured according to the second shooting parameter, and the second captured image is displayed to user A. In this process, the camera module is adjusted according to the exposure parameter calculated from the position and size of the small window within the large window as determined on the peer electronic device.
In one implementation, when the window is used to display an image collected by another camera of the electronic device, the two cameras may be a front camera and a rear camera, respectively. One of them serves as the main camera and its image occupies the large window, while the other is shown in a small window stacked on the large window. The area, shape, and position of the small window, through which the first image region can be determined, are adjustable.
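For illustration (the window geometry, the 8x8 metering grid, and the overlap rule below are assumptions rather than the patent's specification), the sub-regions covered by the small window can be derived from its rectangle:

    import numpy as np

    def first_image_area_from_window(img_w, img_h, win):
        """Mark the cells of an 8x8 metering grid covered by a draggable
        small window as the first image area. `win` is (x, y, w, h) in
        pixels; the grid size and overlap rule are assumptions."""
        rows, cols = 8, 8
        wx, wy, ww, wh = win
        cell_w, cell_h = img_w / cols, img_h / rows
        covered = np.zeros((rows, cols), dtype=bool)
        for r in range(rows):
            for c in range(cols):
                x0, y0 = c * cell_w, r * cell_h
                x1, y1 = x0 + cell_w, y0 + cell_h
                # A cell belongs to the first image area if it overlaps
                # the window rectangle at all.
                covered[r, c] = not (x1 <= wx or x0 >= wx + ww or
                                     y1 <= wy or y0 >= wy + wh)
        return covered  # True cells are later excluded from metering

    # Example: a 480x270 window dragged to (100, 50) on a 1920x1080 preview.
    mask = first_image_area_from_window(1920, 1080, (100, 50, 480, 270))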
In the above embodiment, on the one hand, the second image region is determined based on the first image region. Therefore, the first image area and the second image area can be dynamically changed, the adjustment by a user is facilitated, and the requirement of the user for adjusting the display effect of the preview image is met.
On the other hand, obtaining a position of a display object layered on the preview image; the position of the display object changes based on an input operation; wherein the area of the preview image covered by the display object is determined to be a first image area based on the position of the display object. In this way, the partial area in the preview image can be selectively covered by adjusting the display object, and the requirement of the user for adjusting the display effect of the preview image is met.
In yet another aspect, the area and/or shape of the display object is adjustable; the display object is a window which is used for displaying an image acquired by another camera of the electronic equipment; or the window is used for displaying continuous video frame images sent by the video call opposite side. Therefore, partial area in the preview image can be covered by the window, and the requirement of the user for adjusting the display effect of the preview image is met by adjusting the area and/or the shape of the display object.
Based on the electronic device shown in fig. 3, the present application provides an image processing method, and fig. 6A is a schematic flowchart of the image processing method provided in the embodiment of the present application, and as shown in fig. 6A, the method at least includes the following steps:
step S601, obtaining a first collected image through a camera; the first collected image is obtained when the shooting parameter of the camera is a first shooting parameter;
step S602, determining the first collected image as a preview image, and displaying the preview image; the preview image includes a first image area and a second image area different from the first image area; the second image region is determined based on the first image region;
step S603 obtains an input trajectory; determining a partial area of the preview image corresponding to the input track as a first image area based on the input track; the input operation is used for changing the display effect of the second image area;
here, the input trajectory may be the trajectory drawn when the user outlines an area in the preview image to define the first image area.
Illustratively, as shown in fig. 6B, the input trajectory may be a trajectory shown by a dotted line when the user demarcates the first image area 61.
Step S604, responding to the input operation to obtain a second shooting parameter;
step S605, obtaining a second collected image through the camera; the second collected image is obtained when the shooting parameter of the camera is the second shooting parameter; the display effect of the second image area in the second captured image is different from the display effect of the second image area in the first captured image.
In one implementation, after the area 61 is selected, the part outside the area 61 is determined as the second image area of the first captured image; a second shooting parameter is obtained based on this second image area, and a second captured image is captured according to the second shooting parameter. The resulting second captured image is shown in fig. 6C: the part outside the area 61 is the second image area of the second captured image, and the brightness of the second image area of the second captured image is higher than that of the second image area of the first captured image.
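As a hedged illustration (the rasterization approach and the PIL-based helper below are assumptions, not part of the patent disclosure), a closed input trajectory can be turned into the first image area by treating the sampled touch points as a polygon:

    import numpy as np
    from PIL import Image, ImageDraw

    def first_image_area_from_trajectory(points, img_w, img_h):
        """Rasterize a closed input trajectory into a boolean mask; True
        pixels form the user-drawn first image area. `points` is a list
        of (x, y) positions sampled from the drag gesture."""
        canvas = Image.new("1", (img_w, img_h), 0)
        ImageDraw.Draw(canvas).polygon(points, outline=1, fill=1)
        return np.array(canvas, dtype=bool)

    # Example: a rough quadrilateral traced around an over-bright region.
    mask = first_image_area_from_trajectory(
        [(200, 100), (600, 120), (590, 380), (210, 360)], 1280, 720)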
In the above embodiment, the input trajectory is obtained; wherein the partial area of the preview image corresponding to the input trajectory is determined to be a first image area based on the input trajectory. Therefore, the first image area can be customized, the edge of the first image area is closer to the expectation of a user, the area shape meeting the requirements of the user is obtained, and the requirement of the user for adjusting the display effect of the preview image is met.
Based on the electronic device shown in fig. 3, the present application provides an image processing method, and fig. 7 is a schematic flowchart of the image processing method provided in the embodiment of the present application; as shown in fig. 7, the method at least includes the following steps:

Step S701, obtaining a first collected image through a camera; the first collected image is obtained when the shooting parameter of the camera is a first shooting parameter;

Step S702, determining the first collected image as a preview image, and displaying the preview image; the preview image includes a first image area and a second image area different from the first image area; the second image region is determined based on the first image region;

Step S703, processing the first collected image based on a recognition algorithm to obtain at least two image areas in the first collected image;
here, the recognition algorithm may include a face recognition algorithm, or a depth recognition algorithm. The face recognition algorithm is used for recognizing a face in the preview image. In one implementation, the non-face region may be determined as the first image region. The depth recognition algorithm is used to identify the foreground region in the preview image, and in one implementation, the non-foreground region may be determined as the first image region.
Step S704, displaying the indication items of the at least two image areas in the preview image;
here, the indication item may be an identification of an area, and the selection operation for the indication item may be selecting the area bearing that identification; the selection operation may be a touch operation on the display screen detected by the electronic device. The at least two image regions may have a regular shape, for example rectangular, or an irregular shape, for example one following the edge contour of a portrait.
Illustratively, according to a face recognition algorithm, two regions are determined: the system comprises a face area and a non-face area, wherein the identification of the face area is an area 1; the identification of the non-face region is region 2; in the case where the region 1 is selected, the region 1 is determined as a target image region, which is a first image region. In this case, the region 2 is the second image region.
Step S705 of determining a target image area based on the selection operation for the indicator; wherein the target image area is a first image area of the preview image; the input operation is used for changing the display effect of the second image area;
illustratively, when the non-face region is determined as the target image region, the non-face region is determined as the first image region of the preview image.
Step S706, responding to the input operation to obtain a second shooting parameter;
step S707, acquiring a second acquired image through the camera; the second collected image is obtained when the shooting parameter of the camera is the second shooting parameter; the display effect of the second image area in the second captured image is different from the display effect of the second image area in the first captured image.
In the above embodiment, the first collected image is processed based on a recognition algorithm to obtain at least two image regions in the first collected image; indication items of the at least two image areas are displayed in the preview image; and a target image area is determined based on the selection operation for the indication items, wherein the target image area is the first image area of the preview image. In this way, the electronic device can offer region options through automatic recognition, shortening the time needed to determine the first image area, making the operation of changing the display effect more convenient, and meeting user needs.
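For illustration only (the detector interface below is an assumption; the patent does not specify one), the recognition step can be sketched as producing selectable region masks:

    import numpy as np

    def region_indicators(frame, detect_faces):
        """Split a preview frame into selectable face / non-face regions.
        `detect_faces` is a placeholder for any detector that returns
        (x, y, w, h) bounding boxes, e.g. an OpenCV CascadeClassifier."""
        h, w = frame.shape[:2]
        face_mask = np.zeros((h, w), dtype=bool)
        for (x, y, bw, bh) in detect_faces(frame):
            face_mask[y:y + bh, x:x + bw] = True
        # The keys are the indication items shown on the preview; the user
        # selects one, and the chosen mask becomes the first image area.
        return {"area 1 (face)": face_mask, "area 2 (non-face)": ~face_mask}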
Based on the electronic device shown in fig. 3, the present application provides an image processing method, and fig. 8 is a schematic flowchart of the image processing method provided in the embodiment of the present application; as shown in fig. 8, the method at least includes the following steps:

Step S801, obtaining a first collected image through a camera; the first collected image is obtained when the shooting parameter of the camera is a first shooting parameter;

Step S802, determining the first collected image as a preview image, and displaying the preview image; the preview image includes a first image area and a second image area different from the first image area; the second image region is determined based on the first image region;

Step S803, obtaining an input operation for determining the first image region; the input operation is used for changing the display effect of the second image area;

Step S804, determining, from the second image area, a target calculation region that participates in photometry;
illustratively, the screen may be divided into several sub-regions according to the principle of photometry, as shown in table 1, and among these sub-regions, the region belonging to the target calculation region is the region shown in table 2.
TABLE 1 luminance of different subregions
L11 L12 L13 L14 L15 L16 L17 L18
L21 L22 L23 L24 L25 L26 L27 L28
L31 L32 L33 L34 L35 L36 L37 L38
L41 L42 L43 L44 L45 L46 L47 L48
L51 L52 L53 L54 L55 L56 L57 L58
L61 L62 L63 L64 L65 L66 L67 L68
L71 L72 L73 L74 L75 L76 L77 L78
L81 L82 L83 L84 L85 L86 L87 L88
TABLE 2 luminance of sub-regions in a target calculation region
[Table 2 appears in the original as an image (Figure BDA0003453617240000151); it lists the luminance values L of only those sub-regions of Table 1 that belong to the target calculation region.]
Step S805, obtaining brightness based on the content of each sub-area in the target calculation area;
illustratively, each sub-region calculates its own brightness from the image content, as shown in table 2, where L represents the brightness value in the different sub-regions.
Step S806, obtaining exposure time based on the weight of each sub-area in the target calculation area; the exposure time is the second shooting parameter;
illustratively, the luminance values of the different sub-regions in table 1 are different, but the weight value of each sub-region is determined, as shown in table 3, W represents the weight in the different sub-regions.
TABLE 3 weights of different subregions
[Table 3 appears in the original as images (Figure BDA0003453617240000152 and Figure BDA0003453617240000161); it is an 8x8 grid of weights W11 through W88 corresponding to the sub-regions of Table 1.]

The weights of the sub-regions listed in Table 2 are taken from Table 3.
In one implementation, obtaining the exposure time based on the weight of each sub-region in the target calculation region includes: determining the brightness of the target calculation region based on the weight and the brightness of each sub-region in the target calculation region, and reacquiring the exposure time based on the brightness of the target calculation region.

Illustratively, the luminance R'_L of the target calculation region is determined by formula (1):

R'_L = L32*W32 + L33*W33 + ... + L88*W88    (1)

Denoting the target brightness by T_L and the reacquired exposure time by E', the exposure time E' can be calculated by formula (2):

E' = T_L / R'_L    (2)

In one implementation, the brightness R_L of the entire preview image can be calculated by formula (3):

R_L = L11*W11 + L12*W12 + ... + L88*W88    (3)

In a given scene, the target brightness is a constant value, set as T_L. Denoting the exposure time by E, the exposure time of the preview image before the target calculation region is set can be calculated by formula (4):

E = T_L / R_L    (4)
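For illustration only (not part of the patent as filed), formulas (1) through (4) can be pulled together in a short Python sketch; the 8x8 grid, the assumed target brightness value, and the choice not to renormalize the remaining weights follow the formulas as written rather than any production implementation:

    import numpy as np

    def exposure_time(L, W, target_mask=None, T_L=100.0):
        """Exposure time per formulas (1)-(4): E = T_L / R.

        L, W        -- 8x8 arrays of sub-region luminances and weights
                       (Tables 1 and 3).
        target_mask -- boolean 8x8 array; True marks the sub-regions of
                       the target calculation region (Table 2). None
                       meters the whole preview image, formula (3).
        T_L         -- target brightness; 100.0 is an assumed value.
        """
        if target_mask is None:
            R = (L * W).sum()                    # formula (3)
        else:
            # Formula (1) as written: excluded sub-regions simply drop
            # out; their weights are not redistributed.
            R = (L * np.where(target_mask, W, 0.0)).sum()
        return T_L / R                           # formulas (2) and (4)

    # Usage: exclude a bright first image area in the top-left corner.
    L = np.random.uniform(50.0, 200.0, (8, 8))
    W = np.full((8, 8), 1.0 / 64.0)
    mask = np.ones((8, 8), dtype=bool)
    mask[0:2, 0:3] = False                       # the first image area
    E_before = exposure_time(L, W)               # whole-image metering
    E_after = exposure_time(L, W, target_mask=mask)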
in the above process, after the target calculation region is set, the exposure time of the image may change. If the non-target calculation region covers the parts other than the photographic subject, the exposure time shifts in a direction favourable to bringing the target calculation region to the desired luminance.
Step S807, obtaining a second captured image by the camera; the second collected image is obtained when the shooting parameter of the camera is the second shooting parameter; the display effect of the second image area in the second captured image is different from the display effect of the second image area in the first captured image.
In the above embodiment, a target calculation region that participates in photometry is determined from the second image area; a brightness is obtained based on the content of each sub-region in the target calculation region; and an exposure time, which is the second shooting parameter, is obtained based on the weight of each sub-region in the target calculation region. In this way, the exposure time can be calculated over a selected local area by determining the target calculation region, which avoids over-exposed areas affecting the brightness of the captured image and meets the user's need to adjust the display effect of the preview image.
In any metering mode, the image processing method provided by the present application aims to bring the image to a set target brightness T_L for any position and size of the first image area. When the area participating in the exposure time calculation is smaller than the preview image area, the exposure time E multiplied by the brightness value R_L, calculated by weighting each sub-area participating in the exposure time calculation, equals the target brightness. Before the first image area is divided out of the preview image, the exposure time required to reach the target brightness is determined from the weights of all sub-areas; after the first image area is divided out, the exposure time required to reach the target brightness is determined from the weights of the sub-areas in the target calculation region.
The image processing method provided by the present application covers the following application scenarios: 1) Professional shooting: combined with the different metering modes, it provides a highly customizable metering experience. 2) AR sticker: the sticker covers part of the shooting scene; an artificial intelligence (AI) algorithm places the sticker over a first image area that strongly interferes with overall metering, and that area is removed from the exposure time calculation, so the exposure of the preview image shifts in a direction beneficial to the whole image. 3) Picture-in-picture: during a video call, a picture-in-picture sub-window covers part of the shooting scene; an AI algorithm places the sub-window over a first image area that strongly interferes with overall metering, and that area is removed from the exposure time calculation, so the exposure of the preview image shifts in a direction beneficial to the whole image. 4) Watermark: a watermark covers part of the shooting scene and is placed over a first image area that strongly interferes with overall metering; that area is removed from metering, making the exposure more faithful.
Based on the foregoing embodiments, an embodiment of the present application provides an image processing apparatus, which includes modules and the units included in the modules, and may be implemented by a processor in an electronic device; of course, it may also be implemented by specific logic circuits. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 9 is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present application, and as shown in fig. 9, the apparatus 900 includes an acquisition module 901, a processing module 902, and an acquisition module 903, where:
the acquisition module 901 is configured to obtain a first acquired image through a camera; the first collected image is obtained when the shooting parameter of the camera is a first shooting parameter; obtaining a second collected image through the camera; the second collected image is obtained when the shooting parameter of the camera is the second shooting parameter; the display effect of the second image area in the second collected image is different from the display effect of the second image area in the first collected image;
the processing module 902 is configured to determine the first captured image as a preview image, and display the preview image; the preview image includes a first image area and a second image area different from the first image area;
the obtaining module 903 is configured to obtain an input operation for determining the first image area; the input operation is used for changing the display effect of the second image area; and responding to the input operation to obtain a second shooting parameter.
In some possible embodiments, the second image region is determined based on the first image region.
In some possible embodiments, the obtaining module 903 is further configured to: obtaining a position of a display object layered on the preview image; the position of the display object changes based on an input operation; wherein the area of the preview image covered by the display object is determined to be a first image area based on the position of the display object.
In some possible embodiments, the area and/or shape of the display object is adjustable; the display object is a window which is used for displaying an image acquired by another camera of the electronic equipment; or the window is used for displaying continuous video frame images sent by the video call opposite side.
In some possible embodiments, the obtaining module 903 is further configured to: obtaining an input track; wherein the partial area of the preview image corresponding to the input trajectory is determined to be a first image area based on the input trajectory.
In some possible embodiments, the obtaining module 903 is further configured to: process the first collected image based on a recognition algorithm to obtain at least two image areas in the first collected image; display indication items of the at least two image areas in the preview image; and determine a target image area based on the selection operation for the indication items; wherein the target image area is the first image area of the preview image.

In some possible embodiments, the obtaining module 903 is further configured to: determine, from the second image area, a target calculation region that participates in photometry; obtain a brightness based on the content of each sub-region in the target calculation region; and obtain an exposure time based on the weight of each sub-region in the target calculation region; the exposure time is the second shooting parameter.
Here, it should be noted that: the above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
In the embodiment of the present application, if the image processing method is implemented in the form of a software functional module and sold or used as a standalone product, the image processing method may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application or portions of the technical solutions that contribute to the related art may be embodied in the form of a software product, where the computer software product is stored in a storage medium and includes several instructions for enabling a terminal (which may be a smartphone with a camera, a tablet computer, or the like) to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the image processing method according to any of the above embodiments.
Correspondingly, in an embodiment of the present application, a chip is further provided, where the chip includes a programmable logic circuit and/or program instructions, and when the chip runs, the chip is configured to implement the steps in any of the image processing methods in the foregoing embodiments.
Correspondingly, in the embodiment of the present application, a computer program product is further provided, and when the computer program product is executed by a processor of a terminal, the computer program product is used for implementing the steps in the image processing method in any one of the foregoing embodiments.
In the above embodiments, the processor in the electronic device may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It can be understood that other electronic devices may also implement the processor functions described above, and the embodiments of the present application are not specifically limited in this respect.
The computer storage medium/memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, a Compact Disc Read-Only Memory (CD-ROM), or the like; it may also be one of various terminals, such as mobile phones, computers, tablet devices, and personal digital assistants, that include one or any combination of the above memories.
Here, it should be noted that the above description of the storage medium and device embodiments is similar to the description of the method embodiments, and these embodiments have beneficial effects similar to those of the method embodiments. For technical details not disclosed in the storage medium and device embodiments of the present application, refer to the description of the method embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification do not necessarily all refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application. The serial numbers of the embodiments of the present application are merely for description and do not imply any ranking of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application, in essence or the parts contributing to the related art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a device to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description covers only some embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method applied to an electronic device, the method comprising:
obtaining a first collected image through a camera; the first collected image is obtained when the shooting parameter of the camera is a first shooting parameter;
determining the first collected image as a preview image, and displaying the preview image; the preview image includes a first image area and a second image area different from the first image area;
obtaining an input operation for determining the first image area; the input operation is used for changing the display effect of the second image area;
responding to the input operation to obtain a second shooting parameter;
obtaining a second collected image through the camera; the second collected image is obtained when the shooting parameter of the camera is the second shooting parameter; the display effect of the second image area in the second collected image is different from the display effect of the second image area in the first collected image.
2. The method of claim 1, wherein the second image area is determined based on the first image area.
3. The method of claim 2, wherein the obtaining the input operation for determining the first image area comprises:
obtaining the position of a display object layered on the preview image; the position of the display object changes based on an input operation;
wherein,
the area of the preview image covered by the display object is determined to be a first image area based on the position of the display object.
4. The method of claim 3, wherein the display object is adjustable in area and/or shape;
the display object is a window, and the window is used for displaying an image acquired by another camera of the electronic device, or for displaying continuous video frame images sent by the other party of a video call.
5. The method of claim 2, wherein the obtaining the input operation for determining the first image area comprises:
obtaining an input track;
wherein,
a partial area of the preview image corresponding to the input track is determined to be a first image area based on the input track.
6. The method of claim 2, wherein the obtaining the input operation for determining the first image area comprises:
processing the first collected image based on a recognition algorithm to obtain at least two image areas in the first collected image;
displaying indicators of the at least two image areas in the preview image;
determining a target image area based on a selection operation for one of the indicators;
wherein,
the target image area is a first image area of the preview image.
7. The method of claim 2, wherein the obtaining a second shooting parameter in response to the input operation comprises:
determining a target calculation region in which the second image area participates in photometry;
obtaining a brightness based on the content of each sub-region in the target calculation region;
obtaining an exposure time based on the weight of each sub-region in the target calculation region; the exposure time is the second shooting parameter.
8. An electronic device, characterized in that the electronic device comprises:
the camera is used for obtaining a first collected image and a second collected image; the first collected image is obtained when the shooting parameter of the camera is a first shooting parameter; the second collected image is obtained when the shooting parameter of the camera is a second shooting parameter; the display effect of the second image area in the second collected image is different from the display effect of the second image area in the first collected image;
the display screen is used for displaying a preview image and obtaining input operation for determining a first image area in the preview image; the preview image includes the first image region and a second image region different from the first image region; the input operation is used for changing the display effect of the second image area;
a processor for determining the first captured image as the preview image; and responding to the input operation to obtain the second shooting parameter.
9. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module, which is used for obtaining a first collected image through a camera; the first collected image is obtained when the shooting parameter of the camera is a first shooting parameter; and for obtaining a second collected image through the camera; the second collected image is obtained when the shooting parameter of the camera is a second shooting parameter; the display effect of a second image area in the second collected image is different from the display effect of the second image area in the first collected image;
a processing module, which is used for determining the first collected image as a preview image and displaying the preview image; the preview image includes a first image area and the second image area different from the first image area;
an obtaining module, which is used for obtaining an input operation for determining the first image area; the input operation is used for changing the display effect of the second image area; and for obtaining the second shooting parameter in response to the input operation.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202111673229.8A 2021-12-31 2021-12-31 Image processing method and device, electronic equipment and storage medium Active CN114531551B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111673229.8A CN114531551B (en) 2021-12-31 2021-12-31 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114531551A 2022-05-24
CN114531551B CN114531551B (en) 2023-12-26

Family

ID=81621036

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111673229.8A Active CN114531551B (en) 2021-12-31 2021-12-31 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114531551B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007124365A (en) * 2005-10-28 2007-05-17 Ricoh Co Ltd Imaging apparatus
CN103905709A (en) * 2012-12-25 2014-07-02 联想(北京)有限公司 Electronic device control method and electronic device
CN105991915A (en) * 2015-02-03 2016-10-05 中兴通讯股份有限公司 Shooting method and apparatus, and terminal
CN106993139A (en) * 2017-04-28 2017-07-28 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
US20190052790A1 (en) * 2017-08-10 2019-02-14 Lg Electronics Inc. Mobile terminal
CN109697814A (en) * 2017-10-20 2019-04-30 佳能株式会社 Equipment, control method and medium are set
CN108377341A (en) * 2018-05-14 2018-08-07 Oppo广东移动通信有限公司 Photographic method, device, terminal and storage medium
CN110493538A (en) * 2019-08-16 2019-11-22 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110691199A (en) * 2019-10-10 2020-01-14 厦门美图之家科技有限公司 Face automatic exposure method and device, shooting equipment and storage medium
CN111277760A (en) * 2020-02-28 2020-06-12 Oppo广东移动通信有限公司 Shooting composition method, terminal and storage medium
CN112637515A (en) * 2020-12-22 2021-04-09 维沃软件技术有限公司 Shooting method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHUANG Kaige: "Portrait Photography in Progress: Sculpting Light" (人像摄影进行时 雕琢光线), 31 August 2016, pages 1-8 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114979487A (en) * 2022-05-27 2022-08-30 联想(北京)有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN117560576A (en) * 2023-11-13 2024-02-13 四川新视创伟超高清科技有限公司 Exposure method and exposure system for focal plane area
CN117560576B (en) * 2023-11-13 2024-05-07 四川新视创伟超高清科技有限公司 Exposure method and exposure system for focal plane area

Also Published As

Publication number Publication date
CN114531551B (en) 2023-12-26

Similar Documents

Publication Publication Date Title
CN110445988B (en) Image processing method, image processing device, storage medium and electronic equipment
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
CN104301624B (en) A kind of image taking brightness control method and device
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110225248B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN110248108B (en) Exposure adjustment and dynamic range determination method under wide dynamic state and related device
CN105812675B (en) Method for generating HDR images of a scene based on a compromise between luminance distribution and motion
CN104349066B (en) A kind of method, apparatus for generating high dynamic range images
CN111327824B (en) Shooting parameter selection method and device, storage medium and electronic equipment
RU2562918C2 (en) Shooting device, shooting system and control over shooting device
CN110445989B (en) Image processing method, image processing device, storage medium and electronic equipment
CN111028190A (en) Image processing method, image processing device, storage medium and electronic equipment
CN110213494B (en) Photographing method and device, electronic equipment and computer readable storage medium
CN114531551B (en) Image processing method and device, electronic equipment and storage medium
JP6218389B2 (en) Image processing apparatus and image processing method
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
WO2007126707A1 (en) Varying camera self-determination based on subject motion
CN105812670B (en) A kind of method and terminal taken pictures
CN114257738B (en) Automatic exposure method, device, equipment and storage medium
CN106791451B (en) Photographing method of intelligent terminal
CN110708463B (en) Focusing method, focusing device, storage medium and electronic equipment
CN111771372A (en) Method and device for determining camera shooting parameters
CN111246114A (en) Photographing processing method and device, terminal equipment and storage medium
CN112673311A (en) Method, software product, camera arrangement and system for determining artificial lighting settings and camera settings
CN110740266B (en) Image frame selection method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant