CN113852752B - Photo taking method, photo taking device and storage medium - Google Patents


Info

Publication number
CN113852752B
Authority
CN
China
Prior art keywords
depth
field
shot
original image
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010600401.6A
Other languages
Chinese (zh)
Other versions
CN113852752A (en)
Inventor
谢俊麒
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202010600401.6A priority Critical patent/CN113852752B/en
Publication of CN113852752A publication Critical patent/CN113852752A/en
Application granted granted Critical
Publication of CN113852752B publication Critical patent/CN113852752B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/958Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N23/959Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Automatic Focus Adjustment (AREA)
  • Exposure Control For Cameras (AREA)

Abstract

The present disclosure relates to a photo taking method, a photo taking device, and a storage medium. The photo taking method is applied to a terminal that supports portrait mode shooting and includes the following steps: in response to a user taking a photo and adjusting the depth of field in portrait mode, controlling a first camera device and a second camera device of the terminal to each capture an image, and determining the adjusted first depth of field, where the aperture of the first camera device is larger than the aperture of the second camera device; determining a second depth of field, the second depth of field being the depth of field corresponding to the original image captured by the first camera device and the original image captured by the second camera device; and performing depth of field extension on the second depth of field to obtain a photo matching the adjusted first depth of field. Through the present disclosure, the depth of field range can be adjusted, a photo with a large depth of field range can be obtained, and the effect of extending the depth of field is achieved.

Description

Photo taking method, photo taking device and storage medium
Technical Field
The disclosure relates to the technical field of image processing, and in particular relates to a photo shooting method, a photo shooting device and a storage medium.
Background
Terminals with a photo taking function have become widespread, making it convenient for people to take photos anytime and anywhere, and photos have become part of people's daily lives.
In the related art, a smartphone provides a portrait mode (Bokeh) when taking photos. When a photo is taken in portrait mode, depth information is calculated from the stereo parallax of the dual cameras (the main camera and the auxiliary camera) to achieve the effect of large-aperture background blurring. In portrait mode, the background is blurred on the basis of the maximum depth of field of the main camera, simulating the effect of a large aperture. As the sensor size and aperture of smartphone main cameras grow larger, the depth of field range of smartphone shooting becomes smaller. In portrait mode, the minimum aperture (maximum depth of field) is limited by the main camera's depth of field, so the depth of field cannot be further extended beyond that of the main camera to simulate the effect of a small aperture.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a photo taking method, a photo taking apparatus, and a storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided a photo taking method applied to a terminal that supports portrait mode shooting, the photo taking method including:
in response to a user taking a photo and adjusting the depth of field in portrait mode, controlling a first camera device and a second camera device of the terminal to each capture an image, and determining the adjusted first depth of field, where the aperture of the first camera device is larger than the aperture of the second camera device; determining a second depth of field, the second depth of field being the depth of field corresponding to the original image captured by the first camera device and the original image captured by the second camera device; and performing depth of field extension on the second depth of field to obtain a photo matching the adjusted first depth of field.
In one embodiment, the performing depth of field extension on the second depth of field to obtain a photograph conforming to the adjusted first depth of field includes:
in response to the adjusted first depth of field being deeper than the second depth of field, taking the original image captured by the second camera device as the main image, and obtaining a photo matching the adjusted first depth of field based on the second depth of field.
In one embodiment, the performing depth of field extension on the second depth of field to obtain a photograph conforming to the adjusted first depth of field includes:
in response to the adjusted first depth of field being shallower than the second depth of field, taking the original image captured by the first camera device as the main image, and obtaining a photo matching the adjusted first depth of field based on the second depth of field.
In one embodiment, controlling the first camera device of the terminal to capture images includes:
controlling the first camera device of the terminal to capture a plurality of photos at a short focusing distance, a medium focusing distance, and a long focusing distance. Performing depth of field extension on the second depth of field to obtain a photo matching the adjusted first depth of field includes:
fusing the plurality of photos to obtain a fused original image of the first camera device; and taking the fused original image of the first camera device as the main image, and obtaining a photo matching the adjusted first depth of field based on the second depth of field.
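The multi-focus capture and fusion described in this embodiment can be sketched as per-pixel sharpness selection: for each pixel, keep the value from the frame in which it is locally sharpest. This is only one plausible focus-stacking scheme, not necessarily the one used by the disclosure; the function name is a placeholder, and grayscale NumPy arrays are assumed for brevity.

```python
import numpy as np

def fuse_focus_stack(frames):
    """Fuse focus-bracketed frames by picking, per pixel, the frame with the
    strongest local Laplacian response (a simple sharpness measure)."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    # 4-neighbour Laplacian magnitude of each frame
    lap = np.abs(
        -4.0 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap, axis=0)   # index of the sharpest frame per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

A production pipeline would first align the frames (focus breathing changes magnification between the short-, medium-, and long-focus shots) and smooth the per-pixel selection map to avoid seams, but the core idea is the same.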
In one embodiment, the method further comprises:
storing separately the second depth of field corresponding to the original image captured by the first camera device and the second depth of field corresponding to the original image captured by the second camera device. Obtaining a photo matching the adjusted first depth of field based on the second depth of field includes:
determining the shooting distance of each subject in the original image captured by the first camera device based on its corresponding second depth of field, and determining the shooting distance of each subject in the original image captured by the second camera device based on its corresponding second depth of field; and determining the shooting distance of each subject in the main image matching the adjusted first depth of field based on the shooting distances of the subjects in the two original images, thereby obtaining a photo matching the adjusted first depth of field.
According to a second aspect of embodiments of the present disclosure, there is provided a photograph photographing apparatus applied to a terminal supporting portrait mode photographing, the photograph photographing apparatus including:
a shooting module configured to, in response to a user taking a photo in portrait mode and adjusting the depth of field, control a first camera device and a second camera device of the terminal to each capture an image and determine the adjusted first depth of field, where the aperture of the first camera device is larger than the aperture of the second camera device; a determining module configured to determine a second depth of field, the second depth of field being the depth of field corresponding to the original image captured by the first camera device and the original image captured by the second camera device; and a depth of field extension module configured to perform depth of field extension on the second depth of field to obtain a photo matching the adjusted first depth of field.
In one embodiment, the depth of field extension module is configured to:
in response to the adjusted first depth of field being deeper than the second depth of field, take the original image captured by the second camera device as the main image, and obtain a photo matching the adjusted first depth of field based on the second depth of field.
In one embodiment, the depth of field extension module is configured to:
in response to the adjusted first depth of field being shallower than the second depth of field, take the original image captured by the first camera device as the main image, and obtain a photo matching the adjusted first depth of field based on the second depth of field.
In one embodiment, the shooting module is configured to:
control the first camera device of the terminal to capture a plurality of photos at a short focusing distance, a medium focusing distance, and a long focusing distance; and the depth of field extension module is configured to:
fuse the plurality of photos to obtain a fused original image of the first camera device, take the fused original image of the first camera device as the main image, and obtain a photo matching the adjusted first depth of field based on the second depth of field.
In one embodiment, the depth of field extension module is further configured to:
store separately the second depth of field corresponding to the original image captured by the first camera device and the second depth of field corresponding to the original image captured by the second camera device; and the depth of field extension module is configured to:
determine the shooting distance of each subject in the original image captured by the first camera device based on its corresponding second depth of field, and determine the shooting distance of each subject in the original image captured by the second camera device based on its corresponding second depth of field; and determine the shooting distance of each subject in the main image matching the adjusted first depth of field based on the shooting distances of the subjects in the two original images, thereby obtaining a photo matching the adjusted first depth of field.
According to a third aspect of embodiments of the present disclosure, there is provided a photograph taking apparatus including:
a processor; and a memory for storing processor-executable instructions, where the processor is configured to perform the photo taking method of the first aspect or any implementation of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium storing instructions that, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the photo taking method of the first aspect or any implementation of the first aspect.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: in response to the user taking a photo and adjusting the depth of field in portrait mode, the first camera device and the second camera device of the terminal are controlled to capture images, and the second depth of field of the original images captured by the two camera devices is extended, so that a photo with the finally adjusted first depth of field is obtained, achieving the effect of extending the depth of field.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram illustrating a clear imaging range with a depth of field around F1.4 according to an exemplary embodiment.
Fig. 2 is a schematic diagram illustrating a clear imaging range with a depth of field around F5.6 according to an exemplary embodiment.
Fig. 3 is a schematic diagram illustrating a clear imaging range with a depth of field around F22 according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating a method of taking a photograph, according to an exemplary embodiment.
Fig. 5 is a schematic diagram illustrating a depth of field extension implementation of a photo taking method according to an exemplary embodiment.
Fig. 6 is a schematic view of a first image capturing apparatus and a second image capturing apparatus of a photo capturing method according to an exemplary embodiment.
Fig. 7 is a schematic diagram illustrating an implementation procedure of depth of field extension of a photo taking method according to an exemplary embodiment.
Fig. 8 is a flowchart illustrating a method of taking a photograph, according to an exemplary embodiment.
Fig. 9 is a schematic diagram showing a photo taking method focusing at a short distance, according to an exemplary embodiment.
Fig. 10 is a schematic diagram showing a photo taking method focusing at a medium distance, according to an exemplary embodiment.
Fig. 11 is a schematic diagram showing a photo taking method focusing at a long distance, according to an exemplary embodiment.
Fig. 12 is a schematic diagram showing a photo taking method after multi-frame depth of field fusion, according to an example embodiment.
Fig. 13 is a flowchart illustrating yet another photograph taking method according to an exemplary embodiment.
Fig. 14 is a block diagram of a photograph taking device, according to an example embodiment.
Fig. 15 is a block diagram illustrating an apparatus for photograph taking, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
With the rapid development of smart terminals, the functions they provide are continuously optimized. In particular, the camera and imaging field, as a main interface for data input, has developed rapidly, and the requirements placed on cameras keep increasing. The camera function of a smart terminal provides different modes to meet users' various photo requirements and the shooting requirements of different scenes. In the present disclosure, in an Android phone's camera function for example, the shooting modes may include a short-video mode, a slow-motion mode, a video recording mode, a photo mode, a portrait mode, a night scene mode, a square mode, a panoramic mode, a professional mode, a wide-angle mode, and other shooting modes.
Users can meet different shooting requirements through the different shooting modes of the smart terminal's camera function. For example, when a user needs to shoot a short video, the short-video mode is selected; an ordinary picture can be taken in the photo mode; and the portrait mode is used when shooting a portrait. By selecting the camera shooting mode appropriate to each scene, the captured images or videos better meet users' needs and offer a better experience, so these modes are widely loved by users.
The size of the depth of field affects the quality of a photo. Depth of field refers to the range of distance in front of and behind a subject, measured from the camera lens or other imager, within which imaging is sharp; in other words, the range of distance in front of and behind the focal point after focusing is completed is called the depth of field. Figs. 1 to 3 are schematic diagrams showing how different depth of field ranges correspond to different ranges of sharp imaging in a photo taking method according to an exemplary embodiment. In figs. 1 to 3, solid lines represent sharp imaging and dashed lines represent blurred imaging. Fig. 1 is a schematic diagram illustrating the sharply imaged range at a depth of field corresponding to about F1.4, according to an exemplary embodiment. As shown in fig. 1, at a depth of field corresponding to about F1.4, the sharply imaged range covers one bird, and the surrounding scene is completely blurred. Fig. 2 is a schematic diagram illustrating the sharply imaged range at a depth of field corresponding to about F5.6, according to an exemplary embodiment. As shown in fig. 2, at a depth of field corresponding to about F5.6, the sharply imaged range covers three birds, and the surrounding scene is completely blurred. Fig. 3 is a schematic diagram illustrating the sharply imaged range at a depth of field corresponding to about F22, according to an exemplary embodiment. As shown in fig. 3, at a depth of field corresponding to about F22, all the birds and the scene are clearly visible.
In the related art, the aperture, the lens focal length, and the distance from the focal plane to the subject are the main factors affecting the depth of field of a photo taking apparatus. The larger the aperture (the smaller the aperture value f), the shallower the depth of field; the smaller the aperture (the larger the aperture value f), the deeper the depth of field. The longer the focal length of the lens, the shallower the depth of field, and conversely the deeper. The closer the subject, the shallower the depth of field; the farther the subject, the deeper the depth of field.
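These relationships can be made concrete with the standard thin-lens depth of field approximation. The sketch below is illustrative only and not part of the claimed method; the function name and the example values (50 mm lens, 0.03 mm permissible circle of confusion, 3 m subject distance) are assumptions for the example.

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near/far limits of acceptable sharpness (thin-lens approximation).

    focal_mm: lens focal length; f_number: aperture value (f);
    subject_mm: focus distance; coc_mm: permissible circle of confusion.
    """
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        far = float("inf")  # everything out to infinity is acceptably sharp
    else:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# A smaller aperture (larger f-number) yields a deeper depth of field:
shallow = depth_of_field(50, 1.4, 3000)  # wide aperture, e.g. F1.4
deep = depth_of_field(50, 5.6, 3000)     # narrower aperture, e.g. F5.6
```

With these example numbers the F5.6 near/far range fully contains the F1.4 range, matching the rule above that a larger f-number deepens the depth of field.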
As the sensor size and aperture of the main camera of a smart terminal change, the depth of field range of the camera changes accordingly: the larger the camera's sensor size and aperture, the smaller its depth of field. In particular, in the portrait mode of the camera function, the equivalent minimum aperture (maximum depth of field) in the related art is generally around F6.0. Around F6.0 can be understood as the imaging effect corresponding to the F5.6 depth of field range in fig. 2 (the sharply imaged range covers three birds). The sharply imaged range shown in fig. 2 can thus be understood as the maximum depth of field range supported in the related art; because it is limited by the main camera's depth of field at the minimum aperture (maximum depth of field), the depth of field cannot be further extended beyond that of the main camera to simulate the effect of a small aperture.
Accordingly, the present disclosure provides a photo taking method that takes a photo with dual cameras in portrait mode (Bokeh). For convenience of description, one of the dual cameras is referred to as the first camera device and the other as the second camera device, where the aperture of the first camera device is larger than the aperture of the second camera device. In an example, the first camera device may be the main camera and the second camera device the auxiliary camera.
In the embodiments of the present disclosure, the first camera device and the second camera device each capture an image and obtain a corresponding original image. The second depth of field of the original image captured by the first camera device and of the original image captured by the second camera device is calculated based on the stereo parallax between the two camera devices. In response to the depth of field extension requirement that the user expresses by adjusting the depth of field, the depth of field of the two original images is extended to obtain a photo matching the adjusted first depth of field, achieving the effect of depth of field extension and yielding a picture whose depth of field meets the user's requirement. For example, by applying the depth of field extension method provided by the embodiments of the present disclosure, the sharply imaged range after extension may reach, for example, F22, as shown in fig. 3, where all the birds and the scene are clearly visible.
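The stereo-parallax depth computation mentioned above follows the usual triangulation relation depth = focal length × baseline / disparity. The following sketch is an illustration under assumed calibration values (focal length in pixels, baseline in millimetres), not the disclosure's actual implementation.

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Triangulated depth of a point seen by both cameras.

    focal_px: focal length in pixels; baseline_mm: distance between the
    two camera centres; disparity_px: horizontal pixel shift of the point
    between the two original images.
    """
    if disparity_px <= 0:
        return float("inf")  # no measurable shift: effectively at infinity
    return focal_px * baseline_mm / disparity_px

# Nearby subjects shift more between the two views than distant ones:
near_mm = depth_from_disparity(1000, 12.0, 8.0)  # large disparity -> close
far_mm = depth_from_disparity(1000, 12.0, 2.0)   # small disparity -> far
```

In practice the two views must first be rectified so that matching points lie on the same image row before disparities are measured; libraries such as OpenCV provide this step.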
The present disclosure will be described below with reference to the accompanying drawings and corresponding embodiments thereof.
Fig. 4 is a flowchart illustrating a photo taking method according to an exemplary embodiment. As shown in fig. 4, the photo taking method is used in a terminal that supports portrait mode shooting and includes the following steps.
In step S11, in response to the user taking a photo in portrait mode and adjusting the depth of field, the first camera device and the second camera device of the terminal are controlled to each capture an image, and the adjusted first depth of field is determined.
Wherein the aperture of the first image capturing device is larger than the aperture of the second image capturing device.
Since the aperture of the first camera device is larger than the aperture of the second camera device, the depth of field range of the image captured by the first camera device is smaller than the depth of field range of the image captured by the second camera device. It should be noted that, for convenience of description, the depth of field adjusted by the user according to need is referred to as the first depth of field.
In the embodiments of the present disclosure, the terminal responds to the user triggering the camera function on the terminal's display interface, then to the user triggering the portrait mode on the camera interface, and then to the depth of field effect of the portrait mode. The user takes a photo in portrait mode and can adjust the depth of field as needed. Through the portrait mode display interface of the camera function, the first camera device and the second camera device of the terminal are controlled to each capture the target scene, and the terminal further determines the first depth of field adjusted by the user.
In step S12, a second depth of field is determined.
For convenience of description and distinction, the depth of field of the original image captured by the first camera device and of the original image captured by the second camera device is referred to as the second depth of field. After the first camera device and the second camera device of the terminal each capture an image, the captured original images are obtained, and from them the terminal determines the second depth of field corresponding to the original image captured by the first camera device and the second depth of field corresponding to the original image captured by the second camera device.
In step S13, the second depth of field is extended to obtain a photograph conforming to the adjusted first depth of field.
In the embodiments of the present disclosure, the terminal performs depth of field extension on the second depth of field of the original image captured by the first camera device and the original image captured by the second camera device according to the determined user-adjusted depth of field, determines the range of sharp imaging, generates an image, and finally obtains a photo matching the depth of field adjusted by the user. Particularly when taking a photo of several people, sharp imaging of everyone can be achieved by adjusting the depth of field, yielding a picture in which all the people are sharp.
In the above embodiment, according to the photo taking method provided by the present disclosure, shooting is performed with the first camera device and the second camera device, and the effect of extending the depth of field is achieved by adjusting the second depth of field of their original images, obtaining a sharply imaged picture.
The following embodiments describe, with reference to the accompanying drawings, implementations of depth of field extension of the second depth of field of the original image captured by the first camera device and the original image captured by the second camera device.
In the embodiments of the present disclosure, in the dual-camera mode of the terminal, the sensor sizes and apertures of the first camera device and the second camera device differ: the sensor size and aperture of the first camera device are generally larger than those of the second camera device, i.e., the depth of field of the second camera device is larger than that of the first camera device. Therefore, the embodiments of the present disclosure can use the original image captured by the first camera device and/or the original image captured by the second camera device for depth of field extension. Fig. 5 is a schematic view showing the depth of field extension process of a photo taking method according to an exemplary embodiment. As shown in fig. 5, when shooting with the terminal's dual cameras, the terminal first controls the first camera device and the second camera device to each capture an image in response to the portrait mode of the camera function, obtaining the original image captured by the first camera device and the original image captured by the second camera device. Fig. 6 is a schematic diagram of the first camera device and the second camera device of a photo taking method according to an exemplary embodiment. As shown in fig. 6, the first camera device and the second camera device each capture the same subjects (a tree and a person). In the embodiments of the present disclosure, after the two camera devices take the photos, the original image captured by the first camera device and the original image captured by the second camera device are saved.
The second depth of field of the original image captured by the first camera device and of the original image captured by the second camera device is calculated by a conventional depth of field calculation method, and the two calculated depths of field are stored separately. Then, the required aperture is determined from the determined adjusted first depth of field, and the original image captured by the first camera device or the original image captured by the second camera device is selected as the main image after depth of field extension. According to the embodiments of the present disclosure, whether the original image captured by the first camera device or by the second camera device serves as the main image can be decided against a preset aperture threshold. For example, the preset aperture threshold may be understood as the maximum aperture that the terminal's hardware, such as its aperture, can support. The embodiments of the present disclosure are described with a preset aperture of F6.0 as an example. If the required aperture determined from the adjusted first depth of field is larger than F6.0 (the required depth of field is smaller than the depth of field corresponding to the F6.0 aperture), the original image captured by the first camera device is used as the main image. If the required aperture is smaller than F6.0 (the required depth of field is larger than the depth of field corresponding to the F6.0 aperture), the original image captured by the second camera device is used as the main image.
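The F6.0 decision rule described above can be summarized as follows. This is a minimal sketch, not the disclosure's implementation: the function and parameter names are placeholders, F6.0 is the example preset from this description, and "aperture larger than F6.0" is expressed as an f-number smaller than 6.0 (a larger physical aperture means a smaller f-number).

```python
F_NUMBER_THRESHOLD = 6.0  # example preset aperture threshold from this description

def choose_main_image(required_f_number, first_camera_raw, second_camera_raw,
                      threshold=F_NUMBER_THRESHOLD):
    """Pick the main image for depth of field extension.

    An aperture larger than F6.0 corresponds to f-number < 6.0, i.e. a
    required depth of field shallower than F6.0's, so the wide-aperture
    first camera's original image is used; otherwise the small-aperture
    second camera's original image is used.
    """
    if required_f_number < threshold:
        return first_camera_raw   # required depth of field shallower than F6.0's
    return second_camera_raw      # required depth of field deeper than F6.0's
```

For instance, a requested F2.0 look would keep the first camera's image as the main image, while a requested F11 look would switch to the second camera's image.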
A final sharply imaged photo is generated from the determined main image and the adjusted first depth of field determined in the above embodiment.
It should be understood that, in the conventional depth of field calculation method of the embodiments of the present disclosure, the depth of field may be determined from the aperture value of the lens, the focal length of the lens, the permissible circle of confusion diameter, and the object distance; the detailed calculation process is not described here.
In one implementation of the embodiments of the present disclosure, when a photograph with a depth-of-field range deeper than the conventional range is required, the original image shot by the second camera device may be used as the main image to obtain a photograph with a large depth of field. In the embodiments of the present disclosure, in response to the user adjusting the depth of field in the portrait mode of the terminal such that the adjusted first depth of field becomes deeper than the second depth of field of the original image shot by the first camera device and of the original image shot by the second camera device, the original image shot by the second camera device is acquired and used as the main image. A photograph meeting the large-depth-of-field effect is obtained based on the second depth of field of the original image shot by the first camera device and of the original image shot by the second camera device, together with the adjusted required first depth of field.
In another implementation of the embodiments of the present disclosure, when a photograph with a depth-of-field range shallower than the conventional range is required, the original image shot by the first camera device may be used as the main image to obtain a photograph with a small depth of field. In the embodiments of the present disclosure, in response to the user adjusting the depth of field in the portrait mode of the terminal such that the adjusted depth of field becomes shallower than the second depth of field of the original image shot by the first camera device and of the original image shot by the second camera device, the original images shot by the terminal's dual cameras are acquired, the original image shot by the first camera device is taken as the main image, and its depth-of-field information as the main image is determined based on the second depth of field of the two original images and the adjusted first depth of field, thereby obtaining a photograph with a small depth of field.
According to the embodiments of the present disclosure, by exploiting the different aperture characteristics of the first camera device and the second camera device, the original image shot by the first camera device or the original image shot by the second camera device is selected as the main image according to the required depth of field when the depth of field is extended, thereby achieving the effect of extending the depth of field.
In still another embodiment of the present disclosure, although the aperture of the first camera device cannot be adjusted, the first camera device can shoot original images at different focusing distances, so a photograph with extended depth of field can be obtained by performing multi-frame depth-of-field fusion on the original images shot by the first camera device at the different focusing distances. Therefore, in one embodiment of the present disclosure, a plurality of photographs at different focusing distances are shot at a small aperture by the first camera device, and multi-frame depth-of-field fusion is performed on them, further achieving a large depth-of-field extension.
Fig. 7 is a schematic diagram illustrating an implementation procedure of depth-of-field extension in a photo taking method according to an exemplary embodiment. As shown in fig. 7, when shooting with the terminal, the terminal first controls, in response to the portrait mode of the camera function, the first camera device and the second camera device of the dual-camera setup to shoot respectively, obtaining the original image shot by the first camera device and the original image shot by the second camera device. The original images shot by the first camera device are an original image focused on the subject object at a close distance, an original image focused on the subject object at a middle distance, and an original image focused on the subject object at a long distance, respectively. Multi-frame depth-of-field fusion is performed on these three original images obtained through the first camera device, yielding the fused original image of the first camera device alongside the original image shot by the second camera device. After the obtained original images are stored, the depth of field is calculated according to the original image shot by the first camera device, the original image shot by the second camera device, and the determined adjusted first depth of field, and the calculated depth of field is stored independently in a corresponding file.
The depth of field required by the user is then obtained according to the original image shot by the first camera device, the original image shot by the second camera device, and the depth of field calculated and stored independently in the corresponding file.
Further, fig. 8 is a flowchart illustrating a photo taking method according to an exemplary embodiment. As shown in fig. 8, the depth of field of the original image captured by the first image capturing device and the second depth of field of the original image captured by the second image capturing device are extended to obtain a photograph conforming to the adjusted depth of field, which includes steps S21-S22.
In step S21, a plurality of photographs are fused to obtain an original image captured by the first image capturing device after the fusion.
In the embodiments of the present disclosure, the subject object is shot at a close focusing distance, a long focusing distance, and a middle focusing distance using the first camera device of the terminal's dual cameras. Multi-frame fusion is performed on the shot close-focus, far-focus, and middle-focus original images: the parts in which the subject object is imaged clearly are selected, fused, and recombined to obtain a picture in which the subject object is imaged clearly throughout. The resulting clearly imaged picture is taken as the original image shot by the first camera device.
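A minimal sketch of this per-pixel fusion step follows; the sharpness measure (an absolute Laplacian) is an assumption for illustration, since the patent does not prescribe a particular fusion criterion:

```python
import numpy as np

def sharpness(img: np.ndarray) -> np.ndarray:
    """Absolute 4-neighbour Laplacian as a simple per-pixel sharpness measure."""
    pad = np.pad(img.astype(np.float64), 1, mode="edge")
    lap = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
           - 4.0 * pad[1:-1, 1:-1])
    return np.abs(lap)

def fuse_focus_stack(frames):
    """For each pixel, keep the value from the sharpest frame of the stack
    (e.g. close-focus, middle-focus, and far-focus shots of the same scene)."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    best = np.argmax(np.stack([sharpness(f) for f in stack]), axis=0)
    return np.take_along_axis(stack, best[None, :, :], axis=0)[0]
```

A production pipeline would also align the frames and smooth the selection map before blending; this sketch only shows the "select the clearly imaged part" idea.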
In step S22, the original image captured by the fused first image capturing device is used as a main image, and a photograph conforming to the adjusted first depth of field is obtained based on the second depth of field.
In the embodiments of the present disclosure, the original image shot by the first camera device after multi-frame fusion is acquired and taken as the final imaging main image. The clearly imaged picture range is determined according to the determined adjusted depth of field and the second depth of field of the original image shot by the first camera device and the original image shot by the second camera device, finally obtaining a photograph meeting the required first depth of field.
According to the above photo taking method, multiple frames of photographs are shot by the first camera device at different focusing distances, and the multi-frame depth-of-field-fused original image shot by the first camera device is used as the main image, further meeting the requirement of a large depth of field and achieving the effect of depth-of-field extension.
The embodiments of the present disclosure describe the process of fusing images shot at different focusing distances by the first camera device to obtain depth-of-field extension, taking a child, an adult, and a plant as example subject objects. The different focusing distances include a close focusing distance, a middle focusing distance, and a long focusing distance.
Fig. 9 is a schematic diagram showing close-distance focusing in a photo taking method according to an exemplary embodiment. As shown in fig. 9, when the subject objects are shot using a close focusing distance, the child, being closest to the camera, is imaged clearly. Other subject objects, such as the adult at a middle distance and the plant at a longer distance, are imaged blurred because a close focusing distance is used.
Fig. 10 is a schematic diagram showing middle-distance focusing in a photo taking method according to an exemplary embodiment. As shown in fig. 10, when the subject objects are shot using a middle focusing distance, the adult at the middle distance is imaged clearly. Other subject objects, such as the nearby child and the distant plant, are imaged blurred because a middle focusing distance is used.
Fig. 11 is a schematic diagram showing long-distance focusing in a photo taking method according to an exemplary embodiment. As shown in fig. 11, when the subject objects are shot using a long focusing distance, the plant at the long distance is imaged clearly. Other subject objects, such as the nearby child and the adult at the middle distance, are imaged blurred because a long focusing distance is used.
Fig. 12 is a schematic diagram showing the result after multi-frame depth-of-field fusion in a photo taking method according to an exemplary embodiment. As shown in fig. 12, from the close-focus, middle-focus, and long-focus pictures obtained in the above embodiments, the clearly imaged subject in each picture is selected and multi-frame depth-of-field fusion is performed, obtaining a fused picture in which all the subject objects are imaged clearly; the fused, clearly imaged picture is taken as the original image shot by the first camera device.
The above process of performing depth-of-field extension on the original image shot by the first camera device and the original image shot by the second camera device to obtain a photograph conforming to the adjusted first depth of field may be implemented with the method flow shown in fig. 13.
Fig. 13 is a flowchart illustrating a photo taking method according to an exemplary embodiment. As shown in fig. 13, obtaining a photograph conforming to the adjusted first depth of field based on the original image shot by the first camera device and the second depth of field of the original image shot by the second camera device includes steps S31-S32.
In step S31, the shooting distance between the subject objects in the original image shot by the first image pickup device is determined based on the second depth of field corresponding to the original image shot by the first image pickup device, and the shooting distance between the subject objects in the original image shot by the second image pickup device is determined based on the second depth of field corresponding to the original image shot by the second image pickup device.
In the embodiments of the present disclosure, the original image shot by the first camera device and the original image shot by the second camera device first need to be stored. They may be stored in corresponding files, or corresponding files may be created while the terminal shoots the subject objects, with the two original images stored in the corresponding locations.
In one embodiment, after the original images shot by the first camera device and the second camera device are stored, the depth of field of the original image shot by the first camera device and the second depth of field of the original image shot by the second camera device are determined based on the stored images. The shooting distance of each individual subject object in the two original images is then determined, so that each subject object is classified as a subject object at a close focusing distance, a subject object at a middle focusing distance, or a subject object at a long focusing distance.
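As a sketch of this classification step, each subject object's shooting distance can be bucketed into the three focusing classes; the numeric boundaries below are illustrative assumptions, since the patent does not specify them:

```python
def classify_focus_distance(distance_m: float,
                            near_max_m: float = 1.5,
                            mid_max_m: float = 4.0) -> str:
    """Bucket a subject object's shooting distance into the close / middle /
    long focusing-distance classes used for multi-frame fusion."""
    if distance_m <= near_max_m:
        return "close"
    if distance_m <= mid_max_m:
        return "middle"
    return "long"

# e.g. a child at 1 m, an adult at 3 m, a plant at 8 m
classes = [classify_focus_distance(d) for d in (1.0, 3.0, 8.0)]
```

In the running example, the child, adult, and plant would fall into the close, middle, and long classes respectively.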
In step S32, based on the shooting distances between the subject objects in the original image shot by the first imaging device and the original image shot by the second imaging device, the shooting distances between the subject objects in the main image conforming to the adjusted first depth of field are determined, and a photograph conforming to the adjusted first depth of field is obtained.
In the embodiments of the present disclosure, based on the close-focus, middle-focus, and long-focus subject objects determined in the above embodiments, the shooting distances between the subject objects in the main image conforming to the adjusted depth of field are determined and adjusted, thereby obtaining a photograph conforming to the adjusted first depth of field.
According to the above photo taking method, in response to the user taking a photograph and adjusting the depth of field in the portrait mode, the first camera device and the second camera device of the terminal are controlled to shoot, the depth of field of the original images shot by the two camera devices is extended, and a photograph conforming to the adjusted first depth of field is finally obtained, achieving the effect of extending the depth of field.
Based on the same conception, the embodiment of the disclosure also provides a photo shooting device.
It can be appreciated that, in order to implement the above functions, the photo taking device provided in the embodiments of the present disclosure includes hardware structures and/or software modules that perform the respective functions. In combination with the example units and algorithm steps disclosed in the embodiments of the present disclosure, the disclosed embodiments may be implemented in hardware or in a combination of hardware and computer software. Whether a function is implemented as hardware or as computer-software-driven hardware depends on the particular application and the design constraints of the technical solution. Those skilled in the art may implement the described functionality using different approaches for each particular application, but such implementations should not be considered beyond the scope of the embodiments of the present disclosure.
Fig. 14 is a block diagram of a photograph taking device 100, according to an example embodiment. Referring to fig. 14, the apparatus includes a photographing module 101, a determining module 102, and a depth of field extending module 103.
The shooting module 101 is configured to, in response to the user taking a picture and adjusting the depth of field in the portrait mode, control the first camera device and the second camera device of the terminal to shoot respectively and determine the adjusted first depth of field, wherein the aperture of the first camera device is larger than that of the second camera device. The determining module 102 is configured to determine a second depth of field, where the second depth of field is the depth of field corresponding to the original image shot by the first camera device and the original image shot by the second camera device. The depth-of-field extension module 103 is configured to extend the second depth of field to obtain a photograph conforming to the adjusted first depth of field.
In the embodiment of the present disclosure, the depth-of-field extension module 103 is configured to, in response to the adjusted first depth-of-field becoming deeper than the second depth-of-field, take the original image captured by the second image capturing device as the main image, and obtain a photograph conforming to the adjusted first depth-of-field based on the second depth-of-field.
In the embodiment of the present disclosure, the depth-of-field extension module 103 is configured to, in response to the adjusted first depth-of-field becoming shallower than the second depth-of-field, take the original image captured by the first image capturing device as the main image, and obtain a photograph conforming to the adjusted first depth-of-field based on the second depth-of-field.
In the embodiments of the present disclosure, the photographing module 101 is configured to control the first camera device of the terminal to shoot a plurality of photographs at a close focusing distance, a middle focusing distance, and a long focusing distance, respectively. The depth-of-field extension module 103 is configured to fuse the plurality of photographs to obtain the fused original image shot by the first camera device, take the fused original image shot by the first camera device as the main image, and obtain a photograph conforming to the adjusted first depth of field based on the second depth of field.
In the embodiment of the present disclosure, the depth-of-field extension module 103 is further configured to store a second depth of field corresponding to the original image captured by the first image capturing device and a second depth of field corresponding to the original image captured by the second image capturing device, respectively. The depth-of-field extension module 103 is configured to determine a shooting distance between each subject object in the original image shot by the first image pickup device based on a second depth of field corresponding to the original image shot by the first image pickup device, and determine a shooting distance between each subject object in the original image shot by the second image pickup device based on a second depth of field corresponding to the original image shot by the second image pickup device. And determining the shooting distance between the main objects in the main graph conforming to the adjusted first depth of field based on the shooting distance between the main objects in the original graph shot by the first image pickup device and the original graph shot by the second image pickup device, and obtaining a photo conforming to the adjusted first depth of field.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method and will not be detailed here.
Fig. 15 is a block diagram illustrating an apparatus 1500 for photograph taking, according to an example embodiment. For example, apparatus 1500 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, or the like.
Referring to fig. 15, apparatus 1500 may include one or more of the following components: a processing component 1502, a memory 1504, a power component 1506, a multimedia component 1508, an audio component 1510, an input/output (I/O) interface 1512, a sensor component 1514, and a communication component 1516.
The processing component 1502 generally controls overall operation of the apparatus 1500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1502 may include one or more processors 1520 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1502 may include one or more modules that facilitate interactions between the processing component 1502 and other components. For example, the processing component 1502 may include a multimedia module to facilitate interaction between the multimedia component 1508 and the processing component 1502.
The memory 1504 is configured to store various types of data to support operations at the apparatus 1500. Examples of such data include instructions for any application or method operating on the apparatus 1500, contact data, phonebook data, messages, pictures, videos, and the like. The memory 1504 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 1506 provides power to the various components of the device 1500. The power components 1506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 1500.
The multimedia component 1508 includes a screen providing an output interface between the device 1500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1508 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 1500 is in an operational mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 1510 is configured to output and/or input audio signals. For example, the audio component 1510 includes a Microphone (MIC) configured to receive external audio signals when the device 1500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 1504 or transmitted via the communication component 1516. In some embodiments, the audio component 1510 further comprises a speaker for outputting audio signals.
The I/O interface 1512 provides an interface between the processing component 1502 and peripheral interface modules, which can be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 1514 includes one or more sensors for providing status assessments of various aspects of the apparatus 1500. For example, the sensor assembly 1514 may detect the on/off state of the device 1500 and the relative positioning of components, such as the display and keypad of the device 1500; it may also detect a change in position of the device 1500 or one of its components, the presence or absence of user contact with the device 1500, the orientation or acceleration/deceleration of the device 1500, and a change in its temperature. The sensor assembly 1514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 1514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1514 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1516 is configured to facilitate communication between the apparatus 1500 and other devices in a wired or wireless manner. The apparatus 1500 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1516 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1516 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as memory 1504, including instructions executable by processor 1520 of apparatus 1500 to perform the above-described methods. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
It is understood that the term "plurality" in the embodiments of the present disclosure means two or more, and other quantifiers are similar thereto. "And/or" describes an association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that A exists alone, that A and B both exist, or that B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. The singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It is further understood that the terms "first," "second," and the like are used to describe various information, but such information should not be limited to these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the expressions "first", "second", etc. may be used entirely interchangeably. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It will be further understood that "connected" includes both direct connection, where no other member is present, and indirect connection, where another member is present, unless specifically stated otherwise.
It will be further understood that although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any adaptations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (4)

1. A photo taking method, applied to a terminal supporting portrait mode photographing, the photo taking method comprising:
in response to a user shooting a picture and adjusting the depth of field in a portrait mode, controlling a first camera device and a second camera device of the terminal to shoot respectively, and determining the adjusted first depth of field, wherein the aperture of the first camera device is larger than that of the second camera device;
respectively determining a second depth of field corresponding to the original image shot by the first camera device and a second depth of field corresponding to the original image shot by the second camera device;
in response to the adjusted first depth of field becoming deeper than the second depth of field corresponding to the original image shot by the first camera device and the second depth of field corresponding to the original image shot by the second camera device, taking the original image shot by the second camera device as a main image; in response to the adjusted first depth of field becoming shallower than the second depth of field corresponding to the original image shot by the first camera device and the second depth of field corresponding to the original image shot by the second camera device, taking the original image shot by the first camera device as a main image; or,
controlling the first camera device of the terminal to shoot a plurality of photographs at a close focusing distance, a middle focusing distance, and a long focusing distance, respectively; fusing the plurality of photographs to obtain a fused original image shot by the first camera device; and taking the fused original image shot by the first camera device as a main image;
performing depth of field extension based on the second depth of field corresponding to the original image shot by the first camera device and the second depth of field corresponding to the original image shot by the second camera device, and obtaining a photograph conforming to the adjusted first depth of field;
the performing depth of field expansion based on the second depth of field corresponding to the original image shot by the first image pickup device and the second depth of field corresponding to the original image shot by the second image pickup device to obtain a photo conforming to the adjusted first depth of field includes:
determining shooting distances among all main body objects in the original pictures shot by the first camera based on second depth of field corresponding to the original pictures shot by the first camera, and determining shooting distances among all main body objects in the original pictures shot by the second camera based on second depth of field corresponding to the original pictures shot by the second camera;
determining the shooting distances between the subject objects in the main image conforming to the adjusted first depth of field, based on the shooting distances between the subject objects in the original image shot by the first camera and in the original image shot by the second camera, to obtain a photo conforming to the adjusted first depth of field.
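The main-image selection rule in the first branch of claim 1 can be sketched as follows. This is a hypothetical Python sketch, assuming a depth of field is represented as a (near, far) distance range in metres and that "deeper" means a wider range; the function and variable names are illustrative, not from the patent.

```python
def dof_span(dof):
    """Width of a (near, far) depth-of-field range, in metres."""
    near, far = dof
    return far - near

def select_main_image(adjusted_dof, first_camera_dof, second_camera_dof):
    """Pick which camera's original image serves as the main image.

    The first camera has the larger aperture, hence the shallower
    depth of field; the second camera's depth of field is deeper.
    """
    adjusted = dof_span(adjusted_dof)
    if adjusted > dof_span(first_camera_dof) and adjusted > dof_span(second_camera_dof):
        # User deepened the depth of field beyond both originals:
        # start from the deeper-field (smaller-aperture) camera.
        return "second"
    if adjusted < dof_span(first_camera_dof) and adjusted < dof_span(second_camera_dof):
        # User made it shallower than both originals:
        # start from the shallower-field (larger-aperture) camera.
        return "first"
    # Adjusted depth lies between the two originals.
    return "either"
```

For example, with a first-camera depth of field of (1.5, 2.5) and a second-camera depth of field of (1.2, 3.5), an adjustment to (1.0, 5.0) selects the second camera's image as the main image.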
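The alternative branch of claim 1 (shooting at short, middle and long focusing distances, then fusing) is a form of focus stacking. Below is a minimal sketch, assuming grayscale frames as NumPy arrays and using a simple 4-neighbour Laplacian magnitude as the per-pixel sharpness measure; the patent itself does not specify the fusion algorithm, so this particular measure and the per-pixel winner-take-all rule are assumptions.

```python
import numpy as np

def fuse_focus_stack(frames):
    """Fuse focus-bracketed frames by taking, at each pixel, the value
    from the frame with the highest local sharpness there."""
    stack = np.stack([f.astype(np.float64) for f in frames])  # (n, H, W)
    # 4-neighbour Laplacian magnitude as a cheap sharpness measure
    # (np.roll wraps at the borders, which is acceptable for a sketch).
    lap = np.abs(
        -4.0 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap, axis=0)          # index of sharpest frame per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]         # gather the winning pixels
```

A production implementation would typically smooth the per-pixel decision map and blend across seams; the winner-take-all gather above only illustrates the fusion step named in the claim.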
2. A photo taking apparatus, applied to a terminal supporting portrait-mode photographing, the apparatus comprising:
a shooting module, configured to, in response to a user taking a photo in portrait mode and adjusting the depth of field, control a first camera and a second camera of the terminal to shoot respectively, and determine an adjusted first depth of field, wherein the aperture of the first camera is larger than the aperture of the second camera;
a determining module, configured to respectively determine a second depth of field corresponding to the original image shot by the first camera and a second depth of field corresponding to the original image shot by the second camera; and
a depth-of-field extension module, configured to: in response to the adjusted first depth of field being deeper than both the second depth of field corresponding to the original image shot by the first camera and the second depth of field corresponding to the original image shot by the second camera, take the original image shot by the second camera as the main image; in response to the adjusted first depth of field being shallower than both the second depth of field corresponding to the original image shot by the first camera and the second depth of field corresponding to the original image shot by the second camera, take the original image shot by the first camera as the main image; and perform depth-of-field extension based on the second depth of field corresponding to the original image shot by the first camera and the second depth of field corresponding to the original image shot by the second camera, to obtain a photo conforming to the adjusted first depth of field;
or, the shooting module is further configured to control the first camera of the terminal to shoot a plurality of photos at a short focusing distance, a middle focusing distance and a long focusing distance respectively; and the depth-of-field extension module is configured to fuse the plurality of photos to obtain a fused original image shot by the first camera, and take the fused original image shot by the first camera as the main image;
wherein the depth-of-field extension module is further configured to: determine shooting distances between subject objects in the original image shot by the first camera based on the second depth of field corresponding to the original image shot by the first camera, and determine shooting distances between subject objects in the original image shot by the second camera based on the second depth of field corresponding to the original image shot by the second camera; and determine the shooting distances between the subject objects in the main image conforming to the adjusted first depth of field based on the shooting distances between the subject objects in the original image shot by the first camera and in the original image shot by the second camera, to obtain a photo conforming to the adjusted first depth of field.
3. A photo taking apparatus, comprising:
a processor;
a memory for storing a computer program executable by the processor;
wherein the processor is configured to perform the photo taking method of claim 1.
4. A non-transitory computer-readable storage medium having stored thereon instructions that, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the photo taking method of claim 1.
CN202010600401.6A 2020-06-28 2020-06-28 Photo taking method, photo taking device and storage medium Active CN113852752B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010600401.6A CN113852752B (en) 2020-06-28 2020-06-28 Photo taking method, photo taking device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010600401.6A CN113852752B (en) 2020-06-28 2020-06-28 Photo taking method, photo taking device and storage medium

Publications (2)

Publication Number Publication Date
CN113852752A CN113852752A (en) 2021-12-28
CN113852752B true CN113852752B (en) 2024-03-12

Family

ID=78972723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010600401.6A Active CN113852752B (en) 2020-06-28 2020-06-28 Photo taking method, photo taking device and storage medium

Country Status (1)

Country Link
CN (1) CN113852752B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022227A (en) * 2017-12-29 2018-05-11 Nubia Technology Co., Ltd. Black-and-white background photo acquisition method, device and computer-readable storage medium
CN108156378A (en) * 2017-12-27 2018-06-12 Nubia Technology Co., Ltd. Photographing method, mobile terminal and computer-readable storage medium
JP2018189666A (en) * 2017-04-28 2018-11-29 Canon Inc. Imaging device control method
CN110324532A (en) * 2019-07-05 2019-10-11 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image blurring method, device, storage medium and electronic device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018189666A (en) * 2017-04-28 2018-11-29 Canon Inc. Imaging device control method
CN108156378A (en) * 2017-12-27 2018-06-12 Nubia Technology Co., Ltd. Photographing method, mobile terminal and computer-readable storage medium
CN108022227A (en) * 2017-12-29 2018-05-11 Nubia Technology Co., Ltd. Black-and-white background photo acquisition method, device and computer-readable storage medium
CN110324532A (en) * 2019-07-05 2019-10-11 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image blurring method, device, storage medium and electronic device

Also Published As

Publication number Publication date
CN113852752A (en) 2021-12-28

Similar Documents

Publication Publication Date Title
CN108419016B (en) Shooting method and device and terminal
KR101952684B1 (en) Mobile terminal and controlling method therof, and recording medium thereof
CN110493526B (en) Image processing method, device, equipment and medium based on multiple camera modules
CN108154465B (en) Image processing method and device
CN106210496B (en) Photo shooting method and device
CN108462833B (en) Photographing method, photographing device and computer-readable storage medium
CN108154466B (en) Image processing method and device
WO2022089284A1 (en) Photographing processing method and apparatus, electronic device, and readable storage medium
CN111586296B (en) Image capturing method, image capturing apparatus, and storage medium
CN111461950B (en) Image processing method and device
CN113852752B (en) Photo taking method, photo taking device and storage medium
CN111343375A (en) Image signal processing method and device, electronic device and storage medium
CN114339019B (en) Focusing method, focusing device and storage medium
CN114339022B (en) Camera shooting parameter determining method and neural network model training method
CN112866555B (en) Shooting method, shooting device, shooting equipment and storage medium
CN114244999A (en) Automatic focusing method and device, camera equipment and storage medium
CN107707819B (en) Image shooting method, device and storage medium
CN114697517A (en) Video processing method and device, terminal equipment and storage medium
CN115250320B (en) Image acquisition method and device, electronic equipment and storage medium
WO2019061051A1 (en) Photographing method and photographing device
CN114339017B (en) Distant view focusing method, device and storage medium
CN114079724B (en) Taking-off snapshot method, device and storage medium
CN106713748B (en) Method and device for sending pictures
CN116489507A (en) Focusing method, focusing device, electronic equipment and storage medium
CN116980754A (en) Control method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant