CN114025100B - Shooting method, shooting device, electronic equipment and readable storage medium - Google Patents

Shooting method, shooting device, electronic equipment and readable storage medium

Info

Publication number
CN114025100B
CN114025100B (Application CN202111449935.4A)
Authority
CN
China
Prior art keywords
image
target
shooting
blurring
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111449935.4A
Other languages
Chinese (zh)
Other versions
CN114025100A (en)
Inventor
胡孔明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111449935.4A priority Critical patent/CN114025100B/en
Publication of CN114025100A publication Critical patent/CN114025100A/en
Application granted granted Critical
Publication of CN114025100B publication Critical patent/CN114025100B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method, a shooting device, electronic equipment and a readable storage medium, belonging to the field of image capture. An embodiment of the method comprises: controlling a camera to shoot N groups of images of N shooting objects, wherein each group of images comprises an object image and a background image; receiving a first input of a user; and in response to the first input, performing image synthesis on a target object image and a target background image to generate a target blurring image; wherein the target object image and the target background image are determined according to the first input, and N is a positive integer. The implementation reduces the cost of blurring photography and enriches image blurring effects.

Description

Shooting method, shooting device, electronic equipment and readable storage medium
Technical Field
The embodiment of the application relates to the field of image capturing, in particular to a shooting method, a shooting device, electronic equipment and a readable storage medium.
Background
Blurring is an image processing technique in which the depth of field is made shallow by digital image processing, so that focus is concentrated on the subject and the image outside the focal plane is gradually blurred. Compared with an ordinary image, a blurred image highlights the photographed subject more effectively.
In the related art, a particular subject can be focused during shooting, the depth of field is obtained through a dual-camera module, and a blurred image in which the subject is sharp and the remainder is blurred is then generated based on that depth of field.
Disclosure of Invention
An object of the embodiments of the application is to provide a shooting method, a shooting device, an electronic device and a readable storage medium, which can solve the technical problems of the high cost of blurring photography and the limited variety of blurring effects.
In order to solve the technical problems, the application is realized as follows:
In a first aspect, an embodiment of the present application provides a shooting method, where the method includes: controlling a camera to shoot N groups of images of N shooting objects, wherein each group of images comprises an object image and a background image; receiving a first input of a user; and in response to the first input, performing image synthesis on a target object image and a target background image to generate a target blurring image; wherein the target object image and the target background image are determined according to the first input, and N is a positive integer.
In a second aspect, an embodiment of the present application provides a photographing apparatus, including: a control unit configured to control a camera to shoot N groups of images of N shooting objects, each group of images comprising an object image and a background image; a first receiving unit configured to receive a first input of a user; and a generating unit configured to perform image synthesis on a target object image and a target background image in response to the first input, generating a target blurring image; wherein the target object image and the target background image are determined according to the first input, and N is a positive integer.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions implementing the steps of the method as described in the first aspect above when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method as described in the first aspect above.
In a fifth aspect, embodiments of the present application provide a chip comprising a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to execute programs or instructions to implement the method as described in the first aspect.
In the embodiment of the application, the camera is controlled to shoot N groups of images of N shooting objects, each group of images comprising an object image and a background image; a first input of a user is then received, and in response to the first input the target object image and the target background image are synthesized to generate a target blurring image. Therefore, on one hand, without adding cameras, a single camera is controlled to refocus successively and capture multiple images, and the blurring image is generated in combination with the user's interactive operation, so that the cost of blurring photography is reduced. On the other hand, the user can flexibly select the target object to be displayed sharply and the background blurring effect; the selected target object is not limited to a single focal plane, a target blurring image meeting the user's desired blurring effect can be obtained, and the blurring effects of the image are enriched.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
fig. 1 is one of flowcharts of a photographing method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a labeling process of a photographic subject of the photographing method provided in the embodiment of the application;
fig. 3 is one of the schematic diagrams of a shooting result display interface of a shooting method provided in an embodiment of the present application;
fig. 4 is a schematic process diagram of receiving a first input in a photographing method according to an embodiment of the present application;
FIG. 5 is a second schematic diagram of a display interface of a shooting result of the shooting method according to the embodiment of the present application;
fig. 6 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present application;
fig. 7 is a schematic diagram of a hardware structure of an electronic device suitable for use in implementing embodiments of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms "first", "second" and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The shooting method, the shooting device, the electronic equipment and the readable storage medium provided by the embodiment of the application are described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
Referring to fig. 1, one of the flowcharts of the photographing method provided in an embodiment of the present application is shown. The shooting method provided by the embodiment of the application can be applied to electronic equipment. In practice, the electronic device may be a smart phone, a tablet computer, a laptop computer, or the like. The electronic device may have a camera application installed, and the camera application may have a photographing function.
The shooting method provided by the embodiment of the application comprises the following steps:
step 101, controlling a camera to shoot N groups of images of N shooting objects, wherein each group of images comprises an object image and a background image.
In the present embodiment, the execution subject of the photographing method (e.g., the electronic device described above) may be equipped with a camera. The execution subject may control the camera to capture N groups of images of N shooting objects, where N is a positive integer. The N shooting objects may be any N objects in the shooting preview interface, such as people, animals, still lifes, or scenery. As an example, if a child, a woman, and a tree are displayed in the shooting preview interface, at least one of the child, the woman, and the tree may be a shooting object. For each shooting object, when the camera is controlled to shoot it, the shooting object can first be focused on before capture.
In the present embodiment, N groups of images can be obtained by photographing the N shooting objects. Each group of images may include an object image and a background image. The object image may be the image area enclosed by the contour of the shooting object, and the background image may be the image of the area outside that contour in the captured image. Alternatively, the object image may be an image containing that image area and the background image an image containing the remaining area; for example, the object image and the background image may each cover the entire image content in the shooting preview interface.
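The object/background split described above can be sketched with a simple mask-based matte. The patent does not specify an implementation; `split_by_mask` below is a hypothetical helper that assumes the subject's contour has already been converted into a boolean mask:

```python
def split_by_mask(image, mask, fill=0):
    """Split a full-frame image into an object image and a background image.

    image: 2D list of pixel values (grayscale for simplicity).
    mask:  2D list of booleans, True inside the subject's contour.
    fill:  placeholder value standing in for transparency.
    """
    h, w = len(image), len(image[0])
    # Object image: keep pixels inside the contour, blank out the rest.
    obj = [[image[y][x] if mask[y][x] else fill for x in range(w)]
           for y in range(h)]
    # Background image: the complementary region.
    bg = [[fill if mask[y][x] else image[y][x] for x in range(w)]
          for y in range(h)]
    return obj, bg
```

In a real pipeline the mask would come from a segmentation step and the images would carry color channels and alpha, but the complementary-region relationship between the two outputs is the same.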
In some optional implementations of this embodiment, before controlling the camera to capture N groups of images of N shooting objects, the execution subject may further receive a second input from the user on N preview areas of the shooting preview interface; in response to the second input, the object in each of the N preview areas may be determined as a shooting object. The second input may include, but is not limited to, sequential click, long-press, circle-selection, or box-selection inputs on each of the N preview areas. Thus, the user can flexibly select shooting objects in the shooting preview interface as required.
In some optional implementations of this embodiment, after determining the object in a preview area as a shooting object, the execution subject may further mark the shooting object in the shooting preview interface. The marking means may include, but is not limited to, at least one of: displaying a mark frame on the subject, displaying a label beside the subject, changing the style of the preview area where the subject is located, and the like. As an example, fig. 2 shows a schematic diagram of the marking process of a photographic subject. The shooting preview interface comprises three shooting objects: a woman in the near field, a child at a middle distance, and a tree and a bench in the distance. The user may first click on the nearby woman, whereupon the number 1 appears next to her outline, identifying her as the first subject. The user can then click on the child at a middle distance, whereupon the number 2 appears next to the child's outline, indicating the second subject. Therefore, when a user selects a shooting object, a corresponding selection result can be displayed, so that the user can conveniently distinguish selected from unselected shooting objects; this prevents the user from repeatedly selecting or mistakenly cancelling the same shooting object, improving the accuracy of user operation.
It should be noted that, in this process, if the user wants to cancel a certain shooting object, the user may click on that subject again and then click on the "change setting" control to cancel the selection. When the user clicks the "complete setting" control, the process of selecting shooting objects ends.
In some optional implementations of this embodiment, after the camera is controlled to capture N groups of images of N shooting objects, the N object images may be displayed in a first area and the N background images in a second area. The first area and the second area may be any two areas in the display interface. As an example, the shooting preview interface includes three shooting objects: a near-field woman, a middle-view child, and a distant tree. After controlling the camera to take three groups of images of these three shooting objects, the shooting result display interface may be as shown in fig. 3. The upper part of the interface is the first area, displaying the object image of the near-field woman (reference numeral 301), the object image of the distant tree (reference numeral 302), and the object image of the middle-view child (reference numeral 303). The lower part is the second area, displaying the background image of the near-field woman (reference numeral 304), the background image of the distant tree (reference numeral 305), and the background image of the middle-view child (reference numeral 306). Displaying the object images and background images in separate areas makes it convenient for the user to distinguish them and to subsequently select the target object image and the target background image from the candidates.
In some optional implementations of this embodiment, when controlling the camera to capture N groups of images of N shooting objects, each shooting object may first be focused on and captured, obtaining a first intermediate image. Then, the image of a first area in the first intermediate image may be matted out to obtain the object image, where the first area may include the area enclosed by the contour of the shooting object. Finally, a background image of each shooting object may be obtained based on the first intermediate image or a second intermediate image. The second intermediate image may be obtained by focusing on and photographing a second area, where the second area is the image area of the first intermediate image other than the first area. The object image and the background image obtained in this way can clearly present different shooting objects and different degrees of blurring, making it convenient for the user to distinguish and select the shooting objects to be displayed sharply and the desired blurring effect. As an example, as shown in fig. 2, the shooting preview interface includes three shooting objects: a near-field woman, a middle-view child, and a distant tree. The near-field woman may first be focused on and photographed to obtain her first intermediate image; the region where she is located in that image may then be taken as the first area, yielding her object image (reference numeral 301 in fig. 3) and background image (reference numeral 304 in fig. 3). Next, the distant tree is focused on and photographed to obtain its first intermediate image, and the area where the tree is located is taken as the first area, yielding the object image of the distant tree (reference numeral 302 in fig. 3) and its background image (reference numeral 305 in fig. 3). Then, the middle-view child is focused on and photographed to obtain a first intermediate image, and the region where the child is located is taken as the first area, yielding the object image of the middle-view child (reference numeral 303 in fig. 3) and its background image (reference numeral 306 in fig. 3).
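The per-subject focus, capture, and matte loop of step 101 can be sketched as follows. `focus_and_shoot` and `matte` are hypothetical stand-ins for the camera-control and matting steps; nothing here is taken from the patent's actual implementation:

```python
def capture_groups(subjects, focus_and_shoot, matte):
    """For each of the N shooting subjects: focus on it, capture a first
    intermediate image, then matte that image into an object image and a
    background image, yielding N groups of images.

    focus_and_shoot(subject) -> full-frame image focused on `subject`
    matte(image, subject)    -> (object_image, background_image)
    """
    groups = []
    for subject in subjects:
        first_intermediate = focus_and_shoot(subject)
        object_img, background_img = matte(first_intermediate, subject)
        groups.append((object_img, background_img))
    return groups
```

Because each subject gets its own focus pass, the N background images naturally carry N different degrees of optical blur, which is what the user later chooses among.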
In some optional implementations of this embodiment, when the background image of each shooting object is acquired, the background image may be obtained by matting out the image of the second area from the first intermediate image. For example, as shown in fig. 2, the shooting preview interface includes three shooting objects: a near-field woman, a middle-view child, and a distant tree. For the first intermediate image of the near-field woman, the first area may be the region where she is located and the second area the image region outside it; the background image is obtained by matting out the image area other than the near-field woman. The same applies to the middle-view child and the distant tree, and the details are not repeated here. In this way, background images with different degrees of blurring can be obtained for different shooting objects, and the user can flexibly select the background image with the desired degree of blurring.
In some optional implementations of this embodiment, when obtaining the background image of each shooting object, the second area may instead be focused on and photographed first to obtain a second intermediate image; the image of the second area in the second intermediate image can then be matted out to obtain the background image. For example, as shown in fig. 2, the shooting preview interface includes three shooting objects: a near-field woman, a middle-view child, and a distant tree. For the first intermediate image of the near-field woman, the first area may be the region where she is located and the second area the image region outside it. The second intermediate image may be obtained by focusing on the areas of the image other than the near-field woman, such as the distant scenery; the background image is then obtained by matting out the image area other than the near-field woman from the second intermediate image. The same applies to the middle-view child and the distant tree, and the details are not repeated here. Thus, a sharp background image can be obtained. On this basis, the degree of blurring of the background image can be further adjusted, so that the user can obtain both sharp background images and background images with different degrees of blurring, further enriching the blurring effects.
Step 102, a first input of a user is received.
In this embodiment, the execution subject may receive a first input from the user. The first input may be used to select a target object image and a target background image from the N groups of images, and may include an input on an object image and an input on a background image among the N groups of images. The input may include, but is not limited to, a click input, a long-press input, a circle-selection input, a box-selection input, and the like. After detecting the user's input on a certain object image, the execution subject may set that object image as the target object image. Similarly, after detecting the user's input on a certain background image, the execution subject may set that background image as the target background image.
And step 103, in response to the first input, performing image synthesis on the target object image and the target background image to generate a target blurring image.
In this embodiment, in response to the first input, the execution subject may perform image synthesis on the target object image and the target background image to generate the target blurring image. In the target blurring image, the target object selected by the user is displayed sharply, while the other areas are displayed with a blurring effect. When the target object image and the target background image are combined, the image region enclosed by the contour of the shooting object may first be cropped from the target object image. The cropped region is then aligned with the area enclosed by the contour of the corresponding shooting object in the background image. Finally, the cropped region is overlaid onto the aligned area, thereby obtaining the target blurring image.
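The overlay step can be sketched minimally as follows, assuming the cropped object region and the background are already pixel-aligned (the alignment itself is omitted). `composite` is an illustrative helper, not the patent's implementation:

```python
def composite(object_img, background_img, mask):
    """Generate the target blurring image: where the mask is True (inside
    the subject's contour) take the sharp object pixel; elsewhere keep the
    (possibly blurred) background pixel.

    All three arguments are 2D lists of the same dimensions.
    """
    h, w = len(mask), len(mask[0])
    return [[object_img[y][x] if mask[y][x] else background_img[y][x]
             for x in range(w)]
            for y in range(h)]
```

A production implementation would feather the mask edge (alpha blending) rather than switch per pixel, but the per-region selection is the core of the synthesis.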
In some optional implementations of this embodiment, the execution subject may receive user input on at least one of the N object images and at least one of the N background images. In response to this input, the at least one object image may be determined as the target object image and the at least one background image as the target background image.
As an example, one shooting scene contains a near-field woman, a middle-view child, and a distant tree. After the camera is controlled to shoot three groups of images of these three shooting objects, the following can be obtained: an object image of the near-field woman, an object image of the distant tree, an object image of the middle-view child, a background image of the near-field woman, a background image of the distant tree, and a background image of the middle-view child. If the user wants an image in which both the near-field woman and the middle-view child are displayed sharply while the rest is blurred, then as shown in fig. 4 the user can click on two pictures: the object image of the near-field woman (reference numeral 401) and the object image of the middle-view child (reference numeral 402). The edges of the selected pictures darken to indicate the selection, as shown in fig. 4.
Next, a background-blurred picture may be selected. Because the focus position differs for each shooting object, the degree of background blurring also differs. According to optical principles, the closer the focused subject is to the lens, the stronger the background blurring. Therefore, if the user wishes the distant view to be strongly blurred, the background image of the near-field woman can be selected. If the user wishes a weaker background blur, or none, the background image of the distant tree can be selected. Here, the user wants the background moderately blurred, so the background image of the middle-view child may be selected (reference numeral 403). On this basis, the execution subject can synthesize the images selected by the user to obtain a target blurring image with the blurring effect the user requires. In the target blurring image, the near-field woman and the middle-view child are displayed sharply, and the other areas are blurred to a medium degree. Thus, the user can freely choose which shooting objects to keep sharp and which areas to blur; the selected shooting objects need not lie in the same focal plane, which enriches the image blurring effects.
As yet another example, for the same shooting scene, if the user wants an image in which only the near-field woman is displayed sharply while the rest is blurred, the object image of the near-field woman may be clicked. Next, a background-blurred picture may be selected. If the user wishes the distant view to be strongly blurred, the background image of the near-field woman can be selected; for a weaker blur, or none, the background image of the distant tree; for a moderate blur, the background image of the middle-view child. In addition, the user can select two or more background images. For example, if the user selects the background image of the near-field woman and the background image of the middle-view child, the degree of blurring may lie between the high and medium degrees, i.e. medium-high. On this basis, the execution subject can synthesize the images selected by the user to obtain a target blurring image with the blurring effect the user requires: the near-field woman is displayed sharply, and the other areas are blurred to a medium-high degree. Thus, the user can freely choose which shooting objects to keep sharp and which areas to blur; the selected shooting objects need not lie in the same focal plane, which enriches the image blurring effects.
In some optional implementations of this embodiment, receiving the first input from the user may further include receiving an input on a target background image among the N background images, where the input is used to set a target degree of blurring for that background image. The input may include, but is not limited to, a click input, a double-click input, a long-press input, a swipe input, a custom gesture input, and the like. Thus, from the first input the execution subject can also determine the target degree of blurring of the target background image. During image synthesis, the target background image may be blurred according to that target degree, and the target object image and the blurred target background image may then be synthesized to generate the target blurring image.
As an example, the user takes the near-field woman as the shooting object and selects the object image of that group as the target object image. The background area is then photographed again, and the sharp background image is selected as the target background image. At this point, as shown in fig. 5, the target object image (reference numeral 501) and the target background image (reference numeral 502) may be displayed in the shooting result display interface. After the user long-presses the target background image, a blurring-degree customization interface may be displayed (reference numeral 503). This interface may display a "background clear, no blurring" control (reference numeral 504) and a "blurring strength" control (reference numeral 505); the latter contains a slider for setting the degree of blurring. If no background blurring is needed, the user can click the "background clear, no blurring" control. If background blurring is required, the user can set the strength by sliding the slider in the "blurring strength" control.
If the user selects the "background clear, no blurring" control, the blurring-degree configuration indicates a degree of zero, and applying a blur of degree zero to the sharp target background image leaves it unchanged. If the user drags the slider of the "blurring strength" control to the center of the bar, the configuration indicates a degree of 50%, and a 50% blur is applied to the sharp target background image to obtain the blurred target background image. Controlling the degree of blurring through user interaction makes it possible both to increase the blurring of the target background image and to keep it sharp, further enriching the blurring effects and increasing the flexibility of blurring photography.
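The slider-to-blur mapping can be illustrated with a simple box blur, used here as a stand-in for whatever filter the implementation actually applies (the patent does not specify one); a strength of 0% leaves the background untouched, matching the "background clear, no blurring" case:

```python
def radius_from_strength(strength_percent, max_radius=10):
    """Map the 0-100% 'blurring strength' slider to an integer blur radius."""
    return round(max_radius * strength_percent / 100)

def box_blur(image, radius):
    """Naive box blur: each output pixel is the mean of its
    (2*radius+1)^2 neighborhood, clamped at the image borders.
    radius == 0 returns a copy of the image unchanged."""
    if radius == 0:
        return [row[:] for row in image]
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for yy in range(max(0, y - radius), min(h, y + radius + 1)):
                for xx in range(max(0, x - radius), min(w, x + radius + 1)):
                    total += image[yy][xx]
                    count += 1
            out[y][x] = total / count
    return out
```

A real camera pipeline would use a Gaussian or lens-shaped (bokeh) kernel and run on the GPU, but the monotone slider-to-radius mapping is the interaction the text describes.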
Alternatively, the user's input on the target background image among the N background images may be a vertical or horizontal swipe, with the degree of blurring determined by the sliding direction and/or distance; the input is not limited to the controls described above.
According to the method provided by the embodiment of the application, the camera is controlled to shoot N groups of images of N shooting objects, each group comprising an object image and a background image; a first input of a user is then received, and in response to the first input the target object image and the target background image are synthesized to generate a target blurring image. Therefore, on one hand, without adding cameras, a single camera is controlled to refocus successively and capture multiple images, and the blurring image is generated in combination with the user's interactive operation, so that the cost of blurring photography is reduced. On the other hand, the user can flexibly select the target object to be displayed sharply and the background blurring effect; the selected target object is not limited to a single focal plane, a target blurring image meeting the user's desired blurring effect can be obtained, and the blurring effects of the image are enriched.
It should be noted that the execution subject of the photographing method provided in the embodiment of the present application may be a photographing device, or a control module in the photographing device for executing the photographing method. In the embodiment of the present application, the photographing method is described taking the photographing device executing the method as an example.
As shown in fig. 6, the photographing apparatus 600 according to the present embodiment includes: a control unit 601, configured to control a camera to capture N groups of images of N shooting objects, each group of images including an object image and a background image; a first receiving unit 602, configured to receive a first input of a user; and a generating unit 603, configured to synthesize the target object image and the target background image in response to the first input, to generate a target blurring image; where the target object image and the target background image are determined according to the first input, and N is a positive integer.
In some optional implementations of this embodiment, the apparatus further includes: a second receiving unit, configured to receive a second input of the user on N preview areas of the shooting preview interface; and a first determining unit, configured to determine the object in each of the N preview areas as a shooting object in response to the second input. Thus, the user can flexibly select shooting objects in the shooting preview interface as needed.
In some optional implementations of this embodiment, the apparatus further includes: a display unit, configured to display the N object images in a first area and the N background images in a second area. By displaying the object images and the background images in separate areas, the user can conveniently distinguish them, which makes it easier to select the target object image and the target background image from the candidates.
In some optional implementations of this embodiment, the first receiving unit 602 is further configured to: receive user input on at least one object image among the N object images and at least one background image among the N background images; and the apparatus further includes: a second determining unit, configured to determine the at least one object image as the target object image and the at least one background image as the target background image. Therefore, the user can freely select the shooting objects to keep clear and the areas to be blurred; the selected shooting objects need not lie in the same focusing plane, which enriches the image blurring effects.
In some optional implementations of this embodiment, the first receiving unit 602 is further configured to: receive an input of the user on a target background image among the N background images; the apparatus further includes: a third determining unit, configured to determine a target blurring degree of the target background image according to the first input; and the generating unit 603 is further configured to: blur the target background image according to the target blurring degree, and perform image synthesis on the target object image and the blurred target background image to generate a target blurring image. By controlling the blurring degree of the target background image through user interaction, the user can either increase the blurring degree of the target background image or increase its definition, which further enriches the blurring effects and improves the flexibility of blurring shooting.
In some optional implementations of this embodiment, the control unit 601 is further configured to: focus on and shoot each shooting object to obtain a first intermediate image; matte out the image of a first area in the first intermediate image to obtain an object image, where the first area includes the area enclosed by the object contour of the shooting object; and obtain a background image of each shooting object based on the first intermediate image or a second intermediate image. The object images and background images obtained in this way can clearly present different shooting objects and blurring effects of different degrees, making it convenient for the user to distinguish and select the shooting objects to be displayed clearly and the desired blurring effect.
In some optional implementations of this embodiment, the control unit 601 is further configured to matte out the image of a second area in the first intermediate image to obtain a background image, where the second area is the image area of the first intermediate image other than the first area. Therefore, for different shooting objects, background images with different blurring degrees can be obtained, and the user can flexibly select a background image with the desired blurring degree. Or, the control unit 601 is further configured to focus on and shoot the second area to obtain a second intermediate image, and matte out the image of the second area in the second intermediate image to obtain a background image, where the second area is the image area of the first intermediate image other than the first area. On this basis, the blurring degree of the background image can be further adjusted, so that the user can obtain a clear background image as well as background images with different blurring degrees, further enriching the blurring effects.
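The two matting steps above can be sketched together: a mask of the first area (inside the object contour) splits a focused intermediate image into an object image and its complementary background image. Using `None` as a stand-in for transparent pixels, and the function name itself, are illustrative assumptions.

```python
def matte_regions(intermediate_image, first_area_mask):
    """Split a focused intermediate image into the two regions described
    above: the first area (inside the object contour) yields the object
    image, and its complement, the second area, yields the background image.

    Pixels outside each region are set to None as a stand-in for
    transparency; real implementations would use an alpha channel.
    """
    object_image = [
        [px if m else None for px, m in zip(row, mrow)]
        for row, mrow in zip(intermediate_image, first_area_mask)
    ]
    background_image = [
        [None if m else px for px, m in zip(row, mrow)]
        for row, mrow in zip(intermediate_image, first_area_mask)
    ]
    return object_image, background_image
```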
The photographing device in the embodiment of the application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a server, network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
The photographing device in the embodiment of the application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The photographing device provided in the embodiment of the present application can implement each process implemented by the photographing device in the method embodiment of fig. 1 and 5, and in order to avoid repetition, a detailed description is omitted here.
The device provided by the embodiment of the application receives a first input of a user; in response to the first input, determines at least one shooting object in the shooting preview interface and focuses on and shoots each shooting object in turn to obtain a preview image of each shooting object; receives a second input of the user; and in response to the second input, acquires a blurred background image and synthesizes the preview image with the blurred background image to obtain a blurring image. Therefore, on one hand, multiple shots with continuous zooming can be taken without adding a camera, and blurring shooting is realized in combination with the interactive operation of the user, which reduces the cost of blurring shooting. On the other hand, the user can flexibly select the target object to be displayed clearly and the background blurring effect; the selected target object is not limited to one focusing plane, a target blurring image meeting the blurring effect required by the user can be obtained, and the blurring effects of the image are enriched.
Optionally, the embodiment of the present application further provides an electronic device, including a processor 910, a memory 909, and a program or instruction stored in the memory 909 and executable on the processor 910. When executed by the processor 910, the program or instruction implements each process of the above shooting method embodiment and can achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: radio frequency unit 701, network module 702, audio output unit 703, input unit 704, sensor 705, display unit 706, user input unit 707, interface unit 708, memory 709, and processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 710 via a power management system so as to perform functions such as managing charging, discharging, and power consumption. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not described in detail herein.
The processor 710 is configured to control the camera to capture N groups of images of N shooting objects, where each group of images includes an object image and a background image; the user input unit 707 is configured to receive a first input of a user; and the processor 710 is further configured to perform image synthesis on the target object image and the target background image in response to the first input, to generate a target blurring image; where the target object image and the target background image are determined according to the first input, and N is a positive integer.
In the embodiment of the present application, the processor 710 controls the camera to capture N groups of images of N shooting objects, where each group of images includes an object image and a background image; the user input unit 707 then receives a first input of the user, and the processor 710 performs image synthesis on the target object image and the target background image in response to the first input, generating a target blurring image. Therefore, on one hand, without adding a camera, a single camera is controlled to zoom continuously and shoot multiple images, and the blurring image is generated in combination with the interactive operation of the user, which reduces the cost of blurring shooting. On the other hand, the user can flexibly select the target object to be displayed clearly and the background blurring effect; the selected target object is not limited to one focusing plane, a target blurring image meeting the blurring effect required by the user can be obtained, and the blurring effects of the image are enriched.
Optionally, the user input unit 707 is further configured to receive a second input from the user on N preview areas on the shooting preview interface; the processor 710 is further configured to determine, in response to the second input, an object in each of the N preview areas as a photographic object. Thus, the user can flexibly select the shooting object in the shooting preview interface according to the requirement.
Optionally, the display unit 706 is configured to display the N object images in a first area and the N background images in a second area. By displaying the object images and the background images in separate areas, the user can conveniently distinguish them, which makes it easier to select the target object image and the target background image from the candidates.
Optionally, the user input unit 707 is further configured to receive user input on at least one object image among the N object images and at least one background image among the N background images; and the processor 710 is further configured to determine the at least one object image as the target object image and the at least one background image as the target background image. Therefore, the user can freely select the shooting objects to keep clear and the areas to be blurred; the selected shooting objects need not lie in the same focusing plane, which enriches the image blurring effects.
Optionally, the user input unit 707 is further configured to receive an input of the user on a target background image among the N background images; and the processor 710 is further configured to: determine a target blurring degree of the target background image according to the first input; blur the target background image according to the target blurring degree; and perform image synthesis on the target object image and the blurred target background image to generate a target blurring image. By controlling the blurring degree of the target background image through user interaction, the user can either increase the blurring degree of the target background image or increase its definition, which further enriches the blurring effects and improves the flexibility of blurring shooting.
Optionally, the processor 710 is further configured to: focus on and shoot each shooting object to obtain a first intermediate image; matte out the image of a first area in the first intermediate image to obtain an object image, where the first area includes the area enclosed by the object contour of the shooting object; and obtain a background image of each shooting object based on the first intermediate image or a second intermediate image. The object images and background images obtained in this way can clearly present different shooting objects and blurring effects of different degrees, making it convenient for the user to distinguish and select the shooting objects to be displayed clearly and the desired blurring effect.
Optionally, the processor 710 is further configured to matte out the image of a second area in the first intermediate image to obtain a background image, where the second area is the image area of the first intermediate image other than the first area. Therefore, for different shooting objects, background images with different blurring degrees can be obtained, and the user can flexibly select a background image with the desired blurring degree. Or, the processor 710 is further configured to focus on and shoot the second area to obtain a second intermediate image, and matte out the image of the second area in the second intermediate image to obtain a background image, where the second area is the image area of the first intermediate image other than the first area. On this basis, the blurring degree of the background image can be further adjusted, so that the user can obtain a clear background image as well as background images with different blurring degrees, further enriching the blurring effects.
It should be appreciated that in embodiments of the present application, the input unit 704 may include a graphics processor (Graphics Processing Unit, GPU) 7041 and a microphone 7042, with the graphics processor 7041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts, a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 709 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 710 may integrate an application processor that primarily processes operating systems, user interfaces, applications, etc., with a modem processor that primarily processes wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 710.
The embodiment of the present application further provides a readable storage medium storing a program or instruction. When executed by a processor, the program or instruction implements each process of the above shooting method embodiment and can achieve the same technical effects; to avoid repetition, details are not repeated here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip. The chip includes a processor and a communication interface coupled with the processor; the processor is configured to run a program or instructions to implement each process of the above shooting method embodiment and can achieve the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, and of course may also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may derive many other forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.

Claims (9)

1. A photographing method, the method comprising:
controlling a single camera to focus and shoot each of N shooting objects respectively to obtain N groups of images, wherein each group of images comprises an object image and a background image, and N is a positive integer;
receiving a first input of a user to at least one object image of the N object images and at least one background image of the N background images;
determining the at least one object image as a target object image and the at least one background image as a target background image in response to the first input;
and performing image synthesis on the target object image and the target background image to generate a target blurring image.
2. The method of claim 1, wherein before the controlling a single camera to focus and shoot each of N shooting objects respectively to obtain N groups of images, the method further comprises:
receiving second input of a user to N preview areas on a shooting preview interface;
in response to the second input, an object in each of the N preview areas is determined as a photographic object.
3. The method of claim 1, wherein after the controlling a single camera to focus and shoot each of N shooting objects respectively to obtain N groups of images, the method further comprises:
N object images are displayed in a first area, and N background images are displayed in a second area.
4. The method of claim 1, wherein before the performing image synthesis on the target object image and the target background image to generate a target blurring image, the method further comprises:
determining a target blurring degree of the target background image according to the first input;
the performing image synthesis on the target object image and the target background image to generate a target blurring image comprises:
according to the target blurring degree, blurring the target background image;
and performing image synthesis on the target object image and the target background image subjected to blurring processing to generate a target blurring image.
5. The method of claim 1, wherein the controlling a single camera to focus and shoot each of N shooting objects respectively to obtain N groups of images comprises:
focusing and shooting each shooting object to obtain a first intermediate image;
matting out the image of a first area in the first intermediate image to obtain an object image, wherein the first area comprises an area enclosed by an object contour of the shooting object;
and obtaining a background image of each shooting object based on the first intermediate image.
6. The method according to claim 5, wherein the obtaining a background image of each shooting object based on the first intermediate image comprises:
matting out the image of a second area in the first intermediate image to obtain a background image, wherein the second area is an image area of the first intermediate image other than the first area;
or focusing and shooting a second area to obtain a second intermediate image, and matting out the image of the second area in the second intermediate image to obtain a background image, wherein the second area is an image area of the first intermediate image other than the first area.
7. A photographing apparatus, the apparatus comprising:
the control unit is used for controlling a single camera to focus and shoot each of N shooting objects respectively to obtain N groups of images, wherein each group of images comprises an object image and a background image, and N is a positive integer;
a first receiving unit configured to receive a first input of a user to at least one of the N object images and at least one of the N background images;
a second determining unit configured to determine the at least one object image as a target object image and the at least one background image as a target background image in response to the first input;
and the generating unit is used for carrying out image synthesis on the target object image and the target background image to generate a target blurring image.
8. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the shooting method of any of claims 1-6.
9. A readable storage medium, wherein a program or instructions is stored on the readable storage medium, which when executed by a processor, implements the steps of the shooting method according to any one of claims 1-6.
CN202111449935.4A 2021-11-30 2021-11-30 Shooting method, shooting device, electronic equipment and readable storage medium Active CN114025100B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111449935.4A CN114025100B (en) 2021-11-30 2021-11-30 Shooting method, shooting device, electronic equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN114025100A CN114025100A (en) 2022-02-08
CN114025100B true CN114025100B (en) 2024-04-05

Family

ID=80067410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111449935.4A Active CN114025100B (en) 2021-11-30 2021-11-30 Shooting method, shooting device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114025100B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114710624A (en) * 2022-04-24 2022-07-05 维沃移动通信有限公司 Photographing method and photographing apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016165488A1 (en) * 2015-09-18 2016-10-20 中兴通讯股份有限公司 Photo processing method and device
CN107613203A (en) * 2017-09-22 2018-01-19 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN110139033A (en) * 2019-05-13 2019-08-16 Oppo广东移动通信有限公司 Camera control method and Related product
CN111246106A (en) * 2020-01-22 2020-06-05 维沃移动通信有限公司 Image processing method, electronic device, and computer-readable storage medium


Also Published As

Publication number Publication date
CN114025100A (en) 2022-02-08

Similar Documents

Publication Publication Date Title
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
CN113766129A (en) Video recording method, video recording device, electronic equipment and medium
CN112714257B (en) Display control method, display control device, electronic device, and medium
WO2022161260A1 (en) Focusing method and apparatus, electronic device, and medium
CN113014801B (en) Video recording method, video recording device, electronic equipment and medium
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114390201A (en) Focusing method and device thereof
CN112532881A (en) Image processing method and device and electronic equipment
CN113329172A (en) Shooting method and device and electronic equipment
CN112669381A (en) Pose determination method and device, electronic equipment and storage medium
CN113194256B (en) Shooting method, shooting device, electronic equipment and storage medium
CN114025100B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN112822394B (en) Display control method, display control device, electronic equipment and readable storage medium
CN112449110B (en) Image processing method and device and electronic equipment
CN112330728A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN112702518B (en) Shooting method and device and electronic equipment
CN111953907B (en) Composition method and device
CN112653841B (en) Shooting method and device and electronic equipment
CN112383708B (en) Shooting method and device, electronic equipment and readable storage medium
CN114390206A (en) Shooting method and device and electronic equipment
CN114125226A (en) Image shooting method and device, electronic equipment and readable storage medium
CN113989387A (en) Camera shooting parameter adjusting method and device and electronic equipment
CN112788239A (en) Shooting method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant