CN112584040B - Image display method and device and electronic equipment - Google Patents

Image display method and device and electronic equipment

Info

Publication number
CN112584040B
Authority
CN
China
Prior art keywords
image
area
perspective
input
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011401079.0A
Other languages
Chinese (zh)
Other versions
CN112584040A (en)
Inventor
林德钊
杨其豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011401079.0A priority Critical patent/CN112584040B/en
Publication of CN112584040A publication Critical patent/CN112584040A/en
Application granted granted Critical
Publication of CN112584040B publication Critical patent/CN112584040B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image display method and device and electronic equipment, belongs to the technical field of terminals, and can solve the problem that an electronic device is inefficient at shooting pictures with a perspective effect. The method includes: receiving a first input of a user on a first area of a first image displayed in a shooting preview interface, where the first image is a preview image of a first object and a second object acquired by a first camera, the first object blocks a partial area of the second object, and the area where the first object is located includes the first area; and in response to the first input, displaying a target image in the first area, where the target image is the image of a second area corresponding in position to the first area in a second image, and the second image is an image of the first object and the second object acquired by a second camera. The embodiments of the application apply to the process of shooting a picture with a perspective effect.

Description

Image display method and device and electronic equipment
Technical Field
The application belongs to the technical field of terminals, and particularly relates to an image display method and device and electronic equipment.
Background
With the popularization of electronic devices, their shooting functions have become increasingly diverse. For example, a user can take a picture with a perspective special effect through the camera function of an electronic device. Specifically, in a given shooting scene, the user takes multiple shots with the camera of the electronic device to obtain multiple pictures, and then uses image editing software afterwards to combine the pictures in which the scene is blocked with the pictures in which it is not, obtaining a picture with the perspective special effect.
However, in the above process, the user has to take pictures many times and then synthesize them with image editing software before a picture with the perspective special effect is obtained, so the operation is complicated and time-consuming, and the efficiency with which the electronic device shoots pictures with a perspective effect is low.
Disclosure of Invention
The embodiments of the application aim to provide an image display method and device and an electronic device, so as to improve the efficiency with which an electronic device shoots a picture with a perspective effect.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image display method, including: receiving a first input of a user on a first area of a first image displayed in a shooting preview interface, where the first image is a preview image of a first object and a second object acquired by a first camera, the first object blocks a partial area of the second object, and the area where the first object is located includes the first area; and in response to the first input, displaying a target image in the first area, where the target image is the image of a second area corresponding in position to the first area in a second image, and the second image is an image of the first object and the second object acquired by a second camera.
In a second aspect, an embodiment of the present application provides an image display apparatus including a receiving module and a display module. The receiving module is configured to receive a first input of a user on a first area of a first image displayed in a shooting preview interface, where the first image is a preview image of a first object and a second object acquired by a first camera, the first object blocks a partial area of the second object, and the area where the first object is located includes the first area. The display module is configured to display, in response to the first input received by the receiving module, a target image in the first area, where the target image is the image of a second area corresponding in position to the first area in a second image, and the second image is an image of the first object and the second object acquired by a second camera.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, when a first object blocks a partial area of a second object, the electronic device can shoot the first object and the second object through both a first camera and a second camera. Because the two cameras are located at different positions on the body of the electronic device, their shooting angles differ, so the electronic device can acquire images of the blocking object (that is, the first object) and the blocked main object (that is, the second object) from different angles and thereby capture the whole of the blocked main object across the two cameras. The user can then perform a first input on a first area of the first image acquired by one camera (for example, the first camera) in the shooting preview interface, so that the electronic device displays the image of the blocked part of the main object in the image area corresponding to the blocking object. The image of the main object is thus displayed completely, and a picture with a perspective effect can be obtained in a single shot, which simplifies the user's operation and improves the efficiency with which the electronic device shoots pictures with a perspective effect.
Drawings
Fig. 1 is a first schematic diagram of an image display method according to an embodiment of the present application;
fig. 2 is a first schematic diagram of an example of a mobile phone interface according to an embodiment of the present application;
fig. 3 is a second schematic diagram of an example of a mobile phone interface according to an embodiment of the present application;
fig. 4 is a third schematic diagram of an example of a mobile phone interface according to an embodiment of the present application;
fig. 5 is a second schematic diagram of an image display method according to an embodiment of the present application;
fig. 6 is a third schematic diagram of an image display method according to an embodiment of the present application;
fig. 7A is a fourth schematic diagram of an example of a mobile phone interface according to an embodiment of the present application;
fig. 7B is a fifth schematic diagram of an example of a mobile phone interface according to an embodiment of the present application;
fig. 8 is a sixth schematic diagram of an example of a mobile phone interface according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an image display device according to an embodiment of the present application;
fig. 10 is a first schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application;
fig. 11 is a second schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequential or chronological order. It will be appreciated that terms used in this way may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described here, and that "first", "second", and the like are generally used in a generic sense and do not limit the number of objects; for example, a first object may be one object or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The image display method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The embodiments of the application provide a shooting scheme that can realize a perspective effect. Because the multiple cameras of an electronic device are located at different positions on the device body, their shooting angles differ, so images of a blocking object and of the blocked main object are collected from different angles through the multiple cameras, and the perspective effect is realized quickly and conveniently.
For example, suppose area A of the main object captured by the main camera is blocked by object 1; that is, the main camera does not capture the image of area A of the main object, and the image it captures contains only a partial image of the main object (the image of the main object excluding area A) together with the image of object 1. To make the image of area A displayable, the embodiments of the application also acquire the image of the main object and of object 1 through an auxiliary camera, which is located at a different position from the main camera and therefore has a different shooting angle, so the image of area A of the main object can be acquired through the auxiliary camera. The electronic device can then display the image of area A acquired by the auxiliary camera in the area corresponding to area A within the image of object 1 acquired by the main camera, supplementing the blocked part of the main object so that the image of the main object is displayed completely, as if seen through the blocking object.
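As an illustrative sketch of this substitution step only (not part of the patent text), and assuming both camera frames are already available as aligned bitmaps of the same size with the blocked area A located as a rectangle, the area could be cropped from the auxiliary frame and drawn over the main frame roughly as follows:

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Rect

// Hypothetical helper, not part of the patent text: copy the pixels of area A
// from the auxiliary camera frame into the same rectangle of the main camera frame.
fun substituteOccludedRegion(mainFrame: Bitmap, auxFrame: Bitmap, areaA: Rect): Bitmap {
    // Work on a mutable copy so the original preview frame stays untouched.
    val result = mainFrame.copy(Bitmap.Config.ARGB_8888, true)
    val canvas = Canvas(result)
    // Draw the auxiliary pixels of area A onto the same rectangle of the main frame.
    canvas.drawBitmap(auxFrame, areaA, areaA, null)
    return result
}
```

In practice the two camera views would first have to be registered to each other, since the cameras sit at different positions; the sketches later in this description treat that mapping and the user-driven adjustment separately.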
An embodiment of the present application provides an image display method, and fig. 1 shows a flowchart of the image display method provided in the embodiment of the present application, where the method may be applied to an electronic device. As shown in fig. 1, the image display method provided in the embodiment of the present application may include steps 201 and 202 described below.
Step 201, the electronic device receives a first input of a user to a first area of a first image displayed in a shooting preview interface.
In this embodiment of the application, the first image is a preview image of a first object and a second object acquired by a first camera, the first object blocks a partial area of the second object, and an area where the first object is located includes the first area.
In this embodiment of the application, the shooting preview interface includes a first image acquired by a first camera, where the first image includes an image of a first object and a partial image of a second object.
In the embodiments of the application, after the user opens the camera application, the electronic device displays a default shooting preview interface; the user may then select the perspective special effect mode, so that the electronic device displays the shooting preview interface of the perspective special effect mode. The user can aim the cameras of the electronic device at the objects to be shot (for example, the first object and the second object), where the first object is in front of the second object (with a spatial distance between them) and blocks a partial area of the second object, so that the electronic device can collect images of the objects to be shot through multiple cameras (for example, the first camera and the second camera). Because the first camera and the second camera are located at different positions on the body of the electronic device, their shooting angles differ: the part of the second object that the first camera cannot capture because it is blocked can be captured by the second camera, and the part that the second camera cannot capture can be captured by the first camera. In other words, the blocked parts of the second object differ between the first camera and the second camera, so the image of the second object can be captured completely through the two cameras.
The first area is the whole or a part of the area where the first object is located. The image of the blocked part of the second object is absent from the first image (it is not captured by the first camera) but present in the second image (it can be captured by the second camera). The area where the first object is located may be understood as the image area occupied by the first object in the first image.
Optionally, in this embodiment of the application, after the electronic device collects images through the first camera and the second camera, the images collected by the first camera may be displayed in a shooting preview interface, so that a user can operate the images collected by the first camera to realize a perspective effect.
Optionally, in this embodiment of the application, the shooting preview interface further includes a second image, where the second image is an image of the first object and an image of the second object acquired by the second camera, and the second image includes an image of the first object and a partial image of the second object.
The first image and the second image are displayed in the shooting preview interface in a split screen mode, or the first image is overlaid on the second image in the shooting preview interface; the first image and the second image are images of the same shooting object taken from different shooting angles, and the display position of the image of the first object in the first image differs from its display position in the second image.
It can be understood that the electronic device can also display the image acquired by the first camera and the image acquired by the second camera in the shooting preview interface in a split screen mode; or the electronic device may also superimpose the first image and the second image on the shooting preview interface in a multi-layer manner, where the layer of the first image is located on the top layer, and the layer of the second image is located below the layer of the first image.
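A minimal layering sketch under that assumption (hypothetical view setup; plain ImageViews stand in for the live camera previews):

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.widget.FrameLayout
import android.widget.ImageView

// Hypothetical view setup: FrameLayout draws children in the order they were added,
// so the main picture, added last, ends up on the top layer above the sub-picture.
fun buildLayeredPreview(context: Context, firstImage: Bitmap, secondImage: Bitmap): FrameLayout {
    val container = FrameLayout(context)
    val subPreview = ImageView(context).apply { setImageBitmap(secondImage) }   // bottom layer
    val mainPreview = ImageView(context).apply { setImageBitmap(firstImage) }   // top layer
    container.addView(subPreview)
    container.addView(mainPreview)
    return container
}
```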
Optionally, in this embodiment of the application, after the electronic device displays the image captured by the first camera and the image captured by the second camera in a split screen manner, a user may perform input through one of the two screen areas to select the image captured by the first camera or the image captured by the second camera as a main picture.
It should be noted that taking the image acquired by one camera as the main picture can be understood as follows: the image collected by that camera is the one on which the perspective effect is subsequently applied, while the image collected by the other camera is used to assist it, that is, to supplement the image of the blocked part that the one camera did not collect.
The electronic device is illustrated here as a mobile phone. As shown in fig. 2, before shooting, the user may place an object for realizing the perspective effect (for example, a cylindrical object) in front of the shooting subject (for example, a girl). In the perspective special effect mode, the mobile phone can open two cameras at the same time to frame the scene and display two preview screens in a split screen manner (for example, split top and bottom): one preview screen contains the image acquired by the first camera (for example, an image of the cylindrical object and the girl), and the other contains the image acquired by the second camera (likewise an image of the cylindrical object and the girl). The images in the two preview screens show the same shooting objects (the cylindrical object and the girl) from different shooting angles, so the part that is blocked in the preview image of the first camera can be seen in the preview image collected by the second camera. The user can then select the image of either preview screen as the main picture 11 for shooting, and the image of the other preview screen serves as the sub-picture 12 for supplementing the image of the blocked part in the main picture 11.
Optionally, in the embodiments of the application, the first input may be an input of the user on the perspective area frame in the first image. For example, the first input is specifically a dragging input of the user on the image of the first object that drags out a marquee (the perspective area frame) on that image, so that the area inside the marquee is taken as the first area, namely the perspective area (the area in which the perspective effect is realized), which will subsequently display the image corresponding to the blocked part of the second object.
Optionally, in the embodiments of the application, the marquee can be moved or resized to redefine the perspective area.
Optionally, in the embodiments of the application, in the split screen mode described above, after the user drags out a marquee in the main picture through the drag input, the electronic device also displays a marquee in the sub-picture (the perspective picture frame described in the following embodiments). The marquee in the sub-picture has the same size as the marquee in the main picture, and its position corresponds to the position of the marquee in the main picture. The user can also move or resize the marquee in the sub-picture so that it matches the blocked part in the main picture, and the electronic device then takes the image inside the marquee in the sub-picture as the image to be displayed in the first area (the target image described in the following embodiments).
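A sketch of how such a drag could produce the marquee and its mirrored counterpart (hypothetical helper names; it assumes the two preview views share the same pixel size, so the mirrored marquee can simply reuse the same coordinates):

```kotlin
import android.graphics.Rect
import kotlin.math.max
import kotlin.math.min

// Hypothetical gesture handling: build the marquee from the press point and the
// current drag point, then mirror it into the sub-picture at the same position.
class MarqueeTracker {
    var mainMarquee: Rect? = null   // perspective area frame in the main picture
    var subMarquee: Rect? = null    // perspective picture frame in the sub-picture

    fun onDrag(downX: Int, downY: Int, moveX: Int, moveY: Int) {
        val rect = Rect(
            min(downX, moveX), min(downY, moveY),
            max(downX, moveX), max(downY, moveY)
        )
        mainMarquee = rect
        // Same size, corresponding position; the user may still reposition or resize it.
        subMarquee = Rect(rect)
    }
}
```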
In the embodiment of the application, a user can customize the position of the perspective area in the main picture to flexibly select the perspective area according to the use requirement of the user, and the user can flexibly adjust the image to be displayed in the perspective area in the auxiliary picture by dragging the perspective picture frame in the auxiliary picture, so that the perspective effect of the image is realized, and the flexibility of image display is improved.
Illustratively, in conjunction with fig. 2, as shown in fig. 3, the user may make a first input in the split-screen main picture 11 to drag out a marquee 13 as the perspective area on the image of the first object in the main picture 11, which causes the mobile phone to display a marquee 14 in the sub-picture 12 at the position corresponding to the marquee in the main picture 11. The marquee 13 in the main picture 11 is used to display the image inside the marquee 14 in the sub-picture 12, and the marquee 14 in the sub-picture is used to extract, from the sub-picture 12, the image to be displayed in the marquee 13 in the main picture 11.
The image display method provided by the embodiments of the application can be applied to the case where images of the blocking object and the blocked main object are collected from different angles through two cameras and the two collected images are displayed in a split screen mode. The user can select the perspective area on the image collected by one camera, and the electronic device synchronously marks, on the image collected by the other camera, the image to be displayed in the perspective area, so that the user can check that image and decide whether to adjust it on the image collected by the other camera. The perspective effect of the image is thus achieved while the flexibility of image display is improved.
Optionally, in the embodiments of the application, in the multi-layer mode the first image on the top layer is the main picture and the second image on the bottom layer is the sub-picture. After the user drags out a marquee on the first image through a drag input, because the first image is superimposed on the second image, the electronic device can determine, according to the position of the marquee in the first image, the image in the area of the second image corresponding to that position (i.e., the second area) as the image to be displayed in the first area.
It should be noted that the first image is on the top layer and the second image is on the bottom layer; the user can move the bottom-layer sub-picture (i.e. the second image), so that the perspective area of the main picture can display the image of any area of the sub-picture.
For example, as shown in fig. 4, before shooting, the user may place an object for realizing the perspective effect (e.g., a guideboard) in front of the shooting subject (e.g., a girl). In the perspective special effect mode, the mobile phone can open two cameras at the same time to frame the scene and display the images captured by the two cameras in a multi-layer manner: the first image on the top layer (the main picture) is the image captured by the first camera (e.g., an image of the guideboard and the girl), and the image on the bottom layer is the image captured by the second camera (likewise an image of the guideboard and the girl); the two captured images show the same shooting objects (the guideboard and the girl) from different shooting angles. The user may make a first input on the top-layer first image to drag out a marquee 15 on the image of the first object in the main picture as the perspective area (e.g., the whole image area of the framed guideboard), so that the image of an arbitrary area of the sub-picture can be displayed in the perspective area.
The shape of the marquee may be rectangular, circular, elliptical, irregular, or the like. The specific shape can be determined according to actual use requirements and is not limited in the embodiments of the present application.
The image display method provided by the embodiments of the application can be applied to the case where images of the blocking object and the blocked main object are collected from different angles through two cameras and the two collected images are displayed in a multi-layer mode. The user can select the perspective area on the top-layer image (the main picture) and drag on the screen to move the bottom-layer sub-picture, so that the perspective area of the main picture can display the image of any area of the sub-picture, improving the flexibility of image display while realizing the perspective effect.
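In this layered mode, once the bottom-layer sub-picture has been translated by the user, the second area is simply the perspective frame rectangle shifted back by that translation. A sketch under that assumption (hypothetical helper; aligned, equally sized layers):

```kotlin
import android.graphics.Rect

// Hypothetical mapping: the sub-picture has been dragged by (offsetX, offsetY) on screen,
// so the pixels currently lying under the perspective area frame correspond to the frame
// rectangle shifted by the opposite amount in the sub-picture's own coordinates.
fun frameToSubPictureRegion(perspectiveFrame: Rect, offsetX: Int, offsetY: Int): Rect =
    Rect(perspectiveFrame).apply { offset(-offsetX, -offsetY) }
```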
Step 202, the electronic device responds to the first input and displays the target image in the first area.
In the embodiments of the application, the target image is the image corresponding to the first area in the image of the second object acquired by the second camera.
It can be understood that the target image may be the image inside the marquee of the sub-picture displayed in the split screen mode; or the image, in the bottom-layer sub-picture of the multi-layer mode, of the area corresponding to the position of the marquee in the top-layer main picture; or the image obtained after the user adjusts the marquee in the split-screen sub-picture through an input; or the image of the area corresponding to the position of the marquee in the main picture after the user moves the bottom-layer sub-picture through an input.
Optionally, in this embodiment of the application, the first input is an input by the user on the perspective area frame in the first image. Before the step 202 of displaying the target image in the first area, the image display method provided in the embodiment of the present application further includes the following step 301, and step 202 may be specifically realized by the following step 202a.
Step 301, in response to the first input, the electronic device displays a perspective area frame at a position corresponding to the first area to mark the first area.
It should be noted that, for the related description of the perspective area frame, reference may be made to the description of the selection frame in step 201 in the foregoing embodiment, and details are not described here again.
Step 202a, the electronic device displays the target image on the first image in a perspective area frame in an overlapping manner.
In the embodiment of the application, when the electronic device displays the target image in the first area, the target image is displayed in the perspective area frame in a superposition manner so as to cover the initial image displayed in the first area (i.e. the image in the first area in the first image).
Optionally, in an implementation manner of the embodiment of the present application, the first image and the second image are displayed in a shooting preview interface in a split screen manner. Referring to fig. 1, as shown in fig. 5, before "displaying the target image in the first area" in step 202, the image display method provided in the embodiment of the present application further includes steps 401 and 402 described below.
Step 401, in response to the first input, the electronic device displays a perspective area frame at a position corresponding to the first area, and synchronously displays a perspective picture frame at a position corresponding to the second area.
In the embodiment of the application, while displaying the perspective region frame to mark the first region, the electronic device may display the perspective screen frame in a position corresponding to the first region in the second image, so as to prompt the user of an image (i.e., a target image) to be displayed in the first region.
Step 402, the electronic device determines an image in the perspective picture frame as a target image.
When step 401 and step 402 are executed, step 202 is specifically: the electronic device displays the target image in the first area.
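A sketch of step 402 under the assumption that the second image is available as a bitmap and the perspective picture frame rectangle already lies within its bounds (hypothetical helper name):

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect

// Hypothetical crop: the image inside the perspective picture frame of the sub-picture
// becomes the target image to be shown in the first area of the main picture.
fun cropTargetImage(secondImage: Bitmap, pictureFrame: Rect): Bitmap =
    Bitmap.createBitmap(
        secondImage,
        pictureFrame.left,
        pictureFrame.top,
        pictureFrame.width(),
        pictureFrame.height()
    )
```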
Optionally, in this embodiment of the present application, with reference to fig. 5, after step 202 described above, the image display method provided in this embodiment of the present application further includes step 501 and step 502 described below.
Step 501, the electronic device receives a second input of the user to the perspective picture frame.
In the embodiment of the application, the images in the perspective picture frames at different positions are different. The second input is a dragging input/moving input of the user to the perspective picture frame on the second image so as to trigger the electronic equipment to re-determine the image to be displayed in the first area.
Step 502, the electronic device, in response to the second input, moves the perspective screen frame to the target position in the second image, and replaces the target image with the image in the perspective screen frame after the movement to the target position.
In this embodiment, the electronic device may determine the end position of the second input as the target position, and move the perspective screen frame to the target position, and then the electronic device may delete the target image displayed in the first area, so as to display the image in the perspective screen frame when the perspective screen frame is located in the target position in the first area.
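A sketch of this second-input handling (hypothetical names; it reuses the cropTargetImage helper sketched above and omits bounds checks):

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect

// Hypothetical second-input handling: move the perspective picture frame by the drag
// delta, then replace the target image with the crop taken at the new position.
// Bounds checks against the second image are omitted for brevity.
fun onPictureFrameDragged(
    secondImage: Bitmap,
    pictureFrame: Rect,
    dx: Int,
    dy: Int,
    showInFirstArea: (Bitmap) -> Unit
) {
    pictureFrame.offset(dx, dy)
    showInFirstArea(cropTargetImage(secondImage, pictureFrame))
}
```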
In the embodiments of the application, cameras at different positions collect images of the blocking object and the blocked main object from different angles, and the images collected by the different cameras are displayed in a split screen mode. The user can therefore adjust the perspective picture frame in the sub-picture, and the electronic device displays in the perspective area the image inside the adjusted perspective picture frame, realizing the perspective effect in the blocked area and improving the user's shooting experience.
Optionally, in another implementation manner of the embodiment of the present application, in the shooting preview interface, the first image is overlaid on the second image. Referring to fig. 1, as shown in fig. 6, before "displaying the target image in the first area" in step 202, the image display method provided in the embodiment of the present application further includes step 601 and step 602 described below, and step 202 may be specifically realized by step 202b described below.
Step 601, the electronic device responds to the first input, and displays a perspective area frame at a position corresponding to the first area.
It should be noted that, for the method for displaying the perspective area frame at the position corresponding to the first area, reference is made to the relevant description in the foregoing embodiment, and details are not repeated here.
Step 602, the electronic device receives a third input of the user on the second image in the shooting preview interface.
Optionally, in this embodiment of the application, the third input may be a drag input by a user to the sub-screen (i.e., the second image) located at the bottom layer, so as to move the second image, so that the perspective area of the main screen may display an image of any area in the sub-screen; or, the third input may be an input of a user to a movement control in the shooting preview interface, and then the user performs a sliding input in the shooting preview interface to move the second image, so that the perspective area of the main screen can display an image in any area in the sub-screen.
Step 202b, the electronic device responds to the third input, moves the second image, and updates the image in the perspective area frame in the second image to the first area in real time for displaying.
In the embodiments of the application, while the user moves the second image, the image inside the perspective area frame changes in real time; the electronic device can acquire, in real time, the image of the second image that lies inside the perspective area frame, and display it (namely the target image) in the first area in real time.
For example, in conjunction with fig. 4, as shown in fig. 7A, the user may perform a drag input on the bottom-layer sub-picture (i.e., the second image) to move the second image. While the second image moves, the mobile phone updates, in real time, the image of the second image that lies inside the perspective area frame (indicated by the rectangular frame 16 in fig. 7A) into the perspective area 15 for display, so the user can watch that image change in real time in the perspective area 15. After the user stops the drag input, the mobile phone displays the image corresponding to the rectangular frame 16 in the perspective area 15 of the main picture, as shown in fig. 7B.
In the embodiments of the application, the electronic device superimposes the first image on the second image in a multi-layer mode, so that the main picture can be displayed at a larger size, even full screen, improving its display effect. At the same time, the user can move the second image so that the perspective area of the main picture can display the image of any area of the sub-picture, realizing the perspective effect in the blocked area.
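A sketch of this real-time update (hypothetical names; it combines the frameToSubPictureRegion and cropTargetImage helpers sketched above, and omits bounds checks):

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect
import android.widget.ImageView

// Hypothetical third-input handling: as the bottom-layer sub-picture is dragged,
// recompute which part of it lies under the perspective area frame and show that
// crop in the first area.
fun onSubPictureDragged(
    subPreview: ImageView,      // bottom-layer view showing the second image
    secondImage: Bitmap,
    perspectiveFrame: Rect,     // in main-picture coordinates
    dx: Float,
    dy: Float,
    showInFirstArea: (Bitmap) -> Unit
) {
    subPreview.translationX += dx
    subPreview.translationY += dy
    val region = frameToSubPictureRegion(
        perspectiveFrame,
        subPreview.translationX.toInt(),
        subPreview.translationY.toInt()
    )
    showInFirstArea(cropTargetImage(secondImage, region))
}
```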
The embodiments of the application provide an image display method. When a first object blocks a partial area of a second object, the electronic device can shoot the first object and the second object through both a first camera and a second camera. Because the two cameras are located at different positions on the body of the electronic device, their shooting angles differ, so the electronic device can acquire images of the blocking object (that is, the first object) and the blocked main object (that is, the second object) from different angles and thereby capture the whole of the blocked main object across the two cameras. The user can perform a first input on a first area of the first image acquired by one camera (for example, the first camera) in the shooting preview interface, so that the electronic device displays the image of the blocked part of the main object in the image area corresponding to the blocking object. The image of the main object is thus displayed completely, and a picture with a perspective effect can be obtained in a single shot, which simplifies the user's operation and improves the efficiency with which the electronic device shoots pictures with a perspective effect.
Optionally, in this embodiment of the present application, after step 201 described above, the image display method provided in this embodiment of the present application further includes step 701 described below.
Step 701, the electronic device updates the transparency of the image in the first area in the first image.
Optionally, in the embodiments of the application, the electronic device may gradually increase the transparency of the part of the first image located in the first area, so that the target image displayed in the first area reveals the region of the second object that is missing from the first image, supplementing the complete image of the second object, that is, displaying the target image with a perspective effect.
Optionally, in this embodiment of the application, the electronic device may adjust the transparency of the first region to a preset transparency, for example, 100%.
In addition, the execution sequence of the "displaying the target image in the first area" in the above step 701 and the above step 202 is not limited in the embodiment of the present application. In one implementation, the step 701 may be executed first, and then the step 202 of "displaying the target image in the first area" is executed, that is, the step 701 at this time specifically includes: the electronic device, in response to the first input, updates the transparency of the image located in the first area in the first image, where step 202 specifically is: the electronic device displays the target image in the first area. In another implementation manner, the step 202 of "displaying the target image in the first area" may be performed first, and then the step 701 may be performed. In still another implementation, the step of displaying the target image in the first area in the step 202 and the step 701 may be performed simultaneously.
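A sketch of how the first area could be composited once both the target image and a transparency value are available (hypothetical helper; the alpha applied to the original first-image pixels falls as the transparency rises):

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Paint
import android.graphics.Rect

// Hypothetical compositing of the first area: the target image is drawn first, and the
// original first-image pixels are drawn over it with reduced opacity, so raising the
// transparency gradually reveals the blocked part of the second object.
fun renderFirstArea(
    canvas: Canvas,
    firstImage: Bitmap,
    targetImage: Bitmap,
    firstArea: Rect,
    transparencyPercent: Int    // 0..100, where 100 means fully transparent
) {
    canvas.drawBitmap(targetImage, null, firstArea, null)
    val paint = Paint().apply { alpha = 255 * (100 - transparencyPercent) / 100 }
    canvas.drawBitmap(firstImage, firstArea, firstArea, paint)
}
```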
Optionally, in this embodiment of the application, the user may manually adjust the transparency of the image in the first area in the first image. Before the step 701, the image display method provided in the embodiment of the present application further includes the following steps 801 to 803, and the step 701 may be specifically implemented by the following step 701a.
Step 801, the electronic device receives a fourth input from the user.
Optionally, in this embodiment of the application, the fourth input may be an input (for example, a long-press input) to the first area by a user, so as to trigger the electronic device to display a first control, where the first control is used to adjust transparency of an image in the first area.
Optionally, in this embodiment of the application, the first control may be a scroll bar, and the user may perform a sliding input on the scroll bar to adjust the transparency of the image in the first area.
Illustratively, in conjunction with fig. 3, as shown in fig. 8, the user may make a fourth input to the marquee 13 in the main screen 11 to cause the mobile phone to display a first control (e.g., a scroll bar 17), so that the user makes a fifth input to the scroll bar 17 to cause the mobile phone to adjust the transparency of the image in the area (i.e., the first area) corresponding to the marquee 13.
In the embodiment of the application, a user inputs the first control to manually adjust the transparency of the image in the first area, so that the target image in the first area is displayed in the image missing area of the second object in the first image, and the image of the main object is completely displayed; in addition, the transparency of the image in the first area can be manually adjusted to any value by the user, so that the diversification effect of the image can be realized.
And step 802, the electronic equipment responds to the fourth input and displays the first control.
Step 803, the electronic device receives a fifth input of the first control from the user.
Optionally, in this embodiment of the application, the fifth input may be a sliding input of the first control by the user (for example, a sliding input to increase the transparency of the image in the first area, or a sliding input to decrease the transparency of the image in the first area).
In step 701a, the electronic device, in response to a fifth input, updates the transparency of the image in the first area in the first image to the transparency corresponding to the fifth input.
In the embodiments of the application, the transparency of the image in the first area can be adjusted within the range of 0 to 100%. Through input on the first control, the user can have the electronic device update the transparency of the part of the first image located in the first area to any value from 0 to 100%, so that the image in the first area becomes transparent and the perspective effect of the image is achieved.
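A sketch of wiring such a control to that 0 to 100% range, assuming a standard SeekBar is used as the scroll bar (hypothetical helper; applyTransparency would redraw the first area, for example with the compositing sketch above):

```kotlin
import android.widget.SeekBar

// Hypothetical fifth-input handling: map the scroll bar position directly to the
// transparency of the image in the first area and let the caller redraw the preview.
fun bindTransparencyControl(scrollBar: SeekBar, applyTransparency: (Int) -> Unit) {
    scrollBar.max = 100
    scrollBar.setOnSeekBarChangeListener(object : SeekBar.OnSeekBarChangeListener {
        override fun onProgressChanged(seekBar: SeekBar?, progress: Int, fromUser: Boolean) {
            if (fromUser) applyTransparency(progress)   // transparency in percent, 0 to 100
        }
        override fun onStartTrackingTouch(seekBar: SeekBar?) {}
        override fun onStopTrackingTouch(seekBar: SeekBar?) {}
    })
}
```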
Optionally, in this embodiment of the application, after the electronic device displays the target image in the first area and implements the perspective effect, the user may input to the electronic device (for example, to a shooting control in a shooting preview interface) to trigger the electronic device to capture a picture with the perspective effect.
It should be noted that, in the image display method provided in the embodiment of the present application, the execution subject may be an image display apparatus, or a control module in the image display apparatus for executing the image display method. The embodiment of the present application describes an image display device provided in the embodiment of the present application, by taking an example in which the image display device executes an image display method.
Fig. 9 shows a schematic view of a possible structure of the image display device according to the embodiment of the present application. As shown in fig. 9, the image display device 70 may include: a receiving module 71 and a display module 72.
The receiving module 71 is configured to receive a first input of a user to a first area of a first image displayed in a shooting preview interface, where the first image is a preview image of a first object and a second object acquired by a first camera, the first object blocks a partial area of the second object, and an area where the first object is located includes the first area. And a display module 72, configured to display, in response to the first input received by the receiving module 71, a target image in the first area, where the target image is an image of a second area corresponding to the position of the first area in the second image, and the second image is an image of the first object and the second object captured by the second camera.
In one possible implementation, the first input is a user input to a frame of a perspective region in the first image. The display module 72 is further configured to display a perspective region frame at a position corresponding to the first region before the target image is displayed in the first region, so as to mark the first region. The display module 72 is specifically configured to display the target image on the first image in a manner of superimposing the target image on the first image in the perspective area frame.
In a possible implementation manner, the shooting preview interface further includes a second image. The first image and the second image are displayed in the shooting preview interface in a split screen mode, or the first image is overlaid on the second image in the shooting preview interface; the first image and the second image are images of the same shooting object taken from different shooting angles, and the display position of the image of the first object in the first image differs from its display position in the second image.
In a possible implementation manner, the first image and the second image are displayed in the shooting preview interface in a split screen mode. The display module 72 is further configured to display a perspective area frame at a position corresponding to the first area before the target image is displayed in the first area, and to synchronously display a perspective picture frame at a position corresponding to the second area. The image display device 70 provided in the embodiment of the present application further includes a determining module. The determining module is configured to determine the image in the perspective picture frame as the target image.
In a possible implementation manner, the receiving module 71 is further configured to receive a second input of the perspective screen frame from the user after the display module displays the target image in the first area. The image display device 70 provided in the embodiment of the present application further includes: a moving module and a replacing module. Wherein the moving module is configured to move the perspective picture frame to the target position in the second image in response to the second input received by the receiving module 71. And the replacing module is used for replacing the target image with the image in the perspective picture frame after the target image is moved to the target position.
In one possible implementation, the first image is overlaid on the second image in the capture preview interface. The display module 72 is further configured to display a perspective area frame at a position corresponding to the first area before the target image is displayed in the first area. The receiving module 71 is further configured to receive a third input of the second image in the shooting preview interface from the user. The display module 72 is specifically configured to, in response to the third input received by the receiving module 71, move the second image, and update the image located in the perspective area frame in the second image to the first area in real time for display.
In one possible implementation manner, the image display device 70 provided in the embodiment of the present application further includes an updating module. The updating module is configured to update the transparency of the image located in the first area in the first image.
In a possible implementation manner, the receiving module 71 is further configured to receive a fourth input from the user before the updating module updates the transparency of the image located in the first area in the first image. The display module 72 is further configured to display a first control for adjusting the transparency of the image in the first area in response to the fourth input received by the receiving module 71. The receiving module 71 is further configured to receive a fifth input to the first control from the user. The updating module is specifically configured to update the transparency of the image in the first area in the first image to the transparency corresponding to the fifth input in response to the fifth input received by the receiving module 71.
The embodiment of the application provides an image display apparatus. When a first object blocks a partial area of a second object, the electronic device can shoot the first object and the second object through both a first camera and a second camera. Because the two cameras are located at different positions on the body of the electronic device, their shooting angles differ, so the electronic device can acquire images of the blocking object (that is, the first object) and the blocked main object (that is, the second object) from different angles and thereby capture the whole of the blocked main object across the two cameras. The user can perform a first input on a first area of the first image acquired by one camera (for example, the first camera) in the shooting preview interface, so that the electronic device displays the image of the blocked part of the main object in the image area corresponding to the blocking object. The image of the main object is thus displayed completely, and a picture with a perspective effect can be obtained in a single shot, which simplifies the user's operation and improves the efficiency with which the electronic device shoots pictures with a perspective effect.
The image display device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in an electronic apparatus. The device can be mobile electronic equipment or non-mobile electronic equipment. Illustratively, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited.
The image display device provided by the embodiment of the application can realize each process realized by the method embodiment, and can achieve the same technical effect, and for avoiding repetition, the details are not repeated here.
Optionally, as shown in fig. 10, an electronic device 90 provided in the embodiment of the present application further includes a processor 91, a memory 92, and a program or an instruction stored in the memory 92 and capable of running on the processor 91, where the program or the instruction is executed by the processor 91 to implement each process of the foregoing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The user input unit 107 is configured to receive a first input of a user to a first area of a first image displayed in a shooting preview interface, where the first image is a preview image of a first object and a second object acquired by a first camera, the first object blocks a partial area of the second object, and an area where the first object is located includes the first area.
And the display unit 106 is configured to display, in response to the first input, a target image in the first area, where the target image is an image of a second area corresponding to the position of the first area in the second image, and the second image is an image of the first object and the second object captured by the second camera.
The embodiment of the application provides an electronic device. When a first object blocks a partial area of a second object, the electronic device can shoot the first object and the second object through both a first camera and a second camera. Because the two cameras are located at different positions on the body of the electronic device, their shooting angles differ, so the electronic device can acquire images of the blocking object (that is, the first object) and the blocked main object (that is, the second object) from different angles and thereby capture the whole of the blocked main object across the two cameras. The user can perform a first input on a first area of the first image acquired by one camera (for example, the first camera) in the shooting preview interface, so that the electronic device displays the image of the blocked part of the main object in the image area corresponding to the blocking object. The image of the main object is thus displayed completely, and a picture with a perspective effect can be obtained in a single shot, which simplifies the user's operation and improves the efficiency with which the electronic device shoots pictures with a perspective effect.
Optionally, in this embodiment of the application, the first input is input by a user to the perspective area frame in the first image. The display unit 106 is further configured to display a perspective region frame at a position corresponding to the first region before the target image is displayed in the first region, so as to mark the first region. The display unit 106 is specifically configured to display the target image on the first image in a superimposed manner within the perspective area frame.
Optionally, in this embodiment of the application, the shooting preview interface further includes a second image. The first image and the second image are displayed in the shooting preview interface in a split screen mode, or the first image is overlaid on the second image in the shooting preview interface; the first image and the second image are images of the same shooting object taken from different shooting angles, and the display position of the image of the first object in the first image differs from its display position in the second image.
Optionally, in this embodiment of the present application, the first image and the second image are displayed in a split screen in the shooting preview interface. The display unit 106 is further configured to display a perspective area frame at a position corresponding to the first area before the target image is displayed in the first area, and synchronously display a perspective screen frame at a position corresponding to the second area. And a processor 110 for determining the image in the perspective picture frame as the target image.
Optionally, in this embodiment of the application, the user input unit 107 is further configured to receive a second input of the user to the perspective picture frame after the target image is displayed in the first area. The processor 110 is further configured to, in response to the second input, move the perspective picture frame to a target position in the second image, and replace the target image with the image in the perspective picture frame after it is moved to the target position.
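Purely as an illustration (the names, the drag-offset convention, and the clamping behavior are assumptions), handling of the second input could look like this: the perspective picture frame is shifted inside the second image and the target image is re-extracted from its new position:

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect

// Sketch: move the perspective picture frame by the drag offsets of the second
// input, keep it inside the second image, and crop the new target image.
fun onSecondInput(
    secondImage: Bitmap,
    pictureFrame: Rect,   // perspective picture frame within the second image
    dx: Int,              // horizontal drag offset of the second input
    dy: Int               // vertical drag offset of the second input
): Bitmap {
    pictureFrame.offset(dx, dy)
    // Clamp the frame so it stays inside the second image.
    pictureFrame.offsetTo(
        pictureFrame.left.coerceIn(0, secondImage.width - pictureFrame.width()),
        pictureFrame.top.coerceIn(0, secondImage.height - pictureFrame.height())
    )
    // The cropped region replaces the previously displayed target image.
    return Bitmap.createBitmap(
        secondImage, pictureFrame.left, pictureFrame.top,
        pictureFrame.width(), pictureFrame.height()
    )
}
```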
Optionally, in this embodiment of the application, in the shooting preview interface, the first image is overlaid on the second image. The display unit 106 is further configured to display the perspective area frame at a position corresponding to the first area before the target image is displayed in the first area. The user input unit 107 is further configured to receive a third input of the user to the second image in the shooting preview interface. The display unit 106 is specifically configured to, in response to the third input, move the second image and update, in real time, the image located in the perspective area frame in the second image to the first area for display.
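A rough sketch of this overlaid mode is given below, again only for illustration; the coordinate convention (the second image translated by an offset beneath a fixed perspective area frame) and all names are assumptions:

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect

// Sketch: the perspective area frame stays fixed while the third input drags
// the second image underneath it; whatever falls inside the frame is re-read
// on every move and displayed in the first area in real time.
fun onThirdInputMove(
    secondImage: Bitmap,
    areaFrame: Rect,     // perspective area frame, fixed in screen coordinates
    imageOffsetX: Int,   // current translation of the second image
    imageOffsetY: Int
): Bitmap {
    // Map the fixed frame into the coordinates of the moved second image.
    val srcLeft = (areaFrame.left - imageOffsetX)
        .coerceIn(0, secondImage.width - areaFrame.width())
    val srcTop = (areaFrame.top - imageOffsetY)
        .coerceIn(0, secondImage.height - areaFrame.height())
    return Bitmap.createBitmap(
        secondImage, srcLeft, srcTop, areaFrame.width(), areaFrame.height()
    )
}
```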
Optionally, in this embodiment of the application, the processor 110 is further configured to update the transparency of the image located in the first area in the first image.
Optionally, in this embodiment of the application, the user input unit 107 is further configured to receive a fourth input of the user before the transparency of the image located in the first area in the first image is updated. The display unit 106 is further configured to display, in response to the fourth input, a first control, where the first control is used to adjust the transparency of the image in the first area. The user input unit 107 is further configured to receive a fifth input of the user to the first control. The processor 110 is specifically configured to, in response to the fifth input, update the transparency of the image located in the first area in the first image to the transparency corresponding to the fifth input.
The electronic device provided in this embodiment of the application can implement each process implemented in the foregoing method embodiments and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be understood that, in the embodiment of the present application, the input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the graphics processing unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 109 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium. When the program or the instruction is executed by a processor, the processes of the foregoing method embodiments are implemented and the same technical effects can be achieved. To avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, and the communication interface is coupled to the processor. The processor is configured to run a program or an instruction to implement each process of the foregoing method embodiments and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. An image display method, characterized in that the method comprises:
receiving a first input of a user to a first area of a first image displayed in a shooting preview interface, wherein the first image is a preview image of a first object and a second object acquired by a first camera, the first object covers a partial area of the second object, and the area where the first object is located comprises the first area;
in response to the first input, updating a transparency of an image located in the first area in the first image, and displaying a target image in the first area through a perspective effect, wherein the target image is an image of a second area corresponding to the position of the first area in a second image, and the second image is an image of the first object and the second object acquired by a second camera; shooting angles of the first camera and the second camera are different, and blocked parts of the second object in the first image and in the second image are different.
2. The method of claim 1, wherein the first input is a user input to a perspective area frame in the first image;
before the target image is displayed in the first area through the perspective effect, the method further comprises:
displaying the perspective area frame at a position corresponding to the first area to mark the first area;
the displaying the target image in the first area through the perspective effect comprises:
and displaying the target image on the first image in a superimposed manner within the perspective area frame.
3. The method according to claim 1 or 2, wherein the second image is further included in the shooting preview interface;
the first image and the second image are displayed in a split screen mode in the shooting preview interface, or the first image is overlaid on the second image in the shooting preview interface;
the first image and the second image are images of the same shooting object at different shooting angles, and the display position of the image of the first object in the first image is different from the display position of the image of the first object in the second image.
4. The method of claim 3, wherein the first image is displayed in the capture preview interface in a split screen with the second image;
before the target image is displayed in the first area through the perspective effect, the method further comprises:
displaying a perspective area frame at a position corresponding to the first area, and synchronously displaying a perspective picture frame at a position corresponding to the second area;
and determining the image in the perspective picture frame as the target image.
5. The method of claim 4, wherein after the target image is displayed in the first area through the perspective effect, the method further comprises:
receiving a second input of the user to the perspective picture frame;
in response to the second input, moving the perspective picture frame to a target position in the second image, and replacing the target image with the image in the perspective picture frame after the movement to the target position.
6. The method of claim 3, wherein in the capture preview interface, the first image is overlaid on the second image;
before the target image is displayed in the first area through the perspective effect, the method further comprises:
displaying a perspective area frame at a position corresponding to the first area;
receiving a third input of the second image in the shooting preview interface by a user;
the displaying the target image in the first area through the perspective effect comprises:
and, in response to the third input, moving the second image and updating, in real time, the image located in the perspective area frame in the second image to the first area for display.
7. The method of claim 1, wherein before the updating the transparency of the image located in the first area in the first image, the method further comprises:
receiving a fourth input of the user;
in response to the fourth input, displaying a first control for adjusting a transparency of an image in the first area;
receiving a fifth input of the first control by a user;
the updating the transparency of the image in the first area in the first image comprises:
in response to the fifth input, updating the transparency of the image located in the first area in the first image to the transparency corresponding to the fifth input.
8. An image display device, characterized by comprising: a receiving module and a display module;
the receiving module is configured to receive a first input of a user to a first area of a first image displayed in a shooting preview interface, wherein the first image is a preview image of a first object and a second object acquired by a first camera, the first object covers a partial area of the second object, and the area where the first object is located comprises the first area;
the display module is configured to, in response to the first input received by the receiving module, update a transparency of an image located in the first area in the first image, and display a target image in the first area through a perspective effect, wherein the target image is an image of a second area corresponding to the position of the first area in a second image, and the second image is an image of the first object and the second object acquired by a second camera; shooting angles of the first camera and the second camera are different, and blocked parts of the second object in the first image and in the second image are different.
9. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the image display method according to any one of claims 1 to 7.
CN202011401079.0A 2020-12-02 2020-12-02 Image display method and device and electronic equipment Active CN112584040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011401079.0A CN112584040B (en) 2020-12-02 2020-12-02 Image display method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011401079.0A CN112584040B (en) 2020-12-02 2020-12-02 Image display method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112584040A CN112584040A (en) 2021-03-30
CN112584040B true CN112584040B (en) 2022-05-17

Family

ID=75126988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011401079.0A Active CN112584040B (en) 2020-12-02 2020-12-02 Image display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112584040B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113747058B (en) * 2021-07-27 2023-06-23 荣耀终端有限公司 Image content shielding method and device based on multiple cameras

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015192547A1 (en) * 2014-06-18 2015-12-23 中兴通讯股份有限公司 Method for taking three-dimensional picture based on mobile terminal, and mobile terminal
CN109035185A (en) * 2018-06-29 2018-12-18 努比亚技术有限公司 A kind of image processing method and terminal

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791393B (en) * 2016-12-20 2019-05-17 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN108650463B (en) * 2018-05-15 2019-11-26 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN110661978B (en) * 2019-10-29 2021-03-23 维沃移动通信有限公司 Photographing method and electronic equipment
CN111541845B (en) * 2020-04-30 2022-06-24 维沃移动通信(杭州)有限公司 Image processing method and device and electronic equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015192547A1 (en) * 2014-06-18 2015-12-23 中兴通讯股份有限公司 Method for taking three-dimensional picture based on mobile terminal, and mobile terminal
CN109035185A (en) * 2018-06-29 2018-12-18 努比亚技术有限公司 A kind of image processing method and terminal

Also Published As

Publication number Publication date
CN112584040A (en) 2021-03-30

Similar Documents

Publication Publication Date Title
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN112135049B (en) Image processing method and device and electronic equipment
CN112492212B (en) Photographing method and device, electronic equipment and storage medium
CN113093968B (en) Shooting interface display method and device, electronic equipment and medium
CN112954210B (en) Photographing method and device, electronic equipment and medium
CN112738402B (en) Shooting method, shooting device, electronic equipment and medium
CN113794829B (en) Shooting method and device and electronic equipment
CN112995500A (en) Shooting method, shooting device, electronic equipment and medium
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN113329172B (en) Shooting method and device and electronic equipment
CN112637515B (en) Shooting method and device and electronic equipment
CN111953900B (en) Picture shooting method and device and electronic equipment
CN112312019A (en) Light supplementing method and device and electronic equipment
CN112584040B (en) Image display method and device and electronic equipment
CN112929566B (en) Display control method, display control device, electronic apparatus, and medium
CN112702531B (en) Shooting method and device and electronic equipment
CN111586305B (en) Anti-shake method, anti-shake device and electronic equipment
CN114143461B (en) Shooting method and device and electronic equipment
CN112333389B (en) Image display control method and device and electronic equipment
CN112492205B (en) Image preview method and device and electronic equipment
CN114245017A (en) Shooting method and device and electronic equipment
CN113286085A (en) Display control method and device and electronic equipment
CN112165584A (en) Video recording method, video recording device, electronic equipment and readable storage medium
CN113014799A (en) Image display method and device and electronic equipment
CN114285980B (en) Video recording method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant