CN115442509A - Shooting method, user interface and electronic equipment - Google Patents

Shooting method, user interface and electronic equipment

Info

Publication number: CN115442509A
Authority: CN (China)
Prior art keywords: area, camera, image, electronic device, preview
Legal status: Granted
Application number: CN202110608969.7A
Other languages: Chinese (zh)
Other versions: CN115442509B
Inventor: 牛思月
Assignee: Honor Device Co Ltd
Events: application filed by Honor Device Co Ltd; priority to CN202110608969.7A; publication of CN115442509A; application granted; publication of CN115442509B
Current legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/76: Television signal recording

Landscapes

  • Engineering & Computer Science
  • Multimedia
  • Signal Processing
  • Studio Devices

Abstract

The present application discloses a shooting method, a user interface, and an electronic device. In the method, the electronic device can display a user interface that includes a plurality of preview areas, each of which displays the image captured by one of a plurality of cameras. The electronic device can detect a user operation on one preview area and, in response, adjust the cropping area of the image captured by the camera corresponding to that preview area, thereby changing the composition of the image. This weakens the influence of the differing shooting angles of the cameras in the multi-view shooting mode, allows the user to manually adjust the composition of each image, and yields richer picture information.

Description

Shooting method, user interface and electronic equipment
Technical Field
The present application relates to the field of terminal and communication technologies, and in particular, to a shooting method, a user interface, and an electronic device.
Background
Currently, portable electronic devices (such as mobile phones and tablet computers) are generally configured with a plurality of cameras, such as a front camera, a wide-angle camera, and a telephoto camera. To provide a richer shooting and creative experience, more and more electronic devices support simultaneous shooting with multiple cameras.
Disclosure of Invention
According to the method, during multi-view shooting the electronic device can adjust, according to its own tilt angle, the images that one or more cameras display in the preview frame, changing the composition of those images. This weakens the influence of the differing shooting angles of the cameras in the multi-view shooting mode and yields richer picture information.
In a first aspect, the present application provides a shooting method applied to an electronic device having M cameras, where M is greater than or equal to 2 and M is a positive integer. The method includes: the electronic device starts a first camera and a second camera among the M cameras; the electronic device displays a preview interface, where the preview interface includes a first area and a second area, the first area displays part of the image captured by the first camera, and the second area displays all or part of the image captured by the second camera; the electronic device acquires its own tilt angle; the electronic device detects a first operation in the first area, and in response to the first operation, changes a first preview image displayed in the first area into a second preview image, where the first preview image and the second preview image are both obtained by cropping all the images captured by the first camera, and within those images the position of the second preview image differs from the position of the first preview image. When the tilt angle is greater than a preset angle, the first camera is the front camera; when the tilt angle is smaller than the preset angle, the first camera is the rear camera.
By implementing the method of the first aspect, the electronic device can, during multi-view shooting, change the center position of a preview image in response to a user operation, thereby changing the composition of the preview image. Which preview area can be adjusted depends on the tilt angle of the electronic device, because when the tilt angle is too large the composition of the front camera suffers, and when it is too small the composition of the rear camera suffers.
With reference to the first aspect, in a possible implementation manner, the first preview image is obtained by the electronic device cropping all images collected by the first camera according to a first cropping area, and the second preview image is obtained by the electronic device cropping all images collected by the first camera according to a second cropping area, where a center position of the first cropping area coincides with a center position of an area where all images collected by the first camera are located, and a center position of the second cropping area is located below the center position of the first cropping area.
When the electronic device enters multi-view shooting, the preview image it displays by default may be a centered crop of the full image captured by the camera; the electronic device can then display a non-centered crop, giving the user a way to adjust the composition.
During multi-view shooting, because of the tilt angle of the electronic device, the subject of the photographed object generally sits near the lower half of the image captured by the camera.
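To make the relationship between the two cropping areas concrete, the following is a minimal Kotlin sketch (not taken from the patent; the function names are illustrative) that computes a centered crop and a same-size crop whose center is shifted downward, clamped to the full frame:

```kotlin
import android.graphics.Rect

// Centered crop: the crop center coincides with the center of the full frame,
// matching the first cropping area described above.
fun centeredCrop(frameW: Int, frameH: Int, cropW: Int, cropH: Int): Rect {
    val left = (frameW - cropW) / 2
    val top = (frameH - cropH) / 2
    return Rect(left, top, left + cropW, top + cropH)
}

// Second cropping area: same size, center moved downward by dy, kept in-frame.
fun shiftedDown(crop: Rect, frameH: Int, dy: Int): Rect {
    val shift = dy.coerceAtMost(frameH - crop.bottom).coerceAtLeast(0)
    return Rect(crop.left, crop.top + shift, crop.right, crop.bottom + shift)
}
```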
With reference to the first aspect, in one possible implementation manner, the first cropping area and the second cropping area are the same size.
With reference to the first aspect, in a possible implementation manner, the tilt angle of the electronic device includes any one of: the inclination angle of the electronic device to the horizontal plane and the inclination angle of the electronic device to the vertical plane.
With reference to the first aspect, in a possible implementation manner, the preset angle is 90 degrees.
With reference to the first aspect, in a possible implementation manner, the first operation includes a sliding operation, and in all images acquired by the first camera, a direction in which the center position of the second preview image points to the center position of the first preview image is the same as a sliding direction of the sliding operation.
In this way, the user can manually adjust the composition of the image and change which part of the image captured by the camera is displayed in the user interface.
That is, by a sliding operation the user can change which range of the image captured by the first camera is displayed in the first area.
Specifically, if the first operation is a left-slide operation, the second preview image is closer to the right boundary of all images captured by the first camera than the first preview image.
Specifically, if the first operation is a right-slide operation, the second preview image is closer to the left boundary of all images captured by the first camera than the first preview image.
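A hedged sketch of this mapping (helper name assumed, not from the patent): the crop window moves opposite to the swipe delta, so that the vector from the new preview center to the old one points in the sliding direction:

```kotlin
import android.graphics.Rect

// Pan the crop window opposite to the swipe, clamped to the frame bounds,
// so a leftward swipe reveals content nearer the right boundary.
fun panCrop(crop: Rect, frameW: Int, frameH: Int, swipeDx: Int, swipeDy: Int): Rect {
    val dx = (-swipeDx).coerceIn(-crop.left, frameW - crop.right)
    val dy = (-swipeDy).coerceIn(-crop.top, frameH - crop.bottom)
    return Rect(crop.left + dx, crop.top + dy, crop.right + dx, crop.bottom + dy)
}
```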
With reference to the first aspect, in a possible implementation manner, before the electronic device detects the first operation in the first area, the method further includes: the electronic equipment displays first prompt information in the first area, and the first prompt information is used for prompting a user to change the range of partial or all images acquired by the first camera in the first area.
Therefore, the user can adjust the composition effect of the image according to the prompt information displayed in the electronic equipment, and the experience of the user is enhanced.
With reference to the first aspect, in a possible implementation manner, the preview interface further includes a first window, and the first window is located in the first area; the method further comprises the following steps: the electronic equipment displays all images acquired by a camera of the electronic equipment in real time in the first window; when the electronic equipment displays the second preview image, the electronic equipment displays a second cutting area in the first window, and the second cutting area is an area of the second preview image in all images acquired by the first camera.
The first window may be a viewfinder window through which the user can observe the positional relationship between the preview image displayed by the electronic device and all the images captured by the camera.
With reference to the first aspect, in a possible implementation manner, after the electronic device changes the first preview image displayed in the first area to the second preview image, the method further includes: the electronic equipment receives a second operation for photographing, and in response to the second operation, the electronic equipment saves the image displayed in the preview interface as a picture, wherein the picture comprises the second preview image.
Therefore, in the multi-scene shooting process, the user can store the image with the well-adjusted composition effect into a picture.
With reference to the first aspect, in a possible implementation manner, the electronic device receives a third operation for recording a video; in response to the third operation, the electronic device starts recording the video and displays a shooting interface, where the shooting interface includes the N areas.
With reference to the first aspect, in one possible implementation manner, the electronic device detects a fourth operation for stopping the video recording; in response to the fourth operation, the electronic device stops recording the video and generates a video file.
Therefore, in the multi-scene shooting process, the user can store the image with the well-adjusted composition effect as a video.
With reference to the first aspect, in a possible implementation manner, the method further includes: the electronic equipment displays a third preview image in the second area, and receives a fifth operation acting on the second area; in response to the fifth operation, the electronic equipment displays a fourth preview image in the second area; the third preview image and the fourth preview image are obtained by cutting out all images collected by the second camera, and the position of the third preview image is different from the position of the fourth preview image in all images collected by the second camera.
That is, in addition to the camera affected by the tilt angle, the preview images that the other cameras display in the user interface can also be adjusted by user operations to change their composition.
In a second aspect, the present application provides a shooting method applied to an electronic device having M cameras, where M is greater than or equal to 2 and M is a positive integer. The method includes: the electronic device starts a first camera and a second camera among the M cameras; the electronic device displays a preview interface, where the preview interface includes a first area and a second area, the first area displays part of the image captured by the first camera, and the second area displays all or part of the image captured by the second camera; the electronic device acquires its own tilt angle; the electronic device changes, according to the tilt angle, a first preview image displayed in the first area into a second preview image, where the first preview image and the second preview image are both obtained by cropping all the images captured by the first camera, within those images the position of the second preview image differs from the position of the first preview image, and the larger the tilt angle, the farther the second preview image is from the first preview image. When the tilt angle is greater than a preset angle, the first camera is the front camera; when the tilt angle is smaller than the preset angle, the first camera is the rear camera.
By implementing the method provided by the second aspect, the electronic device can automatically adjust the composition effect of the preview image according to the inclination angle of the electronic device in the multi-scene shooting process, and the experience of a user is improved.
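One way to realize "the larger the tilt angle, the farther the second preview image is from the first" is a shift proportional to the deviation from the 90-degree preset. This is a sketch under that assumption; the linear scaling, the cap, and the names are illustrative, not from the patent:

```kotlin
import kotlin.math.abs
import kotlin.math.roundToInt

// Map the tilt's deviation from the preset angle to a downward crop shift
// in pixels; a larger deviation yields a larger shift, capped at maxShiftPx.
fun autoShiftPx(tiltDeg: Double, presetDeg: Double = 90.0, maxShiftPx: Int = 200): Int {
    val deviation = (abs(tiltDeg - presetDeg) / presetDeg).coerceIn(0.0, 1.0)
    return (deviation * maxShiftPx).roundToInt()
}
```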
With reference to the second aspect, in a possible implementation manner, after the electronic device changes the first preview image displayed in the first area to the second preview image, the method further includes: the electronic equipment displays second prompt information in the first area, and the second prompt information is used for prompting a user that the range of all or part of the images acquired by the first camera displayed in the first area is changed currently.
Therefore, the electronic equipment can display prompt information after the composition effect is adjusted, so as to prompt a user that the adjustment of the composition effect is finished at present, and enhance the experience of the user.
In a third aspect, the present application provides an electronic device, including: a display screen, M cameras, a touch sensor, a memory, one or more processors, a plurality of applications, and one or more programs, where M is greater than or equal to 2 and M is a positive integer. The one or more programs are stored in the memory; when the one or more processors execute the one or more programs, the electronic device is caused to perform the method described in the first aspect or any one of the implementation manners of the first aspect.
In a fourth aspect, the present application provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to cause the computer device to perform the method as described in the first aspect or any one of the embodiments of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising instructions which, when run on an electronic device, cause the electronic device to perform the method as described in the first aspect or any one of the embodiments of the first aspect.
In a sixth aspect, the present application provides a computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform the method as described in the first aspect or any one of the implementation manners of the first aspect.
Drawings
Fig. 1 is a multi-view shooting scene provided in an embodiment of the present application;
Fig. 2 is another multi-view shooting scene provided in an embodiment of the present application;
Fig. 3 is a schematic diagram of a hardware structure of an electronic device 100 according to an embodiment of the present application;
Fig. 4 is a block diagram of a software structure of an electronic device 100 according to an embodiment of the present application;
Fig. 5 and Figs. 6A-6B illustrate some of the user interfaces provided by embodiments of the present application;
Figs. 7A-7B are schematic diagrams illustrating a shooting method according to an embodiment of the present application;
Figs. 8A-8E and 9A-9E illustrate some of the user interfaces provided by embodiments of the present application;
Fig. 10 is an overall flowchart of a shooting method provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and in detail below with reference to the accompanying drawings. In the description of the embodiments of the present application, "/" means "or"; for example, A/B may mean A or B. "And/or" merely describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality of" means two or more.
In the following, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of this application, "a plurality of" means two or more unless indicated otherwise.
The term "User Interface (UI)" in the following embodiments of the present application is a media interface for interaction and information exchange between an application program or an operating system and a user, and implements conversion between an internal form of information and a form acceptable to the user. The user interface is source code written by java, extensible markup language (XML) and other specific computer languages, and the interface source code is analyzed and rendered on the electronic equipment and finally presented as content which can be identified by a user. A common presentation form of the user interface is a Graphical User Interface (GUI), which refers to a user interface related to computer operations and displayed in a graphical manner. It may be a visual interface element such as text, an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc. displayed in a display of the electronic device.
For ease of understanding, the related terms and concepts related to the embodiments of the present application will be described below.
(1) Viewing range
The field of view (FOV) refers to the range that the camera can cover. The viewing range of a camera is determined by the design of the optical system of the camera. The smaller the focal length, the larger the viewing range, and the larger the focal length, the smaller the viewing range. For example, wide-angle cameras have a large viewing range. The user may adjust the view of the camera by moving the electronic device.
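As a general optics relation (not stated in the patent), the horizontal field of view of a rectilinear lens with focal length f and sensor width w can be written as:

```latex
\mathrm{FOV}_{h} = 2\arctan\left(\frac{w}{2f}\right)
```

This is consistent with the text above: a smaller focal length f gives a larger viewing range, and a larger f gives a smaller one.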
(2) Cropping area
The image that a camera displays in the user interface is specifically the image within a certain cropping area of all the images captured by that camera. The larger the cropping area, the larger the portion of the captured image displayed in the interface; the smaller the cropping area, the smaller that portion.
The viewing range and the cropping area thus correspond, respectively, to the area occupied by all the images captured by the camera and the part of those images that is finally displayed in the user interface.
(3) Preview area
The preview area refers to an area in the user interface for displaying an image acquired by the camera in real time. A camera may display all or part of the image captured by the camera in the corresponding preview area.
When the electronic device starts a plurality of cameras to simultaneously acquire images and displays the acquired images in the user interface, the user interface may include a plurality of preview areas, the plurality of preview areas correspond to the plurality of cameras one to one, and each preview area displays all or part of images acquired by the corresponding camera.
In addition, the electronic device may determine the size of the cropping area according to the size of the preview area; for example, when the aspect ratio of the preview area is 16:9, the electronic device may set the cropping area to the same 16:9 aspect ratio.
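A minimal sketch of that sizing rule (assuming the cropping area simply adopts the preview area's aspect ratio; the function name is illustrative):

```kotlin
// Largest crop with the preview's aspect ratio that fits inside the camera frame.
fun cropSizeFor(frameW: Int, frameH: Int, previewW: Int, previewH: Int): Pair<Int, Int> {
    val target = previewW.toDouble() / previewH
    return if (frameW.toDouble() / frameH > target) {
        Pair((frameH * target).toInt(), frameH)  // frame wider than preview: limit width
    } else {
        Pair(frameW, (frameW / target).toInt())  // frame taller than preview: limit height
    }
}
```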
One technique that enables an electronic device to obtain images from a plurality of cameras at the same time is multi-view shooting.
Multi-view shooting may refer to a shooting mode of the electronic device in which the electronic device starts a plurality of cameras (which may include a front camera and a rear camera) to capture images simultaneously; the user interface of the electronic device may include a plurality of preview areas, and the images captured by the cameras are displayed in their corresponding preview areas. Thus, when the electronic device receives the user's operation of photographing or recording, it can present the effect of several cameras photographing or recording at the same time. In particular, during video preview, video recording, or playback of a recorded video, the display screen can simultaneously display multiple images from the multiple cameras on the same user interface, either stitched together or in picture-in-picture form; the same applies to the photographing preview.
In the embodiments of the present application, multi-view shooting may include dual-view photographing, dual-view video recording, multi-view photographing, multi-view video recording, and the like; the name "multi-view shooting" used in the embodiments of the present application does not limit the embodiments in any way.
When a user uses the multi-view shooting mode to take a selfie and shoot an external scene at the same time, the front camera and the rear camera capture images simultaneously; therefore, when the user adjusts the shooting angle, the viewing angles of the front camera and the rear camera are affected at the same time.
Fig. 1 shows a multi-view shooting scene.
To obtain a better selfie effect, the user often adjusts the electronic device to a shooting angle at which the front camera looks slightly downward.
As shown in fig. 1 (a), when the user adjusts the angle between the electronic device and the horizontal plane to α (α < 90°) and shoots in the multi-view shooting mode, the rear camera is at an upward shooting angle and the front camera is at a downward shooting angle. Fig. 1 (b) schematically shows images captured at this shooting angle in the multi-view shooting mode. As shown in fig. 1 (b), image a1 is captured by the rear camera shooting the subject from below, and image a2 is captured by the front camera shooting the user from slightly above. At this angle the front camera makes the person in the image appear smaller, giving a better selfie effect. Meanwhile, however, the rear camera shoots the scene at an upward angle, so its viewing range tends to sit higher than the user's horizontal line of sight and its framing is poor; after the image captured by the rear camera is center-cropped, the image finally presented in the user interface may not contain the complete scene the user wants, or the scene subject may not be at the center of the corresponding image area (for example, the area where image a1 is located).
Fig. 2 shows another multi-shot scene.
When a user shoots a scene with the rear camera, the electronic device is often adjusted to a shooting angle at which the rear camera looks slightly downward.
As shown in fig. 2 (a), when the user adjusts the angle between the electronic device and the horizontal plane to β (β > 90°) and shoots in the multi-view shooting mode, the rear camera is at a downward shooting angle and the front camera is at an upward shooting angle. Fig. 2 (b) schematically shows images captured at this shooting angle in the multi-view shooting mode. As shown in fig. 2 (b), image b1 is captured by the rear camera shooting the subject from above, and image b2 is captured by the front camera shooting the user from below. The rear camera can capture the complete scene at the downward angle, but the front camera shoots the user from below, which rarely yields a good selfie effect; moreover, the viewing range of the front camera then tends to sit higher than it would with the device held vertical, so its framing is poor, and after the image captured by the front camera is center-cropped, the user's face is not at the center of the corresponding image area (for example, the area where image b2 is located).
From the above two multi-view shooting scenes it can be seen that in the multi-view shooting mode, when the electronic device cannot accommodate the front camera and the rear camera shooting at the same time, the composition of people and scenery in the images degrades. When the user wants a better selfie effect, the scenery shot suffers and the user may not obtain a complete scenery composition; when the user wants a complete scenery composition, the selfie suffers and the user may not obtain a composition with the face at the center of the image.
Therefore, how to balance the composition of the images that different cameras display in their corresponding preview areas in the multi-view shooting mode is a problem that urgently needs to be solved.
During multi-view shooting, in order to display multiple image streams in the user interface at the same time, the electronic device tends to crop the full images captured by the cameras so that the cropped images can be stitched and displayed in the user interface; the images displayed in the user interface are therefore images cropped by the electronic device. That is, during multi-view shooting, the viewing range of a camera is larger than the cropped range of the image it captures, and by adjusting the position of the cropping area the user can make the displayed image contain the content the user wants.
Based on the above analysis, an embodiment of the present application provides a shooting method. In this method, the electronic device can start a plurality of cameras to photograph or record at the same time and display a user interface containing a plurality of preview areas, each showing the images captured by one camera. The electronic device can detect a user operation on one of the preview areas and, in response, adjust the cropping area of the image captured by the corresponding camera, thereby changing the composition of the image. In addition, the electronic device can display prompt information in one or more preview areas of the user interface according to its tilt angle, prompting the user to adjust the cropping area of the image in that preview area.
In general, the shooting method can adjust which content of the images captured by different cameras is displayed in the user interface in the multi-view shooting mode, weakening the influence of the differing shooting angles of the cameras during multi-view shooting and letting the user manually adjust the composition of each image to obtain richer picture information. Meanwhile, the electronic device can display prompt information according to its tilt angle to prompt the user to adjust the images of particular cameras (for example, the front camera or the rear camera), improving the user experience.
The tilt angle of the electronic device refers to the angle between the plane of the display screen of the electronic device and the horizontal plane, for example the angle α shown in fig. 1 (a) or the angle β shown in fig. 2 (a). By changing the tilt angle, the user can change the shooting angle of the electronic device and adjust the framing of the cameras, obtaining different shooting effects.
The electronic device can display the prompt information in different preview areas according to its own tilt angle; at different tilt angles the prompt appears in different preview areas of the user interface. For example, when the electronic device detects that its tilt angle is greater than 90°, it may display the prompt in the preview area corresponding to the front camera; when it detects that its tilt angle is less than 90°, it may display the prompt in the preview area corresponding to the rear camera. The relationship between the tilt angle and the specific preview area is described in the following embodiments and is not repeated here.
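The 90° rule above can be sketched as follows (the enum and function names are illustrative; the preset angle follows the text):

```kotlin
enum class HintTarget { FRONT_PREVIEW, REAR_PREVIEW, NONE }

// Choose which preview area should show the composition-adjustment prompt.
fun hintTargetFor(tiltDeg: Double, presetDeg: Double = 90.0): HintTarget = when {
    tiltDeg > presetDeg -> HintTarget.FRONT_PREVIEW // front-camera composition suffers
    tiltDeg < presetDeg -> HintTarget.REAR_PREVIEW  // rear-camera composition suffers
    else -> HintTarget.NONE
}
```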
Fig. 3 shows a hardware configuration diagram of the electronic device 100.
The electronic device 100 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a Personal Digital Assistant (PDA), an Augmented Reality (AR) device, a Virtual Reality (VR) device, an Artificial Intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device, and/or a smart city device, and the embodiment of the present application is not particularly limited to the specific type of the electronic device.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than illustrated, or some components may be combined, some components may be separated, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
In some embodiments, in a multi-shot scene, the processor 110 may synthesize multiple frames of images from multiple cameras 193. For example, multiple image streams from multiple cameras 193 are merged into one image stream.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (BT), global Navigation Satellite System (GNSS), frequency Modulation (FM), near Field Communication (NFC), infrared (IR), and the like.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
In some embodiments, in a multi-view shooting scenario, the display screen 194 may display multiple images from the multiple cameras 193 by stitching or picture-in-picture, so that the multiple images from the multiple cameras 193 can be presented to the user at the same time.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
In some embodiments, camera 193 may be disposed on both sides of electronic device 100. A camera in the same plane as the display screen 194 of the electronic device may be referred to as a front-facing camera and a camera in the plane of the back cover of the electronic device may be referred to as a back-facing camera. The front camera may be used to collect an image of the photographer himself facing the display screen 194, and the rear camera may be used to collect an image of a photographic subject (e.g., a landscape) facing the photographer.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The internal memory 121 may include one or more Random Access Memories (RAMs) and one or more non-volatile memories (NVMs).
The random access memory may include static random-access memory (SRAM), dynamic random-access memory (DRAM), synchronous dynamic random-access memory (SDRAM), double data rate synchronous dynamic random-access memory (DDR SDRAM), such as fifth generation DDR SDRAM generally referred to as DDR5 SDRAM, and the like; the nonvolatile memory may include a magnetic disk storage device, a flash memory (flash memory).
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The pressure sensor 180A is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used for navigation, somatosensory gaming scenes.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is at rest. The method can also be used for recognizing the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
In some embodiments, the gyro sensor 180B or the acceleration sensor 180E may be used for the electronic device 100 to acquire the tilt angle of itself.
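As an illustration of how the tilt angle could be obtained from the acceleration sensor on Android, here is a sketch under the assumption that the device is held still, so the accelerometer reading approximates gravity; the class name is hypothetical and this is not the patent's implementation:

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import kotlin.math.acos
import kotlin.math.sqrt

class TiltMonitor(context: Context) : SensorEventListener {
    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager

    // Angle between the screen plane and the horizontal plane:
    // 0 deg = lying flat face up, 90 deg = held vertical,
    // >90 deg = screen tilted to face slightly downward.
    var tiltDegrees: Double = 90.0
        private set

    fun start() {
        sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_UI)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        val (x, y, z) = event.values
        val g = sqrt((x * x + y * y + z * z).toDouble())
        if (g > 1e-6) {
            // z is the gravity component along the screen normal; the angle of
            // the normal to the vertical equals the screen's tilt to horizontal.
            tiltDegrees = Math.toDegrees(acos((z / g).coerceIn(-1.0, 1.0)))
        }
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) {}
}
```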
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, the electronic device 100 may utilize the distance sensor 180F to range to achieve fast focus.
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation acting thereon or nearby. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The electronic device may be a portable terminal device running iOS, Android, Microsoft, or another operating system, such as a mobile phone, a tablet computer, or a wearable device, and may also be a non-portable terminal device such as a laptop computer with a touch-sensitive surface or touch panel, or a desktop computer with a touch-sensitive surface or touch panel. The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application uses an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
Fig. 4 is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. And the layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 4, the application packages may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 4, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and answered, browsing history and bookmarks, phone books, etc.
In some embodiments, the content provider may be used to acquire and store images captured by a camera that may be accessed by a camera application.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to construct an application. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
In some embodiments, the view system may be used to build a user interface for a camera application, and in particular with respect to the user interface for the camera application, reference may be made to the user interface referred to in the UI embodiments described below.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without user interaction, such as notifications of download completion or message alerts. The notification manager may also present notifications in graphical or scroll-bar text form in the status bar at the top of the system, such as notifications of applications running in the background, or as a dialog window on the screen. It may, for example, show text information in the status bar, play a prompt tone, vibrate the electronic device, or flash an indicator light.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application layer and the application framework layer as binary files. The virtual machine is used for performing the functions of object life cycle management, stack management, thread management, safety and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide a fusion of 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes exemplary workflow of the software and hardware of the electronic device 100 in connection with capturing a photo scene.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including touch coordinates, timestamp of the touch operation, and other information). The raw input events are stored in the kernel layer. And the application program framework layer acquires the original input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a touch click operation, and taking a control corresponding to the click operation as a control of a camera application icon as an example, the camera application calls an interface of the application framework layer, starts the camera application, further starts the camera drive by calling the kernel layer, and captures a still image or a video through the camera 193.
An exemplary user interface for an application menu on electronic device 100 is described below.
Fig. 5 illustrates an exemplary user interface 21 for an application menu on the electronic device 100.
As shown in fig. 5, the user interface 21 includes: status bar 211, calendar indicator 212, weather indicator 213, gallery application 214, camera application 215.
The status bar 211 may include, among other things, one or more signal strength indicators for mobile communication signals, one or more signal strength indicators for wireless fidelity (WiFi) signals, a battery status indicator, and a time indicator. Calendar indicator 212 may be used to indicate the current time. Weather indicator 213 may be used to indicate the weather type. The gallery application 214 may be used to save pictures taken by the electronic device 100, and the camera application 215 may be used to turn on a camera of the electronic device and provide a user interface for displaying images captured by the camera.
In some embodiments, the user interface 21 exemplarily shown in fig. 5 may be a home interface (Home screen).
It is understood that fig. 5 is only an exemplary illustration of a user interface on the electronic device 100 and should not be construed as a limitation on the embodiments of the present application.
A typical shooting scenario to which the present application relates is described below: the dual-view shooting scenario.
As shown in fig. 5, the electronic device 100 may detect a touch operation by the user on the camera application 215 and, in response to the operation, display the user interface 31 shown in fig. 6A. The user interface 31 may be the user interface of the default photographing mode of the camera application and may be used by the user to photograph through the default rear camera. The camera application is an application for image shooting on electronic devices such as smartphones and tablet computers; the present application does not limit its name. That is, the user may open the user interface 31 of the camera application by clicking the camera application 215 shown in fig. 5. Without being limited thereto, the user may also open the user interface 31 from other applications; for example, the user may click a shooting control in "WeChat" to open the user interface 31. WeChat is a social application program that can support a user in sharing a shot photo with others.
Fig. 6A illustrates one user interface 31 of a camera application on an electronic device such as a smartphone. As shown in fig. 6A, the user interface 31 may include: a preview box 311, a shooting mode list 312, a gallery shortcut 313, a shutter control 314, a camera flip control 315. Wherein:
the preview pane 311 may be used to display images acquired by the camera 193 in real time. The electronic device 100 may refresh the display content therein in real-time to facilitate the user to preview the image currently captured by the camera 193.
One or more shooting mode options may be displayed in the shooting mode list 312. The one or more shooting mode options may include: a portrait mode option 312A, a video mode option 312B, a photograph mode option 312C, a dual view photograph mode option 312D, and a more option 312E. The one or more shooting mode options may be presented on the interface as textual information, such as "portrait", "video", "take", "double-shot", "more". Without limitation, the one or more shooting mode options may also appear as icons or other forms of Interactive Elements (IEs) on the interface.
The gallery shortcut 313 may be used to open a gallery application. In response to a user operation, such as a clicking operation, acting on the gallery shortcut 313, the electronic device 100 may open the gallery application. Thus, the user can conveniently view the shot photos and videos without exiting the camera application program and then starting the gallery application program. The gallery application is an application for managing pictures on electronic devices such as smart phones and tablet computers, and may also be referred to as "albums," and this embodiment does not limit the name of the application. The gallery application may support various operations, such as browsing, editing, deleting, selecting, etc., by the user on the pictures stored on the electronic device 100.
The shutter control 314 may be used to listen for a user operation that triggers photographing. The electronic device 100 may detect a user operation on the shutter control 314, in response to which the electronic device 100 may save the image in the preview box 311 as a picture in the gallery application. In addition, the electronic device 100 may display a thumbnail of the saved image on the gallery shortcut 313. That is, the user may click the shutter control 314 to trigger photographing. The shutter control 314 may be a button or another form of control.
The camera flip control 315 may be used to monitor user operations that trigger flipping the camera. The electronic device 100 may detect a user operation, such as a click operation, acting on the camera flip control 315, in response to which the electronic device 100 may flip the camera, such as switching the rear camera to the front camera.
As shown in fig. 6A, when the electronic device 100 detects a user operation (e.g., a click operation) for selecting the dual-view photographing mode option 312D, the electronic device 100 may display the user interface 31 exemplarily shown in fig. 6B, in which images from the front camera and the rear camera are simultaneously displayed in the preview box 311. In some embodiments, the electronic device 100 may enter the dual-view photographing mode by default after starting the camera application. Without limitation, the electronic device 100 may also start the dual-view photographing mode in other manners; for example, the electronic device 100 may start the dual-view photographing mode according to a voice instruction of the user, which is not limited in this embodiment of the present application.
It can be seen that, compared with the ordinary photographing mode, images from a plurality of cameras are simultaneously displayed in the preview box 311 in the dual-view photographing mode. The preview box 311 includes two preview areas: a preview area 311A and a preview area 311B, where the preview area 311A displays an image from the rear camera and the preview area 311B displays an image from the front camera.
Taking a double-shot photograph as an example, the principle of the photographing method provided by the embodiment of the present application is described below with reference to fig. 7A-7B.
In the dual-view photographing mode, assume that the cameras participating in the dual-view shooting are the front camera and the rear camera. Since the electronic device 100 needs to display the images captured by the two cameras in the preview box 311, the images captured by the two cameras need to be cropped and stitched before being displayed: the electronic device 100 crops the image frame output by the rear camera to the proportion required by the preview area 311A, and crops the image frame output by the front camera to the proportion required by the preview area 311B. Then the electronic device 100 stitches the two cropped images. Finally, the electronic device 100 displays the stitched image.
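For illustration only, the crop-to-proportion step can be sketched in a few lines of Kotlin. This is not code from the patent: the Frame and CropRect types, the function name, and the aspect-ratio policy are assumptions made for the example.

```kotlin
// Minimal sketch of the per-camera crop described above. Frame models a raw
// camera output frame; CropRect is the region, in frame pixels, shown in a
// preview area. All names are illustrative assumptions.
data class Frame(val width: Int, val height: Int)
data class CropRect(val left: Int, val top: Int, val width: Int, val height: Int)

// Compute a crop of `frame` that matches the aspect ratio of a preview area
// of previewW x previewH pixels, centered on the frame's center point.
fun centerCrop(frame: Frame, previewW: Int, previewH: Int): CropRect {
    val targetAspect = previewW.toDouble() / previewH
    val frameAspect = frame.width.toDouble() / frame.height
    val (w, h) = if (frameAspect > targetAspect) {
        // frame is wider than the preview area: trim the left and right sides
        (frame.height * targetAspect).toInt() to frame.height
    } else {
        // frame is taller than the preview area: trim the top and bottom
        frame.width to (frame.width / targetAspect).toInt()
    }
    return CropRect((frame.width - w) / 2, (frame.height - h) / 2, w, h)
}
```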
The shooting method provided in this embodiment of the application can adjust the position at which the electronic device 100 crops the image, so that the image displayed in the preview area 311A or 311B changes and presents a better composition effect.
The principle of the shooting method is described below in conjunction with two cases:
Case 1: when the tilt angle between the electronic device 100 and the horizontal plane is smaller than 90°, the main body of the photographic subject in the image captured by the rear camera may be slightly close to the lower half of the image.
In this case, the image captured by the electronic device 100 and the stitched image may refer to the schematic diagram shown in fig. 7A.
As shown in fig. 7A, image 1 is an image captured by the front camera, and image 2 is an image captured by the rear camera. Taking image 1 as an example, the viewing range of the front camera is area 1, where image 1 is located, and the cropping area of image 1 is area 2. The electronic device 100 crops image 1 to obtain image 1a within the cropping area. The cropping area is centered on the center point O1 of image 1, and its proportion and size are the same as those of the preview area 311B; that is, the cropping area is the region enclosed by the bold solid-line box in image 1. Image 1a is displayed in the preview area 311B, i.e., image 1a is the preview image in the preview area 311B. Similarly, the electronic device 100 crops image 2 to obtain image 2a within the cropping area. The cropping area is centered on the center point O2 of image 2, and its proportion and size are the same as those of the preview area 311A; that is, the cropping area is the region enclosed by the bold solid-line box in image 2. Image 2a is displayed in the preview area 311A, that is, image 2a is the preview image in the preview area 311A. Finally, the electronic device 100 stitches image 1a and image 2a to obtain image 3.
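The splice step that produces image 3 can be sketched in the same spirit; the vertical stacking mirrors the layout of the preview areas in fig. 6B. The android.graphics calls are standard, but the equal-width requirement and the ARGB_8888 format are simplifying assumptions.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas

// Sketch of the splice step: stack two cropped previews vertically into one
// bitmap, as when image 1a and image 2a are combined into image 3.
fun stitchVertically(top: Bitmap, bottom: Bitmap): Bitmap {
    require(top.width == bottom.width) { "sketch assumes equal widths" }
    val out = Bitmap.createBitmap(top.width, top.height + bottom.height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(out)
    canvas.drawBitmap(top, 0f, 0f, null)                      // upper preview area
    canvas.drawBitmap(bottom, 0f, top.height.toFloat(), null) // lower preview area
    return out
}
```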
It can be seen that the electronic device 100 crops the images captured by the front and rear cameras in a center cropping manner, resulting in the main body of the scene in the image captured by the rear camera not being highlighted in image 3.
When the electronic device 100 detects a sliding operation (e.g., a slide-up operation) by the user on the preview area where image 2 is located, the electronic device 100 moves the cropping area in image 2 slightly downward, and its center point changes from point O2 to point O2'. The adjusted cropping area is the region enclosed by the dashed-line box in image 2 and may contain more of the scene's main body. Finally, the electronic device 100 stitches the images cropped according to the adjusted cropping area to obtain image 3'.
Comparing image 3 with image 3', it can be seen that, owing to the influence of the shooting angle, the main body of the scene captured by the rear camera is not fully presented in image 3. By moving the position of the cropping area, the influence of the shooting angle on the camera's framing is offset: image 3' captures the scene's main body while the self-portrait effect is preserved, achieving a better composition effect.
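In code, the case-1 adjustment amounts to translating the crop rectangle downward and clamping it inside the frame. A minimal sketch, reusing the Frame and CropRect types from the earlier example; the shift fraction is an assumed tuning parameter, not a value stated here.

```kotlin
// Sketch of the case-1 adjustment: move the rear camera's cropping area down
// by a fraction of the frame height so that more of the scene's main body
// enters the preview, without letting the crop leave the frame.
fun shiftCropDown(frame: Frame, crop: CropRect, fraction: Double): CropRect {
    val shift = (frame.height * fraction).toInt()
    val maxTop = frame.height - crop.height      // lowest admissible top edge
    return crop.copy(top = (crop.top + shift).coerceIn(0, maxTop))
}
```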
Case 2: when the tilt angle between the electronic device 100 and the horizontal plane is greater than 90°, the main body of the photographic subject in the image captured by the front camera may be slightly close to the lower half of the image.
In this case, the image captured by the electronic device 100 and the stitched image may refer to the schematic diagram shown in fig. 7B.
As shown in fig. 7B, image 4 is an image captured by the front camera, and image 5 is an image captured by the rear camera. The electronic device 100 crops image 4 to obtain image 3a within the cropping area. The cropping area is centered on the center point O3 of image 4, and its proportion and size are the same as those of the preview area 311B; that is, the cropping area is the region enclosed by the bold solid-line box in image 4. Image 3a is displayed in the preview area 311B, that is, image 3a is the preview image in the preview area 311B. Similarly, the electronic device 100 crops image 5 to obtain image 4a within the cropping area. The cropping area is centered on the center point O4 of image 5, and its proportion and size are the same as those of the preview area 311A; that is, the cropping area is the region enclosed by the bold solid-line box in image 5. Image 4a is displayed in the preview area 311A, that is, image 4a is the preview image in the preview area 311A. Finally, the electronic device 100 stitches image 3a and image 4a to obtain image 6.
It can be seen that the electronic device 100 crops the images captured by the front camera and the rear camera in a center-cropping manner, with the result that, in image 6, the face of the person captured by the front camera is not centered.
When the electronic device 100 detects a sliding operation (e.g., a slide-up operation) performed by the user on the preview area where image 4 is located, the electronic device 100 moves the cropping area in image 4 slightly downward, and its center point changes from point O3 to point O3'. The adjusted cropping area is the region enclosed by the dashed-line box in image 4, and in the adjusted cropping area the face of the person may be located at the center. Finally, the electronic device 100 stitches the images cropped according to the adjusted cropping area to obtain image 6'.
Comparing image 6 with image 6', it can be seen that, owing to the influence of the shooting angle, the face of the person in image 6 is not located at the center of image 6. By moving the position of the cropping area, the influence of the shooting angle on the camera's framing is offset: image 6' centers the main content and highlights the facial details of the self-portrait, achieving a better composition effect.
Some user interfaces provided by embodiments of the present application are described below in conjunction with fig. 8A-8E and 9A-9E.
In a dual-view shooting scenario, when the electronic device 100 detects that the tilt angle between itself and the horizontal plane is smaller than 90°, the electronic device 100 determines that, in the image captured by the rear camera, the main body of the photographic subject is located in the lower half of the image.
Specifically, when the electronic device 100 detects that its tilt angle with respect to the horizontal plane is between 60° and 85°, as shown in fig. 8A, and remains there for more than 3 seconds, the electronic device 100 may display the user interface 31 shown in fig. 8B.
As shown in fig. 8B, a first prompt icon 316 appears in the preview area 311A of the user interface 31. The first prompt icon 316 prompts the user that the preview area 311A can detect a slide-up operation on the area; in response to such an operation, the electronic device 100 may move the image displayed in the area and adjust the composition effect of the image in the area.
In one possible implementation, the first prompt icon 316 may automatically disappear after a period of time (e.g., 3 seconds). Further, if the electronic device 100 never detects a touch operation by the user on the area where the first prompt icon 316 is located, and the tilt angle of the electronic device 100 remains between 60° and 85°, the first prompt icon 316 may appear repeatedly, for example, 3 times.
In one possible implementation, the first prompt icon 316 may disappear when the electronic device 100 detects a touch operation (e.g., a sliding operation) performed by the user on the area where the first prompt icon 316 is located. Further, after the first prompt icon 316 disappears, it may not appear again during the current dual-view photographing session.
As shown in fig. 8C, when the preview area 311A detects a touch operation (e.g., a pressing operation) by the user, the first prompt icon 316 is updated to the viewfinder presentation window 317. The viewfinder presentation window 317 displays the relationship between the image displayed in the area currently touched by the user (the preview area 311A) and the image captured by the camera corresponding to the area (the rear camera). That is, the viewfinder presentation window 317 can display the positional relationship between the original image captured by the camera and the cropping area: the electronic device 100 may display the original image captured by the camera in the viewfinder presentation window 317, and display within it the position of the cropping area in the original image. In this way, the electronic device 100 can present the dynamic change of the cropping area to the user, making it easy for the user to perceive the relationship between the image displayed in the user interface and the image captured by the camera.
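Rendered as code, the viewfinder presentation window is a small overlay: the full frame's outline is scaled into the window and the current cropping area is drawn inside it. A sketch using the standard android.graphics drawing API and the Frame/CropRect types from the earlier sketches; the colors, stroke widths, and window geometry are arbitrary choices, not values from the patent.

```kotlin
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.RectF

// Sketch of the viewfinder presentation window: draw the outline of the full
// camera frame scaled into a windowW x windowH overlay, then draw the current
// cropping area inside it so the user can see which part is previewed.
fun drawMiniMap(canvas: Canvas, frame: Frame, crop: CropRect, windowW: Int, windowH: Int) {
    val scale = minOf(windowW.toFloat() / frame.width, windowH.toFloat() / frame.height)
    val frameOutline = Paint().apply { style = Paint.Style.STROKE; color = Color.WHITE }
    canvas.drawRect(RectF(0f, 0f, frame.width * scale, frame.height * scale), frameOutline)
    val cropOutline = Paint().apply {
        style = Paint.Style.STROKE; color = Color.YELLOW; strokeWidth = 3f
    }
    canvas.drawRect(
        RectF(
            crop.left * scale, crop.top * scale,
            (crop.left + crop.width) * scale, (crop.top + crop.height) * scale
        ),
        cropOutline
    )
}
```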
It is understood that the viewfinder presentation window 317 may be located at the upper left corner of the display screen or the upper right corner of the display screen, which is not limited in this application.
In one possible implementation, the electronic device 100 may receive and move the display position of the viewfinder presentation window 317 in response to an input operation (e.g., a press-and-drag) applied to the viewfinder presentation window 317 by a user.
As can be seen from the viewfinder presentation window 317 shown in fig. 8C, when the electronic device 100 enters the "dual-view photographing mode" for the first time, the image in the preview area 311A is cropped from the image captured by the camera, and the center of the cropping area coincides with the center point of the image originally captured by the camera. In addition, as can be seen from the viewfinder presentation window 317, in the image presented in the preview area 311A, the subject's main body is located in the lower half of the image captured by the camera.
As shown in fig. 8D, the electronic apparatus 100 may detect a user's slide-up operation on the preview area 311A, and in response to the operation, move the image in the preview area 311A upward. At the same time, the cropped area displayed in the viewfinder display window 317 moves downward relative to the image originally captured by the camera.
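The gesture handling can be sketched as follows: the vertical finger delta in the preview is mapped, with its sign flipped, onto the crop rectangle in the full frame. The scaling between preview pixels and crop pixels is an assumption; only the direction inversion follows from the passage above.

```kotlin
// Sketch of the slide mapping: an upward swipe (negative dy, in preview
// pixels) moves the displayed image up, which corresponds to moving the
// cropping area down in the full frame, and vice versa.
fun onVerticalSwipe(frame: Frame, crop: CropRect, dy: Float, previewHeightPx: Int): CropRect {
    val shift = (-dy * crop.height / previewHeightPx).toInt() // sign flip: finger up -> crop down
    val maxTop = frame.height - crop.height
    return crop.copy(top = (crop.top + shift).coerceIn(0, maxTop))
}
```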
In one possible implementation, the electronic device 100 may detect a sliding operation by the user on the cropping area shown in the viewfinder presentation window 317, and in response to the operation, change the positional relationship between the original image and the cropping area in the viewfinder presentation window 317 while changing the image displayed in the preview area 311A.
As shown in fig. 8E, when the electronic device 100 no longer detects a touch operation by the user on the preview box 311, the viewfinder presentation window 317 disappears. Compared with the preview area 311A shown in fig. 8C, the preview area 311A shown in fig. 8E displays more of the subject's main body, and the composition effect of the image is better.
In some embodiments, the electronic device 100 may not display the prompt information; that is, the electronic device 100 may only display the user interfaces shown in figs. 8C-8E described above.
In some embodiments, the electronic device 100 may adjust the composition effect of the image directly according to its own tilt angle, without adjusting the image displayed in the preview area according to an operation of the user. Specifically, after entering the dual-view photographing mode, when detecting that its own tilt angle is between 60° and 85°, as shown in fig. 8A, and remains there for more than 3 seconds, the electronic device 100 may automatically adjust the image in the preview area 311A (for example, the cropping area moves downward by a distance, which may be 10% of the longitudinal length of the image originally captured by the camera) and display the user interface shown in fig. 8E. Further, before automatically adjusting the image in the preview area 311A, the electronic device 100 may display prompt information asking the user whether to allow the electronic device 100 to automatically adjust the composition effect, and upon receiving an operation indicating that the user allows the adjustment, display the user interface shown in fig. 8E.
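This automatic variant can be sketched as a small state machine. The 60°-85° band, the 3-second dwell, and the 10% shift come from the passage above; the class shape, the sampling interface, and the fire-once policy are assumptions.

```kotlin
// Sketch of the automatic adjustment: once the tilt has stayed inside the
// trigger band for 3 s, nudge the crop down by 10% of the frame height,
// exactly once per entry into the band.
class AutoComposer(private val frame: Frame) {
    private var inRangeSince: Long? = null
    private var adjusted = false

    fun onTiltSample(tiltDeg: Double, nowMs: Long, crop: CropRect): CropRect {
        if (tiltDeg !in 60.0..85.0) {             // left the band: reset
            inRangeSince = null; adjusted = false
            return crop
        }
        val since = inRangeSince ?: nowMs.also { inRangeSince = it }
        if (adjusted || nowMs - since < 3_000) return crop
        adjusted = true
        val shift = (frame.height * 0.10).toInt() // 10% of the frame height
        val maxTop = frame.height - crop.height
        return crop.copy(top = (crop.top + shift).coerceIn(0, maxTop))
    }
}
```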
For the relationship between the image displayed in the preview box and the image captured by the camera, and in particular for the change process of the cropping area, refer to the schematic diagram shown in fig. 7A.
In a dual-view shooting scenario, when the electronic device 100 detects that the tilt angle between itself and the horizontal plane is greater than 90°, the electronic device 100 determines that, in the image captured by the front camera, the main body of the photographic subject is located in the lower half of the image.
Specifically, when the electronic device 100 detects that its tilt angle with respect to the horizontal plane is between 95° and 120°, as shown in fig. 9A, and remains there for more than 3 seconds, the electronic device 100 may display the user interface 31 shown in fig. 9B.
As shown in fig. 9B, a first prompt icon 316 appears in the preview area 311B of the user interface 31. The first prompt icon 316 prompts the user that the preview area 311B can detect a slide-up operation on the area; in response to such an operation, the electronic device 100 may move the image displayed in the area and adjust the composition effect in the area.
As shown in fig. 9C, when the preview area 311B detects a touch operation (e.g., a pressing operation) by the user, the first prompt icon 316 is updated to the viewfinder presentation window 317, which displays the relationship between the image displayed in the area currently touched by the user (the preview area 311B) and the image captured by the camera corresponding to the area (the front camera).
As shown in fig. 9D, the electronic apparatus 100 may detect a slide-up operation by the user on the preview area 311B, and in response to the operation, move the image in the preview area 311B upward, and at the same time, move the cropped area displayed in the viewfinder display window 317 downward with respect to the image originally captured by the camera.
As shown in fig. 9E, when the electronic device 100 no longer detects a touch operation by the user on the preview box 311, the viewfinder presentation window 317 disappears. Compared with the preview area 311B shown in fig. 9C, the preview area 311B shown in fig. 9E highlights the person's face, and the composition effect of the image is better.
For a detailed description of the viewfinder presentation window 317 in the user interface 31, reference may be made to the description in fig. 8C, which is not repeated here.
In some embodiments, the electronic device 100 may not display the prompt information; that is, the electronic device 100 may only display the user interfaces shown in figs. 9C-9E described above.
In some embodiments, the electronic device 100 may adjust the composition effect of the image directly according to its own tilt angle, without adjusting the image displayed in the preview area according to an operation of the user. Specifically, after entering the dual-view photographing mode, when detecting that its own tilt angle is between 95° and 120°, as shown in fig. 9A, and remains there for more than 3 seconds, the electronic device 100 may automatically adjust the image in the preview area 311B (for example, the cropping area moves downward by a distance, which may be 10% of the longitudinal length of the image originally captured by the camera) and display the user interface shown in fig. 9E. Further, before automatically adjusting the image in the preview area 311B, the electronic device 100 may display prompt information asking the user whether to allow the electronic device 100 to automatically adjust the composition effect, and upon receiving an operation indicating that the user allows the adjustment, display the user interface shown in fig. 9E.
It can be understood that, when the electronic device 100 displays the prompt information in a specific preview area according to the tilt angle, this application does not limit whether the other preview areas can also adjust their composition effect according to the user's sliding operation.
For the relationship between the image displayed in the preview box and the image captured by the camera, and in particular for the change process of the cropping area of the electronic device 100, refer to the schematic diagram shown in fig. 7B.
The overall flow of the shooting method in the embodiment of the present application is described below with reference to fig. 10.
As shown in fig. 10, the method includes:
s101, the electronic device 100 starts a camera application.
For example, the electronic device 100 may detect a touch operation (e.g., a click operation on the camera application 215) acting on the camera application 215 as shown in fig. 5 and launch the camera application in response to the operation.
S102, the electronic device 100 detects a user operation of selecting the "multi-view shooting mode".
Illustratively, the user operation may be a touch operation (e.g., a click operation) on the dual shot mode 312D shown in fig. 6A. The user operation may be other types of user operations such as a voice command.
Not limited to user selection, the electronic device 100 may default to the "two-shot mode" after the camera application is launched.
S103, the electronic device 100 starts N cameras, where N is a positive integer.
Specifically, the electronic device may be provided with M cameras, where M ≥ 2, M ≥ N, and M is a positive integer. The N cameras may be a combination of a front camera and a rear camera. The N cameras may also be a combination of any plurality of wide-angle cameras, ultra-wide-angle cameras, telephoto cameras, or front cameras. This application does not limit the camera combination of the N cameras.
The N cameras may be selected by the electronic device by default; for example, the electronic device turns on two cameras, the front camera and the rear camera, by default. The N cameras may also be selected by the user; for example, the user may select which cameras to turn on in the "more" mode.
S104, the electronic device 100 acquires images through the N cameras.
For example, when the N cameras are a front camera and a rear camera, the image captured by the front camera and the image captured by the rear camera may respectively refer to image 1 and image 2 shown in fig. 7A, or may respectively refer to image 4 and image 5 shown in fig. 7B.
S105, the electronic device 100 displays a user interface, where the user interface includes N preview areas, and the partial images acquired by the N cameras may be respectively displayed in the N preview areas. The N preview areas comprise a first area, and a first preview image is displayed in the first area. The first preview image is obtained by cropping all the images acquired by the first camera.
The images displayed in each of the N preview areas may be referred to as preview images. The preview image displayed in one preview area can be obtained by cutting all images collected by the camera corresponding to the area.
Taking the user interface shown in fig. 6B as an example, the user interface includes a preview area 311A and a preview area 311B; the preview image displayed in the preview area 311A may be obtained by cropping all the images captured by the rear camera, and the preview image displayed in the preview area 311B may be obtained by cropping all the images captured by the front camera. In this case N = 2, and the N cameras are the rear camera and the front camera. Specifically, the center position of the preview image displayed in the preview area 311A may coincide with the center position of all the images captured by the rear camera, and the center position of the preview image displayed in the preview area 311B may coincide with the center position of all the images captured by the front camera. In this case, the preview images displayed in the preview areas 311A and 311B are obtained by center cropping. The first area may refer to the preview area 311A, the first preview image may refer to the image displayed in the preview area 311A, and the first camera may refer to the rear camera. Alternatively, the first area may refer to the preview area 311B, the first preview image may refer to the image displayed in the preview area 311B, and the first camera may refer to the front camera.
The layout of the preview area 311A and the preview area 311B in the user interface may be various, such as a picture-in-picture manner, without being limited to the vertical split screen manner shown in fig. 6B.
S106, the electronic device 100 obtains an inclination angle of the electronic device 100 with respect to a horizontal plane.
The inclination angle of the electronic device 100 to the horizontal plane refers to an included angle between a plane on which the display screen of the electronic device 100 is located and the horizontal plane.
Specifically, the electronic device 100 may detect its tilt angle with respect to the horizontal plane through a sensor such as a gyroscope sensor or an acceleration sensor.
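A sketch of the angle measurement, using the standard Android SensorEventListener callback: the gravity vector reported by the accelerometer gives the angle between the screen normal (the device z axis) and the vertical. Treating the measured acceleration as pure gravity assumes the device is held roughly still; the 0°-180° convention below is chosen to match the ranges used in this description.

```kotlin
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import kotlin.math.acos
import kotlin.math.sqrt

// Sketch: derive the display-to-horizontal tilt from the accelerometer.
// 0 deg = screen flat facing up, 90 deg = screen vertical, > 90 deg = screen
// leaning toward the ground.
class TiltListener(private val onTilt: (Double) -> Unit) : SensorEventListener {
    override fun onSensorChanged(event: SensorEvent?) {
        if (event == null || event.sensor.type != Sensor.TYPE_ACCELEROMETER) return
        val (x, y, z) = event.values              // device-axis acceleration
        val g = sqrt(x * x + y * y + z * z)
        if (g == 0f) return
        onTilt(Math.toDegrees(acos((z / g).toDouble())))
    }
    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) {}
}
```

The listener would be registered with SensorManager.registerListener in the usual way; a gyroscope-based estimate could also be fused in for stability, which the passage above permits.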
In some embodiments, the electronic device 100 may also obtain the tilt angle of the electronic device 100 from the vertical.
S107, the electronic device 100 displays prompt information in one or more preview areas in the user interface.
The prompt information is used to prompt the user, and the electronic device 100 may move the image displayed in the preview area and adjust the composition effect of the image in response to a touch operation (e.g., a sliding operation) performed by the user on the preview area where the prompt information is located.
In some embodiments, when the tilt angle between the electronic device 100 and the horizontal plane is smaller than 90°, the one or more preview areas may be the preview area corresponding to the rear camera, and the prompt information specifically indicates that the user can slide up to adjust the composition effect of the image. For example, referring to figs. 8A-8B, the tilt angle between the electronic device 100 and the horizontal plane may refer to the tilt angle shown in fig. 8A, the preview area may refer to the preview area 311A shown in fig. 8B, and the prompt information at this time may refer to the first prompt icon 316 shown in fig. 8B.
In other embodiments, when the tilt angle between the electronic device 100 and the horizontal plane is greater than 90°, the one or more preview areas may be the preview area corresponding to the front camera, and the prompt information specifically indicates that the user can slide up to adjust the composition effect of the image. For example, referring to figs. 9A-9B, the tilt angle between the electronic device 100 and the horizontal plane may refer to the tilt angle shown in fig. 9A, the preview area may refer to the preview area 311B shown in fig. 9B, and the prompt information at this time may refer to the first prompt icon 316 shown in fig. 9B.
The timing at which the prompt information disappears includes, but is not limited to, the following cases:
1) Automatically disappear after a period of time
For example, the reminder may automatically disappear after appearing for 3 seconds.
Further, if the electronic apparatus 100 has not performed the following S108 and the tilt angle of the electronic apparatus 100 is always within a specific numerical range (e.g., 60 ° -85 ° or 95 ° -120 °), the prompt message may repeatedly appear, for example, 3 times.
2) Automatically disappearing when the electronic apparatus 100 executes S108
The electronic apparatus 100 may turn off the prompt when it is detected that the user adjusts the composition effect of the specific preview area according to the prompt.
3) The user manually turns off the prompt information
Specifically, the prompt message may further include an icon that is operable by the user, for example, a close icon, and the electronic device 100 may detect a touch operation performed by the user on the icon and close the prompt message in response to the touch operation.
It can be understood that S106 and S107 are optional steps. For example, the electronic device 100 may perform neither S106 nor S107; in this case, the electronic device 100 does not detect the tilt angle or display prompt information according to it. Alternatively, only S107 may be optional, and the electronic device 100 may determine whether to perform S107 according to S106. Specifically, the electronic device 100 may perform S107 when it detects that the tilt angle is within a specific numerical range (e.g., 60°-85° or 95°-120°), or, further, when the tilt angle stays within a specific numerical range for a certain period of time (e.g., 3 seconds) or more. This embodiment of the application does not limit this.
In this embodiment, the prompt information may also refer to first prompt information.
S108, the electronic device 100 detects a first user operation of a first area in the user interface.
Taking fig. 8C-8D as an example, the first region may refer to the preview region 311A, and in this case, the first preview image may refer to the preview image displayed in the preview region 311A shown in fig. 8C, the first camera may refer to a rear camera, and the first user operation may refer to a sliding operation in the preview region 311A, for example, a sliding-up operation shown in fig. 8D. The first user operation may be another type of user operation such as a voice instruction to the preview area 311A.
Taking figs. 9C-9D as an example, the first area may refer to the preview area 311B; in this case, the first preview image may refer to the preview image displayed in the preview area 311B shown in fig. 9C, the first camera may refer to the front camera, and the first user operation may refer to a sliding operation in the preview area 311B, for example, the slide-up operation shown in fig. 9D. The first user operation may also be another type of user operation, such as a voice instruction directed at the preview area 311B.
S109, the electronic device 100 displays the second preview image in the first area and no longer displays the first preview image. The second preview image is obtained by cropping all the images acquired by the first camera, and in all the images acquired by the first camera, the position of the second preview image is different from that of the first preview image.
Specifically, the first preview image may be obtained by cropping all the images captured by the first camera according to the first cropping area, and the second preview image may be obtained by cropping all the images captured by the first camera according to the second cropping area.
In some embodiments, the first cropping area and the second cropping area may be the same size, except that their center positions differ: the center position of the first cropping area coincides with the center position of the area where all the images captured by the first camera are located, and the center position of the second cropping area is located below the center position of the first cropping area.
Taking the user interfaces shown in figs. 8C-8E as an example, when a slide-up operation in the preview area 311A is detected, the image in the preview area 311A may move upward; the first preview image may be the image in the preview area 311A shown in fig. 8C, and the second preview image may be the image in the preview area 311A shown in fig. 8E. Comparing the first preview image in fig. 8C with the second preview image in fig. 8E, the center position of the second preview image is offset from that of the first preview image, and the second preview image is closer to the lower boundary of all the images captured by the first camera. In this way, the user can change, through a sliding operation, which content of the image captured by the rear camera is displayed in the preview area 311A.
Taking the user interfaces shown in figs. 9C-9E as an example, when a slide-up operation in the preview area 311B is detected, the image in the preview area 311B may move upward; the first preview image may be the image in the preview area 311B shown in fig. 9C, and the second preview image may be the image in the preview area 311B shown in fig. 9E. Comparing the first preview image in fig. 9C with the second preview image in fig. 9E, the center position of the second preview image is offset from that of the first preview image, and the second preview image is closer to the lower boundary of all the images captured by the first camera. In this way, the user can change, through a sliding operation, which content of the image captured by the front camera is displayed in the preview area 311B.
In other embodiments, the center positions of the first cropping area and the second cropping area are the same, but the two cropping areas differ in size. The first user operation in this case may be an operation of enlarging or reducing the field of view. For example, when the second cropping area is larger than the first cropping area and the two share the same center position, the second preview image contains more of the image content captured by the camera than the first preview image; the user thus obtains more picture information, and the overall content of the object shot by one camera is highlighted without changing the images displayed for the other cameras. Conversely, when the second cropping area is smaller than the first cropping area and the two share the same center position, the second preview image contains less of the image content captured by the camera than the first preview image; in this way, the detail of the object shot by one camera can be highlighted, again without changing the images displayed for the other cameras.
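A sketch of this field-of-view adjustment, again reusing the earlier types: the crop rectangle is scaled about its own center while the aspect ratio is preserved. The clamping near the frame border is an assumed policy and may displace the center slightly when the crop touches an edge.

```kotlin
// Sketch of the size-only adjustment: factor > 1 enlarges the cropping area
// (wider view, more content); factor < 1 shrinks it (detail view).
fun scaleCropAboutCenter(frame: Frame, crop: CropRect, factor: Double): CropRect {
    val cx = crop.left + crop.width / 2
    val cy = crop.top + crop.height / 2
    val w = (crop.width * factor).toInt().coerceIn(1, frame.width)
    val h = (crop.height * factor).toInt().coerceIn(1, frame.height)
    val left = (cx - w / 2).coerceIn(0, frame.width - w)
    val top = (cy - h / 2).coerceIn(0, frame.height - h)
    return CropRect(left, top, w, h)
}
```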
In still other embodiments, the first cropping area and the second cropping area differ both in size and in center position. The first user operation in this case may combine moving the cropping area with enlarging or reducing the field of view. In this way, the user can decide, according to his or her own needs, which part of all the images captured by one camera is displayed in the user interface as the preview image, freely adjusting the composition effect of the image without changing the images displayed for the other cameras.
In some embodiments, the first area may refer to a preview area where the prompt information is displayed as described above.
In some embodiments, when the tilt angle between the electronic device 100 and the horizontal plane is smaller than 90°, the first area may be the preview area corresponding to the rear camera; when the tilt angle is greater than 90°, the first area may be the preview area corresponding to the front camera. Generally, when the tilt angle is smaller than 90°, the composition of the image in the preview area corresponding to the rear camera is poor, and when the tilt angle is greater than 90°, the composition of the image in the preview area corresponding to the front camera is poor. By displaying the prompt information in the corresponding preview area according to its tilt angle, the electronic device 100 can guide the user to adjust the composition effect correctly, improving the user experience.
It can be understood that S108 is an optional step; that is, the electronic device 100 may adjust the first preview image to the second preview image according to its own tilt angle. Specifically, when the electronic device 100 detects that its tilt angle is within a specific numerical range (e.g., 60°-85° or 95°-120°) and stays there for more than 3 seconds, the electronic device 100 automatically displays the second preview image in the first area and no longer displays the first preview image. Moreover, the larger the tilt angle of the electronic device 100, the farther the center position of the second preview image is from the center position of the first preview image, or in other words, the farther the second preview image is from the first preview image in all the images captured by the first camera. This simplifies the user's operations: the electronic device 100 can adjust the composition effect of the image according to its own tilt angle, improving the user experience. Further, after changing the image, the electronic device 100 may display prompt information indicating to the user that the composition effect has been automatically adjusted.
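The passage states only that the offset grows with the tilt angle; the linear form below, the 30° normalization span, the 15% ceiling, and the use of the deviation from the upright position (90°) as the driving quantity are all assumptions made for illustration.

```kotlin
import kotlin.math.abs

// Sketch of a tilt-to-offset mapping: the farther the device leans away from
// upright (90 deg), the farther the crop is shifted downward, up to a ceiling.
fun tiltToCropOffset(frame: Frame, tiltDeg: Double, maxFraction: Double = 0.15): Int {
    val t = (abs(tiltDeg - 90.0) / 30.0).coerceIn(0.0, 1.0)
    return (frame.height * maxFraction * t).toInt() // downward shift in frame pixels
}
```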
In addition, besides the first camera related to the tilt angle of the electronic device 100, the preview area corresponding to another camera of the electronic device 100 may also receive an operation (for example, a sliding operation) from the user and adjust the composition effect of the image in that preview area. Specifically, the electronic device 100 may further include a second camera, and the N areas further include a second area used to display part or all of the image captured by the second camera. The second area may detect a sliding operation of the user and, in response to the sliding operation, change a third preview image displayed in the second area into a fourth preview image, where the third preview image and the fourth preview image are both obtained by cropping all the images captured by the second camera, and in all the images captured by the second camera, the position of the third preview image is different from that of the fourth preview image.
For the content not mentioned in the method embodiment of fig. 10, reference may be made to the UI embodiment described above, and details are not repeated here.
The embodiments of the present application can be combined arbitrarily to achieve different technical effects.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the present application are generated in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another via wired (e.g., coaxial cable, fiber optic, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium includes: various media capable of storing program codes, such as ROM or RAM, magnetic or optical disks, etc.
In short, the above description is only an example of the technical solution of the present invention, and is not intended to limit the scope of the present invention. Any modifications, equivalents, improvements and the like made in accordance with the disclosure of the present invention should be considered as being included in the scope of the present invention.

Claims (16)

1. A shooting method, applied to an electronic device provided with M cameras, wherein M is greater than or equal to 2 and M is a positive integer, the method comprising:
the electronic equipment starts a first camera and a second camera in the M cameras;
the electronic equipment displays a preview interface, wherein the preview interface comprises a first area and a second area, the first area displays partial images acquired by the first camera, and the second area displays all or partial images acquired by the second camera;
the electronic equipment acquires the inclination angle of the electronic equipment;
the electronic equipment detects a first operation in the first area, and in response to the first operation, the electronic equipment changes a first preview image displayed in the first area into a second preview image, wherein the first preview image and the second preview image are both obtained by cutting out all images collected by the first camera, and the position of the second preview image is different from that of the first preview image in all images collected by the first camera;
when the inclination angle is larger than a preset angle, the first camera is a front camera;
when the inclination angle is smaller than the preset angle, the first camera is a rear camera.
2. The method according to claim 1, wherein the first preview image is obtained by the electronic device cropping all the images captured by the first camera according to a first cropping area, and the second preview image is obtained by the electronic device cropping all the images captured by the first camera according to a second cropping area, wherein a center position of the first cropping area coincides with a center position of an area where all the images captured by the first camera are located, and a center position of the second cropping area is located below the center position of the first cropping area.
3. The method of claim 2, wherein the first crop area is as large as the second crop area.
4. The method according to any one of claims 1-3, wherein the tilt angle of the electronic device comprises any one of: the tilt angle between the electronic device and the horizontal plane, and the tilt angle between the electronic device and the vertical plane.
5. The method according to any one of claims 1 to 4, wherein the predetermined angle is 90 degrees.
6. The method according to any one of claims 2 to 5, wherein the first operation comprises a sliding operation, and in all the images captured by the first camera, the direction in which the center position of the second cropping area points to the center position of the first cropping area is the same as the sliding direction of the sliding operation.
7. The method of any of claims 1-6, wherein before the electronic device detects the first operation in the first area, the method further comprises:
the electronic device displays first prompt information in the first area, wherein the first prompt information is used to prompt the user to change which range of the part or all of the image acquired by the first camera is displayed in the first area.
8. The method of any of claims 1-7, wherein the preview interface further comprises a first window, the first window being located within the first region; the method further comprises the following steps:
the electronic device displays, in real time in the first window, all the images acquired by the first camera;
when the electronic equipment displays the second preview image, the electronic equipment displays a second cutting area in the first window, wherein the second cutting area is an area of the second preview image in all images acquired by the first camera.
9. The method of any of claims 1-8, wherein after the electronic device changes the first preview image displayed in the first area to the second preview image, the method further comprises:
and the electronic equipment receives a second operation for photographing, responds to the second operation, and stores the image displayed in the preview interface as a picture, wherein the picture comprises the second preview image.
10. The method of any one of claims 1 to 9,
the electronic equipment receives a third operation for recording the video;
in response to the third operation, the electronic device starts recording a video and displays a shooting interface, wherein the shooting interface comprises the first area and the second area.
11. The method of claim 10,
the electronic device detects a fourth operation acting on a control for stopping the video recording;
and responding to the fourth operation, stopping recording the video by the electronic equipment, and generating a video file.
12. The method according to any one of claims 1-11, further comprising:
the electronic device displays a third preview image in the second area,
the electronic equipment receives a fifth operation acting on the second area;
in response to the fifth operation, the electronic device displays a fourth preview image in the second area;
the third preview image and the fourth preview image are obtained by cutting all images acquired by the second camera, and the position of the third preview image is different from the position of the fourth preview image in all the images acquired by the second camera.
13. An electronic device, comprising: a display screen, M cameras, a touch sensor, a memory, one or more processors, a plurality of applications, and one or more programs; m is not less than 2 and is a positive integer; wherein the one or more programs are stored in the memory; wherein the one or more processors, when executing the one or more programs, cause the electronic device to implement the method of any of claims 1-12.
14. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the computer program, causes the computer device to carry out the method according to any one of claims 1 to 12.
15. A computer program product comprising instructions for causing an electronic device to perform the method of any one of claims 1 to 12 when the computer program product is run on the electronic device.
16. A computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-12.
CN202110608969.7A 2021-06-01 2021-06-01 Shooting method, user interface and electronic equipment Active CN115442509B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110608969.7A CN115442509B (en) 2021-06-01 2021-06-01 Shooting method, user interface and electronic equipment

Publications (2)

Publication Number Publication Date
CN115442509A true CN115442509A (en) 2022-12-06
CN115442509B CN115442509B (en) 2023-10-13

Family

ID=84240581

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110608969.7A Active CN115442509B (en) 2021-06-01 2021-06-01 Shooting method, user interface and electronic equipment

Country Status (1)

Country Link
CN (1) CN115442509B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117119276A (en) * 2023-04-21 2023-11-24 荣耀终端有限公司 Underwater shooting method and electronic equipment
CN117714846A (en) * 2023-07-12 2024-03-15 荣耀终端有限公司 Control method and device for camera

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006311276A (en) * 2005-04-28 2006-11-09 Konica Minolta Photo Imaging Inc Picture photographing device
JP2009171428A (en) * 2008-01-18 2009-07-30 Nec Corp Control method and program for digital camera apparatus and electronic zoom
CN102323829A (en) * 2011-07-29 2012-01-18 青岛海信电器股份有限公司 Display screen visual angle regulating method and display device
US20150109507A1 (en) * 2013-01-22 2015-04-23 Huawei Device Co., Ltd. Image Presentation Method and Apparatus, and Terminal
CN105282441A (en) * 2015-09-29 2016-01-27 小米科技有限责任公司 Photographing method and device
CN105430269A (en) * 2015-12-10 2016-03-23 广东欧珀移动通信有限公司 Shooting method and apparatus applied to mobile terminal
CN107786812A (en) * 2017-10-31 2018-03-09 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer-readable recording medium
CN108462827A (en) * 2018-02-08 2018-08-28 北京金山软件有限公司 A kind of method and device obtaining image data
CN108900790A (en) * 2018-06-26 2018-11-27 努比亚技术有限公司 Method of video image processing, mobile terminal and computer readable storage medium
CN110072070A (en) * 2019-03-18 2019-07-30 华为技术有限公司 A kind of multichannel kinescope method and equipment
CN110225252A (en) * 2019-06-11 2019-09-10 Oppo广东移动通信有限公司 Camera control method and Related product
CN110673782A (en) * 2019-08-29 2020-01-10 华为技术有限公司 Control method applied to screen projection scene and related equipment
CN112714255A (en) * 2020-12-30 2021-04-27 维沃移动通信(杭州)有限公司 Shooting method, shooting device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN115442509B (en) 2023-10-13

Similar Documents

Publication Publication Date Title
WO2021093793A1 (en) Capturing method and electronic device
JP7450035B2 (en) Video shooting methods and electronic equipment
WO2022068537A1 (en) Image processing method and related apparatus
WO2021147482A1 (en) Telephoto photographing method and electronic device
WO2021213477A1 (en) Viewfinding method for multichannel video recording, graphic user interface, and electronic device
CN111526314B (en) Video shooting method and electronic equipment
US20230353862A1 (en) Image capture method, graphic user interface, and electronic device
CN114845059B (en) Shooting method and related equipment
CN115442509B (en) Shooting method, user interface and electronic equipment
WO2024041394A1 (en) Photographing method and related apparatus
US20230377306A1 (en) Video Shooting Method and Electronic Device
CN115484387A (en) Prompting method and electronic equipment
CN116055861B (en) Video editing method and electronic equipment
CN116055867B (en) Shooting method and electronic equipment
WO2023160224A9 (en) Photographing method and related device
WO2023231696A1 (en) Photographing method and related device
WO2022262453A1 (en) Abnormality prompting method and electronic device
CN115484392B (en) Video shooting method and electronic equipment
US20240064397A1 (en) Video Shooting Method and Electronic Device
US20240007736A1 (en) Photographing method and electronic device
CN115811656A (en) Video shooting method and electronic equipment
CN117221709A (en) Shooting method and related electronic equipment
CN117221708A (en) Shooting method and related electronic equipment
CN117459825A (en) Shooting method and electronic equipment
CN117221743A (en) Shooting method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant