WO2021213477A1 - Viewfinder method for multi-channel video recording, graphical user interface, and electronic device - Google Patents

Viewfinder method for multi-channel video recording, graphical user interface, and electronic device

Info

Publication number
WO2021213477A1
WO2021213477A1, PCT/CN2021/089075, CN2021089075W
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
camera
area
image
preview
Application number
PCT/CN2021/089075
Other languages
English (en)
French (fr)
Inventor
崔瀚涛 (Cui Hantao)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to US17/920,601 (US11832022B2)
Priority to BR112022021413A2
Priority to EP21792314.3A (EP4131926A4)
Publication of WO2021213477A1

Classifications

    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/62: Control of parameters via user interfaces
    • H04N5/76: Television signal recording
    • G06F3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06T7/20: Analysis of motion
    • H04N21/47217: End-user interface for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H04N23/45: Cameras or camera modules for generating image signals from two or more image sensors being of different type or operating in different modes
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/632: Graphical user interfaces [GUI] for displaying or modifying preview images prior to image capturing
    • H04N23/6812: Motion detection based on additional sensors, e.g. acceleration sensors
    • H04N23/683: Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • H04N23/69: Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N23/80: Camera processing pipelines; Components thereof
    • G06T2207/20132: Image cropping
    • G06T2207/30201: Face

Definitions

  • the present invention relates to the field of electronic technology, and in particular to a viewfinder method, a graphical user interface, and an electronic device applied to multi-channel video recording.
  • portable electronic devices, such as mobile phones and tablet computers, are generally equipped with multiple cameras, such as a front camera, a wide-angle camera, and a telephoto camera. More and more electronic devices can therefore support shooting with multiple cameras at the same time.
  • the purpose of this application is to provide a multi-channel video viewfinder method, a graphical user interface (GUI), and electronic equipment, which enable the user to adjust the framing of each working camera in the preview box during multi-channel shooting. The framing of each working camera in the preview box does not affect the others, avoiding the problem in which a change in the framing of one working camera in the preview box also changes the framing of the other working cameras in the preview box.
  • a multi-channel video viewfinder method is provided.
  • the method is applied to an electronic device with a display screen and M cameras, where M ≥ 2 and M is a positive integer.
  • the method includes: the electronic device turns on N cameras, where N ≤ M and N is a positive integer; the electronic device collects images through the N cameras; the electronic device displays a preview interface and part or all of the images collected by each of the N cameras; the preview interface includes N areas, and the part or all of the images collected by the N cameras are respectively displayed in the N areas; the electronic device detects a first user operation in a first area, where the first area is one of the N areas, a first preview image is displayed in the first area, and the first preview image is obtained by cropping all the images collected by the first camera; the electronic device displays a second preview image in the first area, where the second preview image is also obtained by cropping all the images collected by the first camera, and among all the images collected by the first camera, the position of the second preview image is different from the position of the first preview image; the electronic device detects a second user operation; the electronic device starts to record a video and displays a shooting interface, and the shooting interface includes the N areas.
  • implementation of the method provided in the first aspect allows the user to adjust the framing of each working camera in the preview box through user operations during the preview process of multi-channel video recording, so that the framing of each working camera in the preview box does not affect the others.
  • the first camera may be a rear camera or a front camera.
  • the center position of the first preview image may coincide with the center position of all the images collected by the first camera.
  • the first preview image is obtained through a center cropping method.
  • the size of the first preview image may be the same as the size of the first area at one-fold (1x) magnification of the first camera.
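The center cropping described above can be sketched as follows. This is an illustrative reconstruction, not code from the patent; the function name and the pixel sizes in the example are assumptions:

```python
def center_crop(frame_w, frame_h, area_w, area_h):
    """Return (left, top, width, height) of a crop centered in the full
    camera frame, sized to match the preview area at 1x magnification."""
    # Clamp so the crop never exceeds the captured frame.
    w = min(area_w, frame_w)
    h = min(area_h, frame_h)
    left = (frame_w - w) // 2
    top = (frame_h - h) // 2
    return left, top, w, h

# A hypothetical 4000x3000 sensor frame shown in a 1080x1440 preview area:
print(center_crop(4000, 3000, 1080, 1440))  # (1460, 780, 1080, 1440)
```

Because the crop center coincides with the frame center, this matches the "center cropping method" mentioned above.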
  • the first user operation includes a sliding operation, for example, a left swipe operation, a right swipe operation, and so on.
  • the direction in which the center position of the first preview image points to the center position of the second preview image is opposite to the sliding direction of the sliding operation. In this way, the user can change the viewing range presented by the first camera in the first area through a sliding operation.
  • the second preview image is closer to the right boundary of all images collected by the first camera than the first preview image.
  • the second preview image is closer to the left boundary of all the images collected by the first camera than the first preview image.
  • the center position of the first preview image coincides with the center position of all images collected by the first camera.
  • the electronic device can crop all the images collected by the first camera in a center cropping manner to obtain the first preview image.
  • the second preview image and the first preview image may be the same size. That is to say, before and after the user adjusts the camera's framing through the sliding operation, the electronic device does not change the size of the cropped area within all the images collected by the first camera.
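As a sketch of this sliding adjustment, the crop window can be shifted opposite to the swipe while its size stays fixed. The signature and clamping behavior here are assumptions for illustration, not the patent's implementation:

```python
def pan_crop(crop, frame_w, frame_h, dx, dy):
    """Shift a (left, top, w, h) crop window opposite to the swipe delta
    (dx, dy), keeping its size unchanged and clamping it to the frame."""
    left, top, w, h = crop
    # Swiping left (dx < 0) moves the crop toward the right boundary,
    # matching the rule that the crop moves opposite to the slide direction.
    new_left = max(0, min(frame_w - w, left - dx))
    new_top = max(0, min(frame_h - h, top - dy))
    return new_left, new_top, w, h
```

A left swipe on a centered crop, for example, moves the crop toward the right edge of the captured frame, without touching the crops of the other N-1 preview areas.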
  • before detecting the first user operation, the electronic device also detects a third user operation; the electronic device enlarges the first preview image and displays the enlarged first preview image in the first area.
  • the first user operation may be a sliding operation
  • the third user operation may be a two-finger zoom-in operation. In this way, the electronic device can individually adjust the viewing range presented by a certain camera in the preview interface in a zoom scene without affecting the viewing range presented by other cameras in the preview interface.
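The per-area pinch zoom can be modeled as shrinking only that area's crop window around its own center. This sketch assumes a simple digital-zoom model; the names are hypothetical:

```python
def zoom_crop(crop, frame_w, frame_h, zoom):
    """Shrink a (left, top, w, h) crop around its center by `zoom` (> 1
    zooms in); the crops of the other preview areas are untouched."""
    left, top, w, h = crop
    cx, cy = left + w / 2, top + h / 2
    nw, nh = max(1, round(w / zoom)), max(1, round(h / zoom))
    # Re-center the smaller crop and keep it inside the captured frame.
    nl = int(min(max(0, cx - nw / 2), frame_w - nw))
    nt = int(min(max(0, cy - nh / 2), frame_h - nh))
    return nl, nt, nw, nh
```

Displaying the smaller crop scaled up to the same preview area is what makes the image appear enlarged in that one area only.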
  • the second user operation is a user operation instructing to start recording a video, for example, a click operation on a shooting control.
  • the electronic device may also detect a fourth user operation in the first area of the shooting interface; in response, the electronic device displays a third preview image of the first camera in the first area of the shooting interface. The third preview image is obtained by cropping all the images collected by the first camera, and among all the images collected by the first camera, the position of the third preview image is different from the position of the second preview image.
  • the user can then adjust the viewfinder range presented by the camera in the shooting interface through user operations.
  • the fourth user operation may be a sliding operation.
  • when the electronic device detects the first user operation, if the posture of the electronic device has not changed, the electronic device displays the second preview image of the first camera in the first area; if the posture of the electronic device has changed when the first user operation is detected, the electronic device displays a fourth preview image of the first camera in the first area. The fourth preview image is obtained by cropping all the images collected by the first camera, and the center position of the fourth preview image coincides with the center position of all the images collected by the first camera.
  • when the posture of the electronic device has not changed, the electronic device adjusts the viewing range of the camera in the preview interface according to the first user operation.
  • when the posture of the electronic device has changed, the electronic device may not adjust the viewing range of the camera in the preview interface according to the first user operation detected at this time, so that the user can change the optical framing by adjusting the posture of the electronic device.
  • the electronic device can detect that all the images collected by the first camera include an image of a first face; the electronic device displays a fifth preview image in the first area, where the fifth preview image is obtained by cropping all the images collected by the first camera and includes the image of the first face; the electronic device detects that the position of the image of the first face in all the images collected by the first camera has changed; the electronic device displays a sixth preview image in the first area, where the sixth preview image is obtained by cropping all the images collected by the first camera and includes the image of the first face.
  • the multi-channel video framing method provided by the embodiment of the present application can also provide a face tracking function, so that a certain area in the preview interface always displays a preview image containing a human face.
  • the position of the image of the first human face in the sixth preview image is the same as the position of the image of the first human face in the fifth preview image.
  • the image of the first human face is in the central area of the fifth preview image.
  • the electronic device may also detect that all the images collected by the first camera include the image of the first face, and activate a second camera, where the viewing range of the second camera is larger than the viewing range of the first camera, and the first face is within the viewing range of the second camera; the electronic device displays a seventh preview image in the first area, and the seventh preview image is obtained by cropping all the images collected by the second camera.
  • the seventh preview image includes the image of the first face; the electronic device detects that the position of the image of the first face in all the images collected by the second camera has changed; The electronic device displays an eighth preview image in the first area, the eighth preview image is obtained by cropping all images collected by the second camera, and the eighth preview image includes an image of the first human face. In this way, the viewing range corresponding to a certain preview area can be expanded during face tracking.
  • the position of the image of the first human face in the seventh preview image is the same as the position of the image of the first human face in the eighth preview image.
  • the image of the first human face is in the central area of the seventh preview image.
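The face-tracking behavior described above amounts to re-placing a fixed-size crop each frame so the detected face stays centered. This is a sketch with an assumed face-box format, not the patent's implementation:

```python
def track_face_crop(face_box, frame_w, frame_h, crop_w, crop_h):
    """Given a detected face (x, y, w, h) in the full camera frame, place
    a crop of size (crop_w, crop_h) so the face sits at its center,
    clamped so the crop stays inside the captured frame."""
    fx, fy, fw, fh = face_box
    cx, cy = fx + fw / 2, fy + fh / 2  # face center
    left = int(min(max(0, cx - crop_w / 2), frame_w - crop_w))
    top = int(min(max(0, cy - crop_h / 2), frame_h - crop_h))
    return left, top, crop_w, crop_h
```

Recomputing the crop as the face moves keeps the face at the same position in the preview area, which is the relationship between the fifth/sixth (and seventh/eighth) preview images described above.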
  • the first camera is a front camera or a rear camera.
  • the face tracking function and the function of adjusting a camera's viewing range by sliding, provided by the embodiments of the present application, can be applied to both front and rear shooting scenes.
  • the electronic device may also detect a fifth user operation; the electronic device stops recording the video and generates a video file; the electronic device detects a sixth user operation on the video file; the electronic device displays a playback interface, and the playback interface includes the N areas.
  • the user can save the desired preview image after adjusting the preview image of each area according to his own needs, so that the user can obtain a more flexible and convenient video recording experience.
  • the fifth user operation is a user operation instructing to stop recording the video, for example, it may be a click operation on the shooting control.
  • an embodiment of the present application provides a multi-channel video viewfinder method, applied to an electronic device with a display screen and M cameras, where M ≥ 2 and M is a positive integer. The method includes: the electronic device turns on N cameras, where N ≤ M and N is a positive integer; the electronic device collects images through the N cameras; the electronic device displays a preview interface and part or all of the images collected by the N cameras, where the preview interface includes N areas and part or all of the images collected by each of the N cameras are respectively displayed in the N areas; the electronic device detects a seventh user operation in the first area; the electronic device detects that the posture of the electronic device has changed; the electronic device displays a ninth preview image in the first area, where the viewing range presented by the ninth preview image is the same as the viewing range presented by a tenth preview image, and the tenth preview image is the image displayed in the first area before the posture of the electronic device changed. The ninth preview image is obtained by cropping all the images collected by the first camera after the posture of the electronic device changed, and the tenth preview image is obtained by cropping all the images collected by the first camera before the posture of the electronic device changed; the electronic device detects an eighth user operation; the electronic device starts to record a video and displays a shooting interface, and the shooting interface includes the N areas.
  • the seventh user operation may be a user operation of selecting the first area, for example, a double-click operation, a long press operation, etc., acting on the first area.
  • with the method provided in the second aspect, changing the posture of the electronic device does not affect the viewing range of the selected preview area.
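Keeping a selected area's viewing range fixed while the device moves can be sketched as shifting the crop by the pixel displacement the scene underwent in the sensor frame (which in practice might be estimated from gyroscope data). The names and the translation-only motion model are assumptions for illustration:

```python
def compensate_crop(crop, frame_w, frame_h, shift_x, shift_y):
    """Move a (left, top, w, h) crop by the pixel shift the scene underwent
    in the sensor frame, so the crop keeps showing the same scene content.
    Returns the original crop if compensation would leave the frame."""
    left, top, w, h = crop
    nl, nt = left + shift_x, top + shift_y
    if 0 <= nl <= frame_w - w and 0 <= nt <= frame_h - h:
        return nl, nt, w, h
    return crop  # the scene content has left the captured frame
```

Compensation is only possible while the desired scene content remains inside the camera's captured frame, which is why a larger viewing range (e.g. a wide-angle camera) gives more room for this adjustment.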
  • the embodiments of the present application provide a viewfinder method for multi-channel photography.
  • the method is applied to an electronic device with a display screen and M cameras, where M ≥ 2 and M is a positive integer.
  • the method includes: the electronic device turns on N cameras, where N ≤ M and N is a positive integer; the electronic device collects images through the N cameras; the electronic device displays a preview interface and part or all of the images collected by the N cameras, where the preview interface includes N areas and part or all of the images collected by each of the N cameras are respectively displayed in the N areas; the electronic device detects a first user operation in a first area, where the first area is one of the N areas, a first preview image is displayed in the first area, and the first preview image is obtained by cropping all the images collected by the first camera; the electronic device displays a second preview image in the first area, where the second preview image is also obtained by cropping all the images collected by the first camera, and among all the images collected by the first camera, the position of the second preview image is different from the position of the first preview image.
  • implementation of the method provided in the third aspect allows the user to adjust the framing of each working camera in the preview box through user operations during the preview process of multi-channel photography, so that the framing of each working camera in the preview box does not affect the others.
  • an electronic device may include M cameras, a display screen, a touch sensor, a wireless communication module, a memory, and one or more processors. The one or more processors are configured to execute one or more computer programs stored in the memory, where M ≥ 2 and M is a positive integer.
  • N cameras are used to collect images
  • the display screen can be used to display a preview interface and part or all of the images collected by each of the N cameras, the preview interface includes N areas, and part or all of the images collected by each of the N cameras are respectively displayed in the N areas;
  • the touch sensor can be used to detect the first user operation in the first area, where the first area is one of the N areas, the first preview image is displayed in the first area, and the first preview image is obtained by cropping all the images collected by the first camera;
  • the display screen can be used to display, in response to the first user operation, a second preview image in the first area. The second preview image is also obtained by cropping all the images collected by the first camera, and among all the images collected by the first camera, the position of the second preview image is different from the position of the first preview image;
  • the touch sensor can also be used to detect a second user operation
  • the N cameras can be used to start recording video in response to the second user operation
  • the display screen can be used to display a shooting interface in response to the second user operation, and the shooting interface includes the N areas.
  • an electronic device is also provided, which may include an apparatus capable of implementing any possible implementation manner of the first aspect or of the second aspect.
  • a video recording device which has the function of realizing the behavior of the electronic device in the foregoing method.
  • the above-mentioned functions can be realized by hardware, or by hardware executing corresponding software.
  • the above-mentioned hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • a computer device is provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor executes the computer program to enable the computer device to implement any possible implementation manner of the first aspect or of the second aspect.
  • a computer program product containing instructions is provided; when the computer program product runs on an electronic device, the electronic device is caused to execute any possible implementation manner of the first aspect or of the second aspect.
  • a computer-readable storage medium including instructions is provided; when the instructions are executed on an electronic device, the electronic device is caused to execute any possible implementation manner of the first aspect or of the second aspect.
  • FIG. 1 is a schematic diagram of the structure of an electronic device provided by an embodiment
  • FIG. 2A is a schematic diagram of a user interface for application menus on an electronic device provided by an embodiment
  • FIG. 2B is a schematic diagram of a rear camera on an electronic device provided by an embodiment
  • FIGS. 3A-3D are schematic diagrams of dual-channel video recording scenes involved in this application
  • FIG. 4A is a schematic diagram of the working principle of dual-channel video recording
  • FIG. 4B is a schematic diagram of image cropping in existing dual-channel video recording
  • FIG. 5 is a schematic diagram of a multi-channel video recording scene
  • FIGS. 6A-6B and 6D-6E are schematic UI diagrams of adjusting the preview image displayed in each area during the preview process of multi-channel video recording according to an embodiment
  • FIGS. 6C and 6F are schematic diagrams of image cropping when adjusting the preview images displayed in each area during the preview process of multi-channel video recording according to an embodiment
  • FIGS. 7A-7B and 7D-7E are schematic UI diagrams of adjusting the preview image displayed in each area during the preview process of multi-channel video recording provided by another embodiment
  • FIGS. 7C and 7F are schematic diagrams of image cropping when adjusting the preview images displayed in each area during the preview process of multi-channel video recording according to another embodiment
  • FIGS. 8A and 8B are schematic diagrams of image cropping when adjusting the preview images displayed in each area during the preview process of multi-channel video recording according to another embodiment
  • FIGS. 9A-9C are schematic UI diagrams of adjusting the preview images displayed in each area during the preview process of multi-channel video recording provided by another embodiment
  • FIGS. 9D and 9E are schematic diagrams of image cropping when adjusting the preview images displayed in each area during the preview process of multi-channel video recording according to another embodiment
  • FIGS. 10A and 10B are schematic diagrams of a UI that prompts the user about the location of the preview image displayed in each area during the preview process of multi-channel video recording according to an embodiment
  • FIGS. 11A-11F are schematic UI diagrams of adjusting the preview images displayed in each area during the recording process of multi-channel video recording according to an embodiment
  • FIGS. 12A-12B are schematic UI diagrams of adjusting the preview image displayed in each area by moving the electronic device during the preview process of multi-channel video recording according to an embodiment
  • FIGS. 13A-13B are schematic UI diagrams of adjusting the preview images displayed in each area during the preview process of multi-channel photography provided by an embodiment
  • FIG. 15 is a schematic flowchart of a viewfinder method for multi-channel video recording according to an embodiment.
  • the present application provides a viewfinder method for multi-channel video recording, which can be applied to an electronic device including multiple cameras.
  • the electronic device can use multiple cameras to take photos or videos at the same time to obtain multiple images and richer picture information.
  • the electronic device can also support the user in adjusting the framing of each working camera in its corresponding preview area during multi-channel photography or video recording. The framing of each working camera in its corresponding preview area does not affect the others, and there is no problem in which a change in the framing of one working camera in its preview area also causes the framing of the other working cameras in their preview areas to change.
  • the viewing range (also called field of view, FOV) of a camera is determined by the design of the optical system of the camera. For example, a wide-angle camera has a larger viewing range.
  • the user can adjust the viewfinder of the camera by moving the electronic device.
  • the viewfinder of a camera in its corresponding preview area can be adjusted by a user operation (such as a left-right sliding operation) acting on the preview area.
  • the viewfinder of a camera in its corresponding preview area is the content displayed in the corresponding preview area.
  • a preview area is used to display part or all of the images from its corresponding camera.
  • the preview image displayed for a camera in its corresponding preview area is specifically the image within a cropped region of the image captured by that camera; that is, the preview image displayed in the preview area is obtained by cropping the image captured by the camera.
  • multi-channel shooting may include multi-channel video recording and multi-channel photography.
  • the electronic device can provide two multi-channel shooting modes: a multi-channel video recording mode and a multi-channel photographing mode.
  • the multi-channel video recording mode may mean that multiple cameras in the electronic device, such as a front camera and a rear camera, can record multiple channels of video simultaneously.
  • the display screen can simultaneously display multiple images from these multiple cameras on the same interface. These images can be displayed spliced together on the same interface, or displayed in a picture-in-picture manner. This display mode will be described in detail in subsequent embodiments.
  • the multiple images can be saved as multiple videos in a gallery (also called an album), or a composite video formed by splicing these multiple videos.
  • “recording” can also be referred to as “recording video”.
  • “recording” and “recording video” have the same meaning.
  • the multi-channel photographing mode may refer to that multiple cameras in the electronic device, such as a front camera and a rear camera, can take multiple pictures at the same time.
  • the display screen can simultaneously display multiple frames of images from the multiple cameras in the viewfinder frame (also known as the preview frame).
  • the multi-frame images can be displayed spliced together in the viewfinder frame, or displayed in a picture-in-picture manner.
  • the multi-frame images can be saved as multiple pictures in a gallery (also called an album), or a composite image formed by splicing the multiple frames of images.
  • the image from the camera displayed in the preview box is obtained by cropping the image collected by the camera.
  • for the cropping method, refer to the description of the subsequent embodiments.
  • "multi-channel camera mode" and "multi-channel video mode" are merely names used in the embodiments of this application; the meanings they represent have been recorded in the embodiments of this application, and the names themselves do not constitute any limitation on the embodiments.
  • the electronic device may be a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or a special camera (such as a single-lens reflex camera or a card camera), etc.
  • Fig. 1 exemplarily shows the structure of the electronic device.
  • the electronic device 100 may have multiple cameras 193, such as a front camera, a wide-angle camera, an ultra-wide-angle camera, a telephoto camera, and the like.
  • the electronic device 100 may also include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than those shown in the figure, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • in a multi-channel shooting scene, a processor 110 such as a controller or GPU can be used to combine the multiple frames of images captured by the multiple cameras 193 at the same time into the preview image displayed in the viewfinder frame, by stitching or partial superposition, so that the electronic device 100 can simultaneously display the images collected by the multiple cameras 193.
  • the processor 110, such as a controller or GPU, can also be used to perform anti-shake processing on the images collected by each camera 193 in a multi-channel shooting scene, and then combine the anti-shake-processed images corresponding to the multiple cameras 193.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • each antenna in the electronic device 100 can be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit the processed signals to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 may also receive a signal to be sent from the processor 110, perform frequency modulation, amplify, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, to save files such as music and videos in the external memory card.
  • the internal memory 121 may be used to store computer executable program code, and the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program (such as a sound playback function, an image playback function, etc.) required by at least one function, and the like.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the capacitive pressure sensor may include at least two parallel plates with conductive materials.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100.
  • in some embodiments, the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) can be determined through the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the electronic device 100 through reverse movement to achieve anti-shake.
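The compensation distance computed from the shake angle can be illustrated with the simple pinhole relation d = f·tan(θ); this is only a sketch of the idea under an idealized lens model, not the device's actual anti-shake algorithm, and the function name is an assumption:

```python
import math

def shake_compensation_mm(focal_length_mm, shake_angle_deg):
    """Lateral distance the lens module must move to counteract a shake
    of the given angle, using the idealized model d = f * tan(theta)."""
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))
```

For a given shake angle, a longer focal length requires a proportionally larger compensating movement, which is why telephoto shots are more sensitive to hand shake.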
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of a flip holster or flip cover, and set features such as automatic unlocking of the flip cover according to the detected opening or closing state.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device, and can be applied to applications such as horizontal/vertical screen switching and pedometers.
  • the distance sensor 180F is used to measure distance. The electronic device 100 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 can determine that there is no object near the electronic device 100.
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather case mode and pocket mode to automatically unlock and lock the screen.
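The proximity-based screen-off decision described above reduces to a simple threshold test on the reflected-light reading; a sketch (the threshold value and function name are illustrative assumptions, not values from the source):

```python
def should_turn_off_screen(in_call, reflected_ir_level, threshold=100):
    """Turn the screen off to save power when the user is in a call and
    enough reflected infrared light indicates the device is near the ear."""
    return in_call and reflected_ir_level >= threshold
```

A real driver would also debounce the reading over time to avoid flicker near the threshold.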
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device 100 reduces the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • the electronic device 100 when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid abnormal shutdown of the electronic device 100 due to low temperature.
  • the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
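The temperature processing strategy described above can be sketched as a small policy function; the threshold values and action names are assumptions for illustration, not figures from the source:

```python
def thermal_policy(temp_c, high=45.0, low=0.0):
    """Sketch of a temperature processing strategy: throttle the processor
    above a high threshold; heat the battery and boost its output voltage
    below a low threshold; otherwise operate normally."""
    if temp_c > high:
        return "reduce_processor_performance"
    if temp_c < low:
        return "heat_battery_and_boost_voltage"
    return "normal"
```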
  • the touch sensor 180K is also called a "touch panel".
  • the touch sensor 180K may be disposed on the display screen 194, and the touch screen is composed of the touch sensor 180K and the display screen 194, which is also called a “touch screen”.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 180M may also be provided in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can parse the voice signal based on the vibration signal of the vibrating bone block of the voice obtained by the bone conduction sensor 180M, and realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor 180M, and realize the heart rate detection function.
  • the button 190 includes a power-on button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the electronic device 100 may receive key input, and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • touch operations applied to different applications can correspond to different vibration feedback effects.
  • for touch operations acting on different areas of the display screen 194, the motor 191 can also produce different vibration feedback effects.
  • different application scenarios (for example, time reminders, receiving messages, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the electronic device 100 may support one or more SIM card interfaces.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the same SIM card interface 195 can insert multiple cards at the same time. The types of multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 may also be compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 100 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • the electronic device 100 can implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, light is transmitted to the photosensitive element of the camera through the lens, and the optical signal is converted into an electrical signal; the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and the ISP converts it into an image visible to the naked eye.
  • the ISP can also optimize the noise, brightness, and skin color of the image, and can also optimize parameters such as the exposure and color temperature of the shooting scene. The ISP is not limited to being integrated in the processor 110; it may also be provided in the camera 193.
  • the number of cameras 193 may be M, where M ≥ 2 and M is a positive integer.
  • the number of cameras opened by the electronic device 100 in multi-channel shooting may be N, where N ≤ M and N is a positive integer.
  • the camera activated by the electronic device 100 during multi-channel shooting may also be referred to as a working camera.
  • the camera 193 includes a lens and a photosensitive element (also referred to as an image sensor) for capturing still images or videos.
  • the object generates an optical image through the lens and is projected to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal, such as standard RGB, YUV and other format image signals.
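One common conversion behind the "standard RGB, YUV and other format image signals" mentioned above is RGB to YUV; a per-pixel sketch using the BT.601 analog coefficients (the device's actual output format and coefficients may differ):

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel to YUV using the BT.601 analog form:
    Y is luminance; U and V are blue- and red-difference chrominance."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v
```

Grey pixels (r = g = b) have zero chrominance, which is why YUV lets the chrominance channels be subsampled with little visible loss.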
  • the hardware configuration and physical location of the camera 193 may be different. Therefore, the size, range, content, or definition of the images collected by different cameras may be different.
  • the output size of the camera 193 can be different or the same.
  • the output size of a camera refers to the length and width of the image collected by the camera.
  • the length and width of the image can be measured by the number of pixels.
  • the output size of the camera can also be called image size, image size, pixel size, or image resolution.
  • Common camera output ratios can include 4:3, 16:9 or 3:2 and so on.
  • the output ratio refers to the approximate ratio of the number of pixels in the length and width of the image captured by the camera.
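The output ratio can be computed from an output size in pixels by reducing the width:height pair to lowest terms; a minimal sketch:

```python
from math import gcd

def output_ratio(width_px, height_px):
    """Reduce an output size in pixels to its aspect ratio,
    e.g. the common camera ratios 4:3, 16:9, or 3:2."""
    g = gcd(width_px, height_px)
    return width_px // g, height_px // g
```

For example, a 4000x3000 output size reduces to the 4:3 ratio.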
  • the camera 193 can correspond to the same focal length, or can correspond to different focal lengths.
  • the focal length may include, but is not limited to: a first focal length, which is less than a preset value 1 (for example, 20 mm); a second focal length, which is greater than or equal to the preset value 1 and less than or equal to a preset value 2 (for example, 50 mm); and a third focal length, which is greater than the preset value 2.
  • the camera corresponding to the first focal length can be referred to as an ultra-wide-angle camera
  • the camera corresponding to the second focal length can be referred to as a wide-angle camera
  • the camera corresponding to the third focal length can be referred to as a telephoto camera.
  • the larger the focal length of the camera, the smaller the field of view (FOV) of the camera.
  • the field of view refers to the range of angles that the optical system can image.
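The inverse relation between focal length and FOV stated above can be checked with the standard thin-lens formula FOV = 2·atan(d / 2f), where d is the sensor's image diagonal; the default diagonal below (43.27 mm, the full-frame diagonal) is only an illustrative assumption, since phone sensors are much smaller:

```python
import math

def diagonal_fov_deg(focal_length_mm, sensor_diagonal_mm=43.27):
    """Diagonal field of view in degrees: FOV = 2 * atan(d / (2 * f))."""
    return math.degrees(
        2 * math.atan(sensor_diagonal_mm / (2 * focal_length_mm))
    )
```

With this formula, doubling the focal length roughly halves the FOV for long lenses, matching the telephoto / wide-angle / ultra-wide-angle ordering above.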
  • the camera 193 can be installed on both sides of the electronic device.
  • a camera located on the same plane as the display screen 194 of the electronic device may be called a front camera, and a camera located on the plane of the back cover of the electronic device may be called a rear camera.
  • the front camera may be used to capture an image of the photographer himself facing the display screen 194, and the rear camera may be used to capture an image of the subject (such as a person, a landscape, etc.) facing the photographer.
  • the camera 193 may be used to collect depth data.
  • the camera 193 may have a time of flight (TOF) 3D sensing module or a structured light 3D sensing module for acquiring depth information.
  • the camera used to collect depth data can be a front camera or a rear camera.
  • video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can open or save pictures or videos in multiple encoding formats.
  • the electronic device 100 can implement a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor, which is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 100 may include one or more display screens 194.
  • the display screen 194 can display multi-channel images from the multiple cameras 193 through splicing or picture-in-picture, so that the multi-channel images from the multiple cameras 193 can be simultaneously presented to the user.
  • the processor 110 may synthesize multiple frames of images from multiple cameras 193.
  • the video encoder in the processor 110 may encode the synthesized video stream data to generate a video file.
  • each frame of image in the video file may contain multiple images from multiple cameras 193.
  • the display screen 194 can display multiple images from the multiple cameras 193, so as to simultaneously show the user multiple image pictures with different ranges, different definitions, or different details of the same scene.
  • the processor 110 may associate the image frames from the different cameras 193 with each other, so that when the captured pictures or videos are played, the display screen 194 can display the associated image frames in the viewfinder at the same time.
  • videos recorded by different cameras 193 at the same time may be stored as different videos, and pictures recorded by different cameras 193 at the same time may be stored as different pictures.
  • multiple cameras 193 may use the same frame rate to capture images respectively, that is, the number of image frames captured by multiple cameras 193 in the same time is the same.
  • the videos from different cameras 193 can be stored as different video files, and the different video files are related to each other.
  • the image frames are stored in the video file according to the sequence of acquiring the image frames, and the different video files include the same number of image frames.
  • the display screen 194 can display the image frames in the associated video files, in the order of the image frames included in them and according to a preset layout mode or a layout mode indicated by the user, so that multiple frames of images occupying the same order in different video files are displayed on the same interface.
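Because the associated video files contain the same number of frames stored in acquisition order, playback can step through them in lockstep, taking the frame with the same order from each file at every step; a minimal sketch (data shapes are assumptions, with each file modeled as a list of frames):

```python
def interleave_for_display(video_files):
    """Yield, for each playback step, the tuple of frames that occupy
    the same order across the associated video files, so they can be
    rendered together on the same interface."""
    for frames in zip(*video_files):
        yield frames
```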
  • multiple cameras 193 may use the same frame rate to capture images respectively, that is, the number of image frames captured by multiple cameras 193 in the same time is the same.
  • the processor 110 can respectively time stamp each frame of image from the different cameras 193, so that when the recorded video is played, the display screen 194 can simultaneously display, on the same interface, multiple frames of images from the multiple cameras 193 that have the same time stamp.
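The time-stamp association can be sketched as grouping frames by their stamp so that frames stamped alike are shown together; the data shapes are assumptions for illustration, with each camera stream modeled as a list of (timestamp, frame) pairs:

```python
def group_by_timestamp(*streams):
    """Group frames from several camera streams by time stamp.
    Returns {timestamp: {camera_index: frame}} so that all frames
    sharing a stamp can be displayed on the same interface."""
    grouped = {}
    for cam_index, stream in enumerate(streams):
        for ts, frame in stream:
            grouped.setdefault(ts, {})[cam_index] = frame
    return grouped
```

In practice the stamps from different sensors are never bit-identical, so a real player would match stamps within a tolerance rather than by exact equality.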
  • the image from the camera displayed in the preview box of the electronic device is obtained by cropping the image collected by the camera.
  • for the way in which the electronic device crops the image collected by the camera, reference may be made to the description of the subsequent embodiments.
  • the electronic device is usually hand-held by the user when shooting, and hand-held shooting usually causes the captured picture to shake.
  • the processor 110 may perform anti-shake processing on the image frames collected by different cameras 193 respectively. Then, the display screen 194 displays the image after the anti-shake processing.
  • the following describes an exemplary user interface for application menus on the electronic device 100.
  • FIG. 2A exemplarily shows an exemplary user interface 21 for an application menu on the electronic device 100.
  • the electronic device 100 may be configured with a plurality of cameras 193, and the plurality of cameras 193 may include a front camera and a rear camera.
  • the front camera may also be multiple, for example, the front camera 193-1 and the front camera 193-2.
  • the front camera 193-1 and the front camera 193-2 may be disposed on the top of the electronic device 100, such as the "bangs" position of the electronic device 100 (ie, the area AA shown in FIG. 2A).
  • the area AA may also include an illuminator 197 (not shown in FIG. 2A).
  • a rear camera 193 and an illuminator 197 may also be configured on the back of the electronic device 100.
  • the rear camera 193 may be multiple, such as a rear wide-angle camera 193-3, a rear ultra-wide-angle camera 193-4, and a rear telephoto camera 193-5.
  • the user interface 21 may include: a status bar 201, a tray 223 with icons of commonly used applications, a calendar indicator 203, a weather indicator 205, a navigation bar 225, and other application icons. Wherein:
  • the status bar 201 may include: one or more signal strength indicators 201-1 of a mobile communication signal (also called a cellular signal), an indicator 201-2 of the operator of the mobile communication signal, a time indicator 201-3, a battery status indicator 201-4, etc.
  • the calendar indicator 203 can be used to indicate the current time, such as date, day of the week, hour and minute information, and so on.
  • the weather indicator 205 can be used to indicate the type of weather, such as cloudy to clear, light rain, etc., and can also be used to indicate information such as temperature.
  • the tray 223 with icons of commonly used application programs can display: a phone icon 223-1, a short message icon 223-2, a contact icon 221-4, and so on.
  • the navigation bar 225 may include system navigation keys such as a return button 225-1, a main interface (Home screen) button 225-3, and a call-out task history button 225-5.
  • when it is detected that the user clicks the return button 225-1, the electronic device 100 may display the previous page of the current page.
  • when it is detected that the user clicks the main interface button 225-3, the electronic device 100 may display the main interface.
  • when it is detected that the user clicks the call-out task history button 225-5, the electronic device 100 may display the task recently opened by the user.
  • the naming of each navigation key can also be other, which is not limited in this application. Not limited to virtual keys, each navigation key in the navigation bar 225 can also be implemented as a physical key.
  • other application icons can be, for example: WeChat icon 211, QQ icon 212, Twitter icon 213, Facebook icon 214, mailbox icon 215, cloud sharing icon 216, memo icon 217, settings icon 218, gallery icon 219, camera icon 220.
  • the user interface 21 may also include a page indicator 221.
  • the icons of other applications may be distributed on multiple pages, and the page indicator 221 may be used to indicate the application in which page the user is currently browsing.
  • the user can swipe the area of other application icons left and right to browse application icons in other pages.
  • the electronic device 100 may display the user interface of the application.
  • the user interface 21 exemplarily shown in FIG. 2A may be a main interface (Home screen).
  • the electronic device 100 may also include a home button.
  • the main screen key can be a physical key or a virtual key (such as key 225-3).
  • the home screen key can be used to receive instructions from the user and return the currently displayed UI to the home interface, so that it is convenient for the user to view the home screen at any time.
  • FIG. 2A only exemplarily shows the user interface on the electronic device 100, and should not constitute a limitation to the embodiment of the present application.
  • the electronic device can detect a touch operation (such as a click operation on the icon 220) acting on the icon 220 of the camera, and in response to this operation, it can display the user interface 31 exemplarily shown in FIG. 3B.
  • the user interface 31 may be a user interface of the default photographing mode of the “camera” application, and may be used by the user to take photographs through the default rear camera.
  • "Camera” is an image capture application on smart phones, tablet computers and other electronic devices. This application does not restrict the name of the application.
  • the user can click the icon 220 to open the user interface 31 of the “camera”.
  • the user can also open the user interface 31 in other applications, for example, the user clicks the shooting control in "WeChat” to open the user interface 31.
  • “WeChat” is a social application that allows users to share photos taken with others.
  • FIG. 3B exemplarily shows a user interface 31 of the "camera” application on an electronic device such as a smart phone.
  • the user interface 31 may include: an area 301, a shooting mode list 302, a control 303, a control 304, and a control 305. Wherein:
  • the area 301 may be referred to as a preview frame 301 or a view frame 301.
  • the preview frame 301 can be used to display the images collected by the camera 193 in real time.
  • the electronic device can refresh the displayed content in it in real time, so that the user can preview the image currently collected by the camera 193.
  • One or more shooting mode options may be displayed in the shooting mode list 302.
  • the one or more shooting mode options may include: a photo mode option 302A, a video mode option 302B, a multi-channel photo mode option 302C, a multi-channel video mode option 302D, and a more option 302E.
  • the one or more shooting mode options can be expressed as text information on the interface, such as "photo", "video", "multi-channel photo", "multi-channel video", and "more". Not limited to this, the one or more shooting mode options may also be represented as icons or other forms of interactive elements (IEs) on the interface.
  • the control 303 can be used to monitor user operations that trigger shooting (photographing or video recording).
  • the electronic device can detect a user operation (such as a click operation on the control 303) acting on the control 303, and in response to the operation, the electronic device 100 can save the image in the preview box 301 as a picture in the "gallery".
  • the control 303 can be changed to the control 901, and the electronic device can detect the user operation (such as a click operation on the control 901) that acts on the control 901.
  • in response to the operation, the electronic device 100 can save the image in the preview box 301 as a video in the "Gallery".
  • the "gallery” is a picture management application on electronic devices such as smart phones, tablet computers, etc., and can also be referred to as "album", and the name of the application is not limited in this embodiment.
  • “Gallery” can support users to perform various operations on pictures stored on electronic devices, such as browsing, editing, deleting, and selecting operations.
  • the electronic device 100 may also display a thumbnail of the saved image in the control 304.
  • the user can click the control 303 or the control 901 to trigger the shooting.
  • the control 303 and the control 901 may be buttons or other forms of controls.
  • the control 303 may be referred to as a photographing control
  • the control 901 may be referred to as a video control.
  • the control 303 and the control 901 may be collectively referred to as a shooting control.
  • the control 305 can be used to monitor the user operation that triggers the flip of the camera.
  • the electronic device 100 can detect a user operation (such as a click operation on the control 305) acting on the control 305, and in response to the operation, the electronic device 100 can flip the camera, for example, switch the rear camera to the front camera. At this time, as shown in FIG. 3C, the image collected by the front camera is displayed in the preview frame 301.
  • the electronic device 100 can detect a user operation acting on the shooting mode option, the user operation can be used to select a shooting mode, and in response to the operation, the electronic device 100 can start the shooting mode selected by the user.
  • the electronic device 100 may further display more other shooting mode options, such as slow motion shooting mode options, etc., which can show the user a richer camera function.
  • more shooting mode options 302E may not be displayed in the shooting mode list 302, and the user can browse other shooting mode options by sliding left/right in the shooting mode list 302.
  • the user interface 31 can show the user a variety of camera functions (modes) provided by the "camera", and the user can choose to turn on the corresponding shooting mode by clicking the shooting mode option.
  • the electronic device 100 may display the user interface exemplarily shown in the figure, which simultaneously displays the image of the front camera and the image of the rear camera.
  • the electronic device 100 may enable the multi-channel video recording mode by default after starting the "camera". It is not limited to this, the electronic device 100 may also enable the multi-channel recording mode in other ways.
  • the electronic device 100 may also enable the multi-channel recording mode according to the user's voice instruction, which is not limited in the embodiment of the present application.
  • the preview frame 301 in the multi-channel video mode simultaneously displays images from multiple cameras.
  • the preview frame 301 includes two preview areas: 301A and 301B.
  • the image from the rear camera is displayed in 301A, and the image from the front camera is displayed in 301B.
  • the ISP transmits the output images to the HAL layer, the HAL layer performs electronic image stabilization (EIS) processing on them, and then the image processing module stitches the two images together.
  • the display screen can display the spliced image.
  • the image processing module may include an image processor, a video codec, a digital signal processor, and so on in the electronic device 100.
  • the display screen can also monitor the zoom event and pass the zoom factor to the ISP and the corresponding camera.
  • the display screen can also monitor the event of switching cameras and pass the event to the corresponding camera.
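The splicing step in the pipeline above can be sketched as follows. This is an illustrative sketch: real devices stitch frames inside the image processing module in hardware or native code; representing a frame as a list of pixel rows and the function name are assumptions made for clarity.

```python
# Sketch (illustrative): stitch two processed frames side by side into one
# frame for display, as the image processing module does after the HAL
# layer finishes EIS processing on each stream.

def stitch_horizontally(left_frame, right_frame):
    """Concatenate two frames of equal height row by row.

    Each frame is a row-major list of pixel rows."""
    if len(left_frame) != len(right_frame):
        raise ValueError("frames must have the same height")
    return [l_row + r_row for l_row, r_row in zip(left_frame, right_frame)]
```

This corresponds to the left/right layout of areas 301A and 301B; a floating or overlapping layout would instead composite one frame over a region of the other.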
  • FIG. 4B shows a schematic diagram of an ISP processing the image output by the camera in a center cropping manner.
  • the rear camera of the electronic device captures the image a.
  • the electronic device crops the image a to obtain the image a1 in the cropped area.
  • the cropped area is centered on the central point O of the image a, and has the same ratio and size as the area 301A, that is, the cropped area is the area where the dashed box in the figure is located.
  • the image a1 is displayed in the area 301A, that is, the image a1 is the preview image in the area 301A.
  • the front camera of the electronic device captures the image b.
  • the electronic device crops the image b to obtain the image b1 in the cropped area.
  • the cropped area is centered on the central point O of the image b, and has the same ratio and size as the area 301B, that is, the cropped area is the area where the dashed box in the figure is located.
  • the image b1 is displayed in the area 301B, that is, the image b1 is the preview image in the area 301B.
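The center-cropping rule of FIG. 4B can be sketched as follows. This is a minimal illustrative sketch: the function name and coordinate convention (left, top, right, bottom) are assumptions, but the rule itself is the one described above, with the cropped area centered on the image's central point O and sized like the target preview area.

```python
# Sketch (illustrative) of center cropping as in FIG. 4B: the cropped area
# is centered on the image center and has the same size as the preview
# area it will be displayed in.

def center_crop_box(img_w, img_h, area_w, area_h):
    """Return the (left, top, right, bottom) crop rectangle of size
    area_w x area_h centered on the image center.

    The preview area is assumed not to exceed the captured image."""
    if area_w > img_w or area_h > img_h:
        raise ValueError("crop area larger than image")
    left = (img_w - area_w) // 2
    top = (img_h - area_h) // 2
    return (left, top, left + area_w, top + area_h)
```

For example, cropping a hypothetical 4000x3000 capture for a 1248x1080 preview area yields a box centered on the capture's midpoint, matching the dashed box in FIG. 4B.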
  • the user can move the electronic device to change the framing of one of the cameras in its corresponding preview area.
  • however, moving the electronic device (that is, changing the posture of the electronic device) will also change the framing of the other cameras in their corresponding preview areas, and this change may be unnecessary or unexpected by the user.
  • that is, when the electronic device is moved to change the framing of one of the cameras in its corresponding preview area, the framing of the other cameras in their corresponding preview areas cannot be guaranteed. In other words, the user cannot take into account the framing of each camera in its corresponding preview area during multi-channel shooting.
  • the electronic device can also enter more shooting modes.
  • FIG. 5 exemplarily shows the user interface displayed when the electronic device 100 records four channels in the "multi-channel recording mode".
  • the preview frame of the user interface can be divided into four areas: area 301A-area 301D.
  • Each area can be used to display images from different cameras.
  • area 301A can be used to display images from the rear wide-angle camera 193-3
  • area 301B can be used to display images from the rear ultra-wide-angle camera 193-4
  • area 301C can be used to display images from the rear telephoto camera 193-5
  • area 301D can be used to display the image from the front camera 193-1.
  • the electronic device 100 may automatically enter the "multi-channel recording mode" by default after starting the "camera". In other embodiments, after starting the "camera", if the electronic device 100 does not enter the "multi-channel recording mode", it may enter the "multi-channel recording mode" in response to a detected user operation. Exemplarily, the electronic device 100 may detect a touch operation (such as a click operation) acting on the multi-channel video mode option 302D in the user interface 31 shown in FIG. 3B, and in response to the operation, enter the "multi-channel video mode". Not limited to this, the electronic device 100 can also enter the "multi-channel recording mode" in other ways. For example, the electronic device 100 can also enter the "multi-channel recording mode" according to a user's voice instruction, which is not limited in the embodiment of the present application.
  • FIG. 6A exemplarily shows the preview interface 41 displayed after the electronic device 100 enters the "multi-channel recording mode".
  • the preview interface 41 includes: a preview box 301, a shooting mode list 302, a control 901, a control 304, and a control 305.
  • the shooting mode list 302, the control 304, and the control 305 can refer to the related description in the user interface 31, which will not be repeated here.
  • in the preview interface 41, the multi-channel video mode option 302D is selected.
  • the control 901 can be used to monitor user operations that trigger recording.
  • after the electronic device 100 enters the "multi-channel recording mode", it can use N (for example, 2) cameras to collect images and display a preview interface on the display screen. Part or all of the image of each of the N cameras is displayed in the preview interface.
  • the preview frame 301 may include N areas, and one area corresponds to one of the N cameras. Different areas are used to display part or all of the images from the corresponding camera.
  • the number of areas included in the preview frame 301, the size/position each area occupies in the preview frame 301, and the camera corresponding to each area can be collectively referred to as the layout mode for multi-channel recording.
  • the areas included in the preview frame 301 do not overlap each other and are jointly stitched into the preview frame 301, that is, the electronic device 100 may display images from N cameras in a stitched manner.
  • the areas included in the preview frame 301 may overlap, that is, the electronic device 100 may display images from N cameras in a floating or overlapping manner.
  • the layout of the multi-channel recording shown in FIG. 6A can be: the preview frame 301 is divided into areas 301A and 301B on the left and right; the area 301A correspondingly displays the image from the rear wide-angle camera 193-3, and the area 301B correspondingly displays the image from the front camera 193-1.
  • exemplarily, the ratio of the size of the display screen of the electronic device (i.e., the screen resolution) is 19.5:9.
  • the size of the area 301A can be 1248*1080 with a ratio of 10.4:9, where 1248 and 1080 are respectively the number of pixels in the length and width of the area 301A.
  • the size of the area 301B can be 1088*1080 with a ratio of approximately 9.1:9, where 1088 and 1080 are respectively the number of pixels in the length and width of the area 301B.
  • the ratio of the total area after the splicing of the area 301A and the area 301B is 19.5:9, which is the same as the ratio of the preview frame 301.
  • the area formed by the splicing of the area 301A and the area 301B covers the display area of the display screen.
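The arithmetic behind this layout can be checked with a short illustrative sketch (the constant names are assumptions; the pixel counts are the ones given above):

```python
# Quick check (illustrative) that the two side-by-side areas described
# above tile the 19.5:9 display: 1248 + 1088 = 2336 pixels long at a
# shared width of 1080 pixels, i.e. a ratio of about 19.5:9.

AREA_301A = (1248, 1080)   # (length, width) in pixels
AREA_301B = (1088, 1080)

total_length = AREA_301A[0] + AREA_301B[0]
ratio = total_length / AREA_301A[1] * 9   # express the ratio as n:9
```

This is why the spliced area covers the display area of the screen without gaps or overlap.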
  • the image in the area 301A is an image of the subject (such as a person, landscape, etc.) facing the photographer, and the image in the area 301B is an image of the photographer himself facing the display screen 194.
  • the layout of multi-channel recording can be: the preview frame 301 includes area 1 and area 2; area 1 occupies all of the preview frame 301, and area 2 is located at the lower right corner of the preview frame 301 and occupies a quarter of the preview frame 301; area 1 correspondingly displays images from the rear ultra-wide-angle camera 193-4, and area 2 correspondingly displays images from the rear wide-angle camera 193-3.
  • the layout of multi-channel video recording can be: the preview frame 301 is divided into 3 areas; one area correspondingly displays the image from the rear telephoto camera 193-5, one area correspondingly displays the image from the rear ultra-wide-angle camera 193-4, and one area correspondingly displays the image from the front camera 193-1.
  • the default number N of cameras used for multi-channel recording and the layout mode used by the electronic device 100 can be preset by the electronic device 100, or set independently by the user, or be the number and layout of cameras used by the user in the last "multi-channel recording mode".
  • the electronic device 100 may also display controls for the user to change the number and layout of cameras in the preview interface.
  • the electronic device 100 may display a setting interface for setting or changing the number and layout of cameras used in the "multi-channel recording mode” in response to a touch operation (for example, a click operation) acting on the control.
  • the user can set or change the number and layout of the cameras used in the "multi-channel recording mode" through this setting interface.
  • the embodiment of the application does not limit the specific implementation of the setting interface.
  • after the electronic device 100 enters the "multi-channel recording mode", it can also change the camera corresponding to a certain area in response to a touch operation (for example, a click operation) on a control for switching cameras in the preview interface.
  • exemplarily, the user can click the control 304 in FIG. 6A to change the camera corresponding to the area 301A from the rear wide-angle camera 193-3 to the rear ultra-wide-angle camera 193-4.
  • each area in the preview interface may include a corresponding control for switching cameras, and the electronic device 100 may change the corresponding control in response to a touch operation on the control for switching cameras in the preview area. The camera in the preview area.
  • the electronic device 100 after the electronic device 100 enters the "multi-channel video recording mode", it can also display the identification of the camera corresponding to the area in each area of the preview interface to prompt the user of the source of the image displayed in each area.
  • the logo of the camera can be implemented as text, icons or other forms.
  • the preview images displayed in each area of the preview frame 301 are part or all of the images of each of the N cameras.
  • the preview images displayed in each area of the preview frame 301 may be obtained by the electronic device 100 after cropping the images collected by the corresponding camera. That is, the preview area displays part of the preview image of the corresponding camera.
  • the cutting method may be, for example, center cutting or other cutting methods, which is not limited in this application.
  • the way the electronic device crops the images collected by different cameras may be different.
  • the way the electronic device crops the images collected by the N cameras may be preset by the electronic device, or set independently by the user, or it may be the cropping method used by the electronic device in the latest "multi-channel video mode".
  • Center cropping means that the electronic device 100 takes the center of the image captured by the camera as the center, and crops a part of the image with the same size as the corresponding area from the image.
  • the image captured by the rear wide-angle camera 193-3 is a
  • the preview image displayed in the area 301A is a part that the electronic device 100 crops out of the image a, centered on the center of the image a and with the same size as the area 301A.
  • the image captured by the front camera 193-1 is b
  • the image displayed in the area 301B is a part that the electronic device 100 crops out of the image b, centered on the center of the image b and with the same size as the area 301B.
  • the electronic device 100 can directly display the image in the area without cropping. That is, the preview area displays the image collected by the camera.
  • the following takes as an example, for description, that the preview image in the area 301A is obtained by the electronic device cropping, from the image collected by the camera corresponding to the area 301A, a cropped area of the first size centered at point O. That is, after the electronic device 100 turns on the "multi-channel recording mode", the cropped area in the image captured by the camera corresponding to the area 301A is centered at point O and has the first size.
  • FIG. 6A-FIG. 6F, FIG. 7A-FIG. 7F, FIG. 8A-FIG. 8B, and FIG. 9A-FIG. 9E exemplarily show examples of adjusting the preview images displayed in each area of the preview interface after the electronic device 100 enters the "multi-channel video mode".
  • the posture of the electronic device 100 has not changed. That is, when the user adjusts the preview image displayed in each area in the preview interface of the electronic device, the electronic device 100 is not moved. In this way, when the user adjusts the framing of one camera in the corresponding preview area, the framing of other cameras in the corresponding preview area can be guaranteed to be unchanged. In other words, the user can take into account the framing of each camera in the corresponding preview area during multi-channel shooting.
  • the posture of the electronic device 100 has not changed, the external environment may change.
  • the camera of the electronic device 100 can collect real-time updated images.
  • the viewfinder of the working camera in the preview area can be adjusted in a non-zoom scene.
  • FIGS. 6A to 6F exemplarily show how the electronic device 100 adjusts the viewfinder of the working camera in the preview area in a non-zoom scene.
  • the electronic device 100 can detect a sliding operation (for example, a horizontal sliding operation to the left) acting on the area 301A.
  • the electronic device 100 may update the cropped area in the image a in response to the sliding operation, thereby refreshing the preview image displayed in the area 301A.
  • the image a is an image captured by the camera corresponding to the area 301A.
  • the center of the cropped area in the updated image a is the O1 point of the image a, and the size is the second size.
  • the second size is equal to the first size.
  • O1 is located at O1' if the area centered on O1' with the second size does not exceed the edge of the image a; if that area exceeds the edge of the image a, O1 is located at the center of the second-size area that abuts the corresponding edge in the image a.
  • O1' is determined by the point O of the image a and the sliding track corresponding to the sliding operation. Specifically, O1' is located in the first direction of O, and the first direction is the opposite direction of the sliding track; the distance between O1' and point O is positively correlated with the length of the sliding track. In some embodiments, the distance between O1' and point O is the same as the length of the sliding track. Here, O is the center of the cropped area before the update, and the first size is the size of the cropped area before the update.
  • FIG. 6C exemplarily shows the updated crop area of the electronic device 100, and the updated crop area is the area where the dashed frame is located.
  • the preview image displayed in the refreshed area 301A is the image a1.
  • the distance between the O1 and O points may be the default distance. In other words, when the user swipes quickly, the electronic device determines O1 according to the default distance.
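The crop-center update described in this passage can be sketched as follows. The sketch is illustrative: the function name and vector representation are assumptions, but it follows the stated rule that the center moves opposite to the sliding track by a distance equal to the track's length, and is clamped so the cropped area never leaves the captured image.

```python
# Sketch (illustrative) of updating the crop center O -> O1 when the user
# slides in a preview area. The center moves in the direction opposite to
# the slide, then the crop box is clamped inside the captured image,
# matching the edge rule for O1 described above.

def pan_crop_center(center, slide_dx, slide_dy, img_w, img_h, crop_w, crop_h):
    """Return the new crop center after a slide of (slide_dx, slide_dy)."""
    # Move opposite to the sliding track, by the track's length.
    cx = center[0] - slide_dx
    cy = center[1] - slide_dy
    # Clamp so the crop box of size crop_w x crop_h stays inside the image.
    cx = max(crop_w / 2, min(cx, img_w - crop_w / 2))
    cy = max(crop_h / 2, min(cy, img_h - crop_h / 2))
    return (cx, cy)
```

A leftward slide (negative slide_dx) thus moves the cropped area to the right within image a, which is why the refreshed preview shows content nearer the right boundary of the capture.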
  • the electronic device 100 can also detect a sliding operation (for example, a horizontal sliding operation to the right) acting on the area 301B.
  • the user interface shown in FIG. 6D is the same as that shown in FIG. 6B; reference may be made to the related description.
  • the electronic device 100 may update the cropped area in the image b in response to the sliding operation, thereby refreshing the preview image displayed in the area 301B.
  • the way the electronic device updates the cropped area in the image b may refer to the way the electronic device updates the cropped area in the image a.
  • FIG. 6E and FIG. 6F exemplarily show the cropped area in the image b after the electronic device 100 updates it; the updated cropped area is the area where the dashed frame is located.
  • the preview image displayed in the refreshed area 301B is the image b1.
  • after the electronic device 100 enters the "multi-channel recording mode", it can support the user in adjusting the framing of each working camera in its corresponding preview area during multi-channel recording, so that the framing of each working camera in its respective preview area does not affect the others, and a change in the framing of one working camera in its preview area will not cause the framing of the other working cameras in their preview areas to change.
  • Such a viewfinder method in multi-channel video recording is more flexible and convenient, and can improve user experience.
  • the electronic device can display a preview interface and part or all of the images collected by each of the N (for example, 2) cameras.
  • the preview interface includes N areas, and part or all of the images collected by each of the N cameras are respectively displayed in the N areas.
  • a part of the image collected by the corresponding camera is displayed in a preview area.
  • all the images captured by the corresponding camera may be displayed in a preview area.
  • the first area (for example, area 301A) may be one of the N areas, and the camera corresponding to the first area may be called the first camera (for example, the camera corresponding to area 301A).
  • the first area may display images obtained by the electronic device by cropping all the images collected by the first camera before the cropping mode is changed according to the user operation.
  • the embodiment of the present application does not limit the manner in which the electronic device crops all the images collected by the first camera.
  • the embodiment of the present application is described by taking, as an example, the electronic device cropping all the images collected by the first camera in a center cropping manner.
  • the electronic device can change the way of cropping all the images collected by the first camera according to the sliding operation, thereby refreshing the preview image displayed in the first area.
  • before and after the refresh, the electronic device crops all the images collected by the first camera in different ways; as a result, the preview images displayed before and after the refresh have different positions within all the images collected by the first camera.
  • the position of the preview image displayed in the first area before the refresh may be, for example, the position of the cropped area before the update in FIG. 6A to FIG. 6F, and the position of the preview image displayed in the first area after the refresh may be, for example, the position of the updated cropped area in FIG. 6A to FIG. 6F.
  • the direction from the center position of the preview image displayed in the first area before refreshing to the center position of the preview image displayed in the first area after refreshing is opposite to the sliding direction of the sliding operation.
  • when the sliding operation is a leftward sliding operation, the preview image displayed in the first area after the refresh is closer to the right boundary of all the images collected by the first camera than the preview image displayed before the refresh.
  • when the sliding operation is a rightward sliding operation, the preview image displayed in the first area after the refresh is closer to the left boundary of all the images collected by the first camera than the preview image displayed before the refresh.
  • before refreshing, the center position of the preview image displayed in the first area coincides with the center position of all the images collected by the first camera.
  • the preview image displayed in the first area is the same size before and after the refresh.
  • the viewfinder of the working camera in the preview area can be adjusted in the zoom scene. Zooming means that the preview image displayed in each area in the preview interface is enlarged or reduced.
  • FIGS. 7A-7F exemplarily show how the electronic device 100 adjusts the viewfinder of the working camera in the preview area in a zoom scene.
  • the user interface 51 shown in FIG. 7A is a preview interface displayed after the electronic device 100 enters the "multi-channel recording mode".
  • the user interface 51 includes a preview box 301, a shooting mode list 302, a control 901, a control 304, and a control 305.
  • the preview frame 301 includes an area 301A and an area 301B. The function of each control in the user interface 51 and the preview image displayed in each area can refer to the related description in the user interface 41 shown in FIG. 6A, which will not be repeated here.
  • the electronic device 100 can detect a two-finger zoom gesture in the area 301A (as shown in the figure in which two fingers slide outward at the same time), and in response to the two-finger zoom gesture, A control 306 for indicating the zoom factor of the corresponding camera is displayed in the area 301A, and the cropped area in the image a is updated, thereby refreshing the preview image displayed in the area 301A.
  • the image a is an image captured by the camera corresponding to the area 301A.
  • the control 306 may be implemented as an icon or text, and the zoom factor of the corresponding camera indicated by the control 306 changes with the change of the two-finger zoom gesture.
  • when the two-finger zoom gesture is a two-finger zoom-in gesture, the greater the amplitude of the gesture, the greater the zoom factor of the corresponding camera.
  • when the two-finger zoom gesture is a two-finger zoom-out gesture, the greater the amplitude of the gesture, the smaller the zoom factor of the corresponding camera.
  • the text "1x" in the control 306 in FIG. 7A indicates that the zoom factor of the camera is 1
  • the text "2x" in the control 306 in FIG. 7B indicates that the zoom factor of the camera is 2.
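The relationship between gesture amplitude and zoom factor described above can be sketched as follows. This is an illustrative assumption, not the patent's actual formula: the function name `zoom_from_pinch` and the linear mapping from the ratio of finger spacings are hypothetical.

```python
def zoom_from_pinch(d_start, d_end, base_zoom=1.0):
    """Map a two-finger gesture to a zoom factor.

    Spreading the fingers apart (d_end > d_start) increases the zoom
    factor; pinching them together decreases it, matching the behavior
    described for the two-finger zoom gesture.  The linear spacing-ratio
    mapping is an assumption for illustration.
    """
    return base_zoom * (d_end / d_start)
```

For example, a gesture whose finger spacing doubles would take a camera from "1x" to "2x" under this assumed mapping.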
  • the following description takes as an example the case where the zoom factor of the camera corresponding to the area 301A is 1, and becomes x1 after the two-finger zoom gesture is received.
  • the center of the updated cropped area in the image a is the point O of the image a, and its size is the third size.
  • the length of the third size is 1/x1 of the length of the first size
  • the width of the third size is 1/x1 of the width of the first size. That is, the third size is 1/x1² of the first size.
  • O is the center of the cropped area before the update
  • the first size is the size of the cropped area before the update.
  • FIG. 7C exemplarily shows the cropped area updated by the electronic device 100; the updated cropped area is the area where the dashed frame is located.
  • the preview image displayed in the refreshed area 301A is the image a2.
  • the electronic device 100 can enlarge the image a2 by interpolating its pixels, thereby enlarging the image a2 to fill the entire area 301A for display.
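The crop-then-enlarge behavior above can be illustrated with a minimal sketch. The function names and the 2-D-list pixel representation are assumptions for illustration; a real implementation would operate on camera buffers and use higher-quality interpolation.

```python
def zoom_crop_rect(image_size, zoom):
    """Return (x, y, w, h) of a crop centered at the image center whose
    sides are 1/zoom of the full image, i.e. the "third size" described
    in the text (area is 1/zoom**2 of the original)."""
    iw, ih = image_size
    w, h = iw / zoom, ih / zoom
    return ((iw - w) / 2, (ih - h) / 2, w, h)

def nearest_neighbor_upscale(pixels, factor):
    """Enlarge a 2-D pixel list by an integer factor -- a crude stand-in
    for the "interpolation" used to fill the display area."""
    return [[px for px in row for _ in range(factor)]
            for row in pixels for _ in range(factor)]
```

For a 1920x1080 capture at 2x zoom, the sketch yields a 960x540 centered crop, which is then upscaled to fill the preview area.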
  • when the two-finger zoom gesture is a zoom-in gesture and the amplitude of the gesture exceeds the first preset value, the camera corresponding to the area 301A can be automatically switched to a camera with a larger focal length, such as from a wide-angle camera to a telephoto camera.
  • the two-finger zoom-out gesture is a two-finger zoom-out gesture
  • the amplitude of the two-finger zoom-out gesture exceeds the second preset value
  • the camera corresponding to area 301A can be automatically switched to a camera with a smaller focal length, such as a wide-angle camera Switch to an ultra-wide-angle camera.
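The threshold-based camera switching just described might look like the following sketch. The camera list, ordering, and preset threshold values are hypothetical placeholders (the patent does not fix them):

```python
# Hypothetical camera set, ordered by increasing focal length.
CAMERAS = ["ultra_wide", "wide", "telephoto"]

def switch_camera(current, amplitude, zoom_in,
                  first_preset=1.0, second_preset=1.0):
    """Pick the working camera after a two-finger zoom gesture.

    A zoom-in gesture whose amplitude exceeds the first preset value
    moves to the next longer-focal-length camera; a zoom-out gesture
    whose amplitude exceeds the second preset value moves to the next
    shorter one.  Otherwise the current camera is kept.
    """
    i = CAMERAS.index(current)
    if zoom_in and amplitude > first_preset:
        i = min(i + 1, len(CAMERAS) - 1)
    elif not zoom_in and amplitude > second_preset:
        i = max(i - 1, 0)
    return CAMERAS[i]
```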
  • FIGS. 7D-7F exemplarily show how the electronic device 100 further adjusts the viewfinder of the working camera in the preview area through a sliding operation in a zoom scene.
  • the user interface 51 shown in FIG. 7D is the same as the user interface 51 shown in FIG. 7B, and reference may be made to related descriptions.
  • the electronic device 100 can detect a sliding operation (for example, a horizontal sliding operation to the left) acting on the area 301A.
  • the embodiment of the present application does not limit the direction and track of the sliding operation.
  • the electronic device 100 may update the cropped area in the image a again in response to the sliding operation, thereby refreshing the preview image displayed in the area 301A.
  • the image a is an image captured by the camera corresponding to the area 301A.
  • the manner in which the electronic device updates the cropped area of the image a again in response to this sliding operation is the same as the manner, shown in the foregoing figures, in which the electronic device updates the cropped area of the image a in response to a sliding operation.
  • the center of the again-updated cropped area in the image a is O2, and its size is the third size.
  • O2 is located at O2' if the third-size area centered on O2' does not exceed the edge of the image a; otherwise, O2 is located at the center of the third-size area in the image a that abuts that edge.
  • O2' is determined by the O point of the image a and the sliding track corresponding to the sliding operation. Specifically, O2' is located in the first direction of O, and the first direction is the opposite direction of the sliding track; the distance between O2' and point O is positively correlated with the length of the sliding track. In some embodiments, the distance between O2' and O point is the same as the length of the sliding track.
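The computation of O2' and its clamping to O2 can be sketched as follows. The function name is an assumption; the sketch takes the slide distance as equal to the track length, one of the options the text allows.

```python
def pan_crop_center(image_size, crop_size, old_center, slide_vec):
    """Move the crop center opposite to the sliding track, then clamp it
    so the crop region never leaves the image.

    The unclamped point corresponds to O2' in the description (opposite
    the slide direction, at a distance equal to the track length); the
    clamped result corresponds to O2.
    """
    iw, ih = image_size
    cw, ch = crop_size
    ox, oy = old_center
    dx, dy = slide_vec
    # O2' lies in the direction opposite the sliding track.
    nx, ny = ox - dx, oy - dy
    # Clamp so a crop of size (cw, ch) stays within the image edges.
    nx = min(max(nx, cw / 2), iw - cw / 2)
    ny = min(max(ny, ch / 2), ih - ch / 2)
    return nx, ny
```

A leftward slide therefore moves the crop (and hence the viewfinder) to the right, and an over-long slide simply stops the crop at the image edge.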
  • FIG. 7F exemplarily shows the cropped area updated again by the electronic device 100; the again-updated cropped area is the area where the dashed frame is located.
  • the preview image displayed in the refreshed area 301A is the image a3.
  • the electronic device 100 may automatically switch the camera corresponding to the area 301A.
  • the electronic device 100 can switch the camera corresponding to the area 301A to a camera with a larger field of view.
  • the electronic device 100 may switch it to a wide-angle camera. This can fully meet the user's need to adjust the framing of each area within a larger field of view.
  • after the electronic device 100 enters the "multi-channel recording mode", it can allow the user to adjust the framing of each working camera in its corresponding preview area during multi-channel recording.
  • the framing of the working cameras in their respective preview areas does not affect each other; changing the framing of one working camera in its preview area does not change the framing of the other working cameras in theirs.
  • Such a viewfinder method in multi-channel video recording is more flexible and convenient, and can improve user experience.
  • after the electronic device 100 adjusts the preview image of each area in the preview interface in a zoom scene, it can zoom again to increase or decrease the zoom factor of the camera corresponding to a certain area.
  • the following description takes as an example the case where, after adjusting the viewfinder of the working camera in the preview area in the zoom scene shown in FIGS. 7A-7F, the electronic device 100 zooms again for the camera corresponding to the area 301A.
  • after adjusting the preview image of each area in the preview interface in the zoom scene, the electronic device 100 can detect a two-finger zoom gesture (for example, a gesture of two fingers sliding outward at the same time) acting on the area 301A, and in response to this gesture, update the cropped area in the image a, thereby refreshing the preview image displayed in the area 301A.
  • the image a is an image captured by the camera corresponding to the area 301A.
  • the manner in which the electronic device 100 updates the crop area in the image a in response to the two-finger zoom gesture is the same as the manner in which the electronic device shown in FIGS. 7A-7C updates the crop area in the image a in response to the two-finger zoom gesture. Refer to the related description.
  • FIG. 8A shows a possible cropped area in the updated image a.
  • the preview image displayed in the refreshed area 301A is an image a4.
  • FIG. 8B shows another possible cropped area in the updated image a.
  • the preview image displayed in the refreshed area 301A is an image a4.
  • the electronic device may also detect an operation for changing the zoom factor of the camera corresponding to the first area (for example, the two-finger zoom operation in FIGS. 7A-7F) before detecting the sliding operation. After that, the electronic device may, in response to that operation, enlarge the preview image that was displayed in the first area before the operation was received, and display the enlarged preview image in the first area. It should be noted that after the zoom factor of the camera corresponding to the first area is changed, only a part of the enlarged image is displayed in the first area. For details, refer to the related descriptions of FIGS. 7A-7F.
  • after the electronic device 100 enters the "multi-channel video mode", it can track a target object and autonomously adjust the viewfinder of the working camera in the preview area according to the location of the target object. This can reduce user operations and improve convenience.
  • Figures 9A-9E exemplarily show how the electronic device tracks the target object and autonomously adjusts the viewfinder of the working camera in the preview area.
  • the user interface 71 shown in FIG. 9A is a preview interface displayed after the electronic device 100 enters the "multi-channel recording mode".
  • the user interface 71 includes a preview box 301, a shooting mode list 302, a control 901, a control 304, and a control 305.
  • the preview frame 301 includes an area 301A and an area 301B. The function of each control in the user interface 71 and the preview image displayed in each area can refer to the related description in the user interface 41 shown in FIG. 6A, which will not be repeated here.
  • after the electronic device 100 enters the "multi-channel video mode", it can automatically identify objects in the preview image displayed in each area of the preview frame 301, and prompt the user when a preset type of object is detected.
  • the preset types of objects may include: human faces, animals, human bodies, the sun, the moon, and so on.
  • the preset type of objects may be set by the electronic device 100 by default, or may be independently selected by the user.
  • after the electronic device 100 enters the "multi-channel recording mode", it can, in response to a received user operation, start to recognize the objects in the preview image displayed in each area of the preview frame 301, and prompt the user when a preset type of object is detected.
  • the user operation may be a long-press operation or a double-click operation on the area, an input voice command, etc., which is not limited in the embodiment of the present application.
  • the electronic device 100 may detect that a human face is displayed in the area 301B, and display prompt information 307 in the area 301B.
  • the prompt information 307 is used to prompt the user to detect a face, and the prompt information 307 may be the text "face detected".
  • the electronic device 100 may detect a touch operation (such as a click operation) acting on an object in the area 301B (such as the human face in FIG. 9B); the touch operation is used to select the object as the target object to be tracked.
  • the electronic device 100 may display prompt information in the area where the target object is displayed in the area 301B, such as the dashed box shown in the figure, to indicate that this object is set as the target object to be tracked.
  • the electronic device 100 may also directly select a preset type of object as the target object to be tracked without user operation.
  • after the electronic device 100 selects the target object to be tracked, it updates the cropped area in the image b with the target object as the center, thereby refreshing the preview image displayed in the area 301B.
  • the image b is an image captured by the camera corresponding to the area 301B.
  • the prompt information 307 in the area 301B can be used to prompt the user that the target object is currently being tracked.
  • the prompt message 307 can be changed to the text "face tracking".
  • the center of the updated cropped area in the image b is the point O4 in the image b, and its size is the fourth size.
  • the fourth size is the size of the cropped area of the image b before the update.
  • O4 is located at O4' if the fourth-size area centered on O4' does not exceed the edge of the image b; otherwise, O4 is located at the center of the fourth-size area in the image b that abuts that edge. O4' is the center of the target object in the image b.
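The face-tracking crop described above centers on the target and clamps to the image edges, which can be sketched as follows (the function name is an assumption for illustration):

```python
def track_crop_center(image_size, crop_size, target_center):
    """Center the crop on the tracked object (O4' in the description),
    clamping to the image edges so that a crop of the fourth size stays
    inside the image; the clamped result corresponds to O4."""
    iw, ih = image_size
    cw, ch = crop_size
    tx, ty = target_center
    x = min(max(tx, cw / 2), iw - cw / 2)
    y = min(max(ty, ch / 2), ih - ch / 2)
    return x, y
```

When the face is well inside the frame, the crop follows it exactly; near an edge, the crop stops at the edge, so the face is displayed off-center rather than the crop leaving the image.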
  • FIGS. 9D and 9E exemplarily show the cropped area updated by the electronic device 100; the updated cropped area is the area where the dashed frame is located.
  • the preview image displayed in the refreshed area 301B is the image in the dashed frame.
  • the target face in the image collected by the front camera 193-1 has changed position, but the electronic device 100 still displays the target face in the center of the area 301B.
  • the electronic device 100 may stop tracking the target person.
  • the electronic device 100 may prompt the user that tracking of the target person has currently stopped.
  • the prompt manner may include, but is not limited to: displaying text, displaying icons, playing voice, and so on.
  • the electronic device 100 can track the target object in the multi-channel video recording process to meet user needs and improve user experience.
  • the electronic device may display a preview image in a first area (for example, the area 301B in FIGS. 9A-9C).
  • the electronic device refreshes the preview image in the first area.
  • the preview image displayed in the first area before refresh is obtained by cropping all the images collected by the first camera, and includes the image of the first human face.
  • the preview image displayed in the refreshed first area is obtained by cropping all the images collected by the first camera, and includes the image of the first human face.
  • the preview image displayed in the first area before refresh may refer to the preview image displayed in area 301B in FIG. 9A or FIG. 9B
  • the preview image displayed in the first area after refresh may refer to the preview image displayed in the area 301B in FIG. 9C.
  • the electronic device may crop the preview image displayed in the refreshed first area in such a way that the position of the image of the first face in the refreshed first area is the same as its position in the first area before the refresh. This ensures that the position of the face in the first area is fixed.
  • alternatively, the electronic device may crop the preview image displayed in the refreshed first area with the position of the image of the first face in the entire image collected by the first camera as the center, for example, as shown in FIG. 9A. In this way, the face can be kept displayed in the center of the first area during face tracking.
  • when the electronic device detects that the image collected by the first camera includes the image of the first face, the second camera may be activated; the viewing range of the second camera is larger than that of the first camera, and the first face is within the viewing range of the second camera.
  • the electronic device may refresh the preview image displayed in the first area.
  • the refreshed preview image displayed in the first area is obtained by cropping the image collected by the second camera, and includes the image of the first face. In this way, when tracking an object, the electronic device can switch cameras to expand the trackable range.
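One way to decide when to fall back to the wider second camera, as described above, is to check whether the tracked face still fits inside the narrow camera's viewing range. The function name, the axis-aligned box representation, and the camera labels below are illustrative assumptions:

```python
def choose_tracking_camera(face_box, narrow_fov_box,
                           cameras=("first_camera", "second_camera")):
    """Return the first (narrow) camera while the tracked face's bounding
    box lies fully inside its viewing range; otherwise switch to the
    second camera, whose viewing range is larger.

    Boxes are (x0, y0, x1, y1) in a shared coordinate frame -- an
    assumption for illustration.
    """
    fx0, fy0, fx1, fy1 = face_box
    nx0, ny0, nx1, ny1 = narrow_fov_box
    inside = nx0 <= fx0 and ny0 <= fy0 and fx1 <= nx1 and fy1 <= ny1
    return cameras[0] if inside else cameras[1]
```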
  • the first camera may be a front camera or a rear camera. In this way, object tracking can be achieved with either the front camera or the rear camera.
  • the user can also be prompted, through a picture-in-picture window, of the position of the preview image displayed in the preview area within the entire image captured by the working camera. This allows the user to understand the overall situation.
  • FIGS. 10A-10B exemplarily show a scene in which the electronic device 100 prompts the user, by means of picture-in-picture, of the position of the preview image displayed in each preview area within the entire image captured by the corresponding working camera.
  • the user interface 81 shown in FIGS. 10A and 10B is a preview interface displayed after the electronic device 100 enters the "multi-channel recording mode".
  • for each control in the preview interface, reference may be made to the related description in the user interface 41 shown in FIG. 6A, which will not be repeated here.
  • a window 308 may be displayed in the user interface 81.
  • the window 308 can be displayed floating above the image displayed in the area 301A.
  • the window 308 may be used to prompt the user the position of the preview image displayed in the current area 301A in all the images captured by the corresponding rear wide-angle camera 193-3.
  • the window 308 can display the image captured by the rear wide-angle camera 193-3 corresponding to the area 301A, and identify with a dashed frame the position, within that image, of the preview image displayed in the area 301A.
  • the inner and outer parts of the dashed frame can use different display formats.
  • the part outside the dashed frame can be shaded, so as to further distinguish, in the image a captured by the rear wide-angle camera 193-3, the cropped-out part from the retained part.
  • the part displayed in the area 301A is the part within the dashed frame.
  • the user can understand the viewing range of each camera in the corresponding area through the window displayed on each area, and the global position of the preview image currently displayed in each area. This makes it easier for users to adjust the preview images displayed in each area.
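Drawing the dashed frame requires mapping the crop rectangle from full-image coordinates into the small floating window, which is a simple scaling. The function name below is an assumption for illustration:

```python
def dashed_frame_rect(crop_rect, image_size, window_size):
    """Scale a crop rectangle (x, y, w, h) from full-image coordinates
    into the picture-in-picture window's coordinates, giving the dashed
    frame to draw over the thumbnail."""
    x, y, w, h = crop_rect
    iw, ih = image_size
    ww, wh = window_size
    sx, sy = ww / iw, wh / ih
    return (x * sx, y * sy, w * sx, h * sy)
```

As the user zooms or slides, re-running this mapping moves or shrinks the dashed frame inside the window, matching the behavior described for FIGS. 10A-10B.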
  • picture-in-picture prompting mode exemplarily shown in FIGS. 10A-10B is applicable to any of the scenarios in which the electronic device 100 mentioned in the above embodiment adjusts the preview image displayed in each area of the preview interface.
  • when the electronic device 100 adjusts the preview image displayed in each area of the preview interface in the non-zoom scene shown in FIGS. 6A-6F, it can use the picture-in-picture method to prompt the user of the position of the preview image displayed in the area 301A within the entire image captured by the camera.
  • the position of the dashed frame in the window 308 changes with the sliding operation input by the user.
  • the moving direction of the dashed frame is opposite to the track direction of the sliding operation.
  • when the electronic device 100 adjusts the preview image displayed in each area of the preview interface in the zoom scene shown in FIGS. 7A-7F, it can use the picture-in-picture method to prompt the user of the position of the preview image displayed in the area 301A within the entire image collected by the camera.
  • the size of the dashed frame is inversely proportional to the zoom factor. The larger the zoom factor, the smaller the dashed frame.
  • the moving direction of the dashed frame is opposite to the trajectory direction of the sliding operation input by the user.
  • when the electronic device autonomously adjusts the preview image of each area in the preview interface according to the location of the target object as shown in FIGS. 9A-9E, it can use the picture-in-picture method to prompt the user of the position of the preview image displayed in the area 301A within the entire image collected by the camera. In this case, the position of the dashed frame changes as the position of the target object in the image captured by the camera changes.
  • in the above embodiments, the user can adjust the framing of each working camera through user operations without moving the electronic device 100, that is, while the posture of the electronic device 100 remains unchanged, and the adjustment of the framing of a single working camera does not affect the framing of the other working cameras.
  • when the electronic device 100 detects an operation for adjusting the preview image displayed in each area of the preview interface (such as the sliding operation, the two-finger zoom operation, or the operation of selecting the target object mentioned in the above embodiments), if the posture of the electronic device changes, the electronic device may refrain from changing the way it crops the entire image collected by the camera corresponding to each area in response to the user operation. That is, if the posture of the electronic device changes, the electronic device does not respond to the user operation for adjusting the viewfinder of each camera in the preview area.
  • the electronic device can use a center cropping method to obtain the preview images displayed in each area of the preview interface.
  • the following describes a UI example of adjusting the preview image displayed in each area of the shooting interface during the recording process of the multi-channel video recording after the electronic device 100 turns on the "multi-channel video mode".
  • 11A-11F exemplarily show a UI embodiment in which the electronic device 100 adjusts the preview image displayed in each area in the shooting interface during the multi-channel video recording process.
  • FIG. 11A exemplarily shows the shooting interface 101 displayed when the electronic device enters the video recording process after the "multi-channel video recording mode" is turned on.
  • the shooting interface 101 includes a preview box 301, a shooting mode list 302, a control 901, a control 304, and a control 305.
  • the shooting mode list 302, the control 304, and the control 305 can refer to the related description in the user interface 31, which will not be repeated here.
  • the multi-channel recording mode option 302D is selected.
  • the photographing interface 101 may be displayed by the electronic device 100 in response to a touch operation (for example, a click operation) received on a control for recording.
  • the control used for recording may be, for example, the control 901 displayed in any user interface of FIGS. 11A to 11F.
  • the controls used for video recording may also be referred to as shooting controls.
  • the shooting interface 101 further includes: a recording time indicator 1001.
  • the recording time indicator 1001 is used to indicate, to the user, the length of time for which the shooting interface 101 has been displayed, that is, the length of time for which the electronic device 100 has been recording the video.
  • the recording time indicator 1001 may be implemented as text.
  • the preview frame 301 in the shooting interface is the same as the preview frame 301 in the preview interface.
  • the layout of the preview frame 301 please refer to the related description of the embodiment in FIG. 6A, which will not be repeated here.
  • the preview image displayed in each area of the shooting interface can also be adjusted according to user operations.
  • the way the electronic device 100 adjusts the preview image displayed in each area in the shooting interface during the video recording process is the same as the way the electronic device 100 adjusts the preview image displayed in each area in the preview interface during the preview process.
  • reference may be made to the embodiments shown in FIGS. 6A-6F, FIGS. 7A-7F, FIGS. 8A-8B, and FIGS. 9A-9E.
  • after the electronic device 100 enters the "multi-channel recording mode", it may also save the images displayed in the preview interface during the recording process.
  • the user can select to save the preview image in the preview box during the video recording process after adjusting the preview image of each area in the preview box, that is, save video.
  • for the manner in which the user adjusts the preview image of each area in the preview box, refer to the related content described in the foregoing embodiments of FIGS. 11A-11F.
  • the electronic device 100 may, in response to a touch operation (such as a click operation) detected on the control for recording, save the preview images displayed in the preview box during the recording process.
  • the control used for recording may be, for example, the control 901 displayed in any user interface of FIGS. 11A to 11F.
  • the start and end times of the recording process are respectively the time points of two adjacently detected touch operations on the control 901 after the electronic device 100 turns on the "multi-channel recording mode".
  • the electronic device 100 may synthesize the images displayed in each area of the preview box during the recording process into a video file, and save the video file. For example, the electronic device 100 may combine, according to the corresponding layout, the image displayed in the area 301A and the preview image displayed in the area 301B during the recording process into one video file, and save the video file. This allows users to adjust the preview images of each area according to their own needs and save the preview images they want, giving users a more flexible and convenient video recording experience.
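The layout-based synthesis of the two areas into one frame can be illustrated with a minimal sketch. The 2-D-list image representation, the function name, and the side-by-side layout are simplifying assumptions; a real implementation would composite GPU buffers per the current layout style and encode the result.

```python
def compose_frame(left, right):
    """Stitch two equally sized preview images (2-D pixel lists) into one
    side-by-side frame, matching a left/right two-area layout."""
    assert len(left) == len(right), "areas must have the same height"
    return [lrow + rrow for lrow, rrow in zip(left, right)]
```

Repeating this per captured frame and feeding the composed frames to a video encoder yields the single saved video file described above.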
  • the electronic device 100 may also separately save the images displayed in each area during the recording process, and associate the saved multiple images.
  • the user can change the layout of the preview interface during the recording process. If the electronic device 100 changes the layout of the preview interface during the recording process, the layout of the video files saved by the electronic device 100 may be different in different time periods. This can provide users with a more flexible recording experience.
  • after the electronic device 100 stores the preview images displayed in the preview box as a video file, the user can view the video file saved by the electronic device 100 in the user interface provided by the "gallery".
  • the embodiment of the present application also provides a solution in which, when the electronic device is moved, that is, when the posture of the electronic device changes, the viewing range of a selected preview area is not changed. In this way, even when the posture of the electronic device changes, the viewing range of the selected preview area is not affected.
  • one or more areas in the preview frame can be locked during the preview process or the recording process. After that, even if the physical position of the electronic device 100 changes, for example, the electronic device 100 shifts, the relative position of the static object in the image displayed in the locked area does not change in the area. In this way, when the user moves the electronic device 100 to change the image displayed in other areas, it is ensured that the locked area always displays an image of a certain physical location in the real world, that is, the viewing range of the locked area is not changed.
  • 12A-12B exemplarily show an exemplary UI interface in which the electronic device 100 does not change the viewing range of the selected preview area when the posture of the electronic device 100 is changed.
  • FIG. 12A exemplarily shows the preview interface 111 displayed in response to the operation for locking the area 301B after the electronic device 100 enters the "multi-channel recording mode".
  • the preview interface 111 includes a preview box 301, a shooting mode list 302, a control 303, a control 304, a control 305, and a lock indicator 1101.
  • the preview frame 301 includes an area 301A and an area 301B.
  • the function of each control in the preview interface 111 and the preview image displayed in each area can refer to the related description in the user interface 41 shown in FIG. 6A, which will not be repeated here.
  • the lock indicator 1101 is located in the area 301B and is used to indicate that the area 301B is locked.
  • the lock indicator 1101 may be implemented as text, icon, or other forms.
  • the operation for locking the area 301B may include, but is not limited to: a long press operation on the area 301B, a double-click operation, a touch operation on a specific control (not shown in FIG. 12A), an operation of shaking the electronic device 100, and so on.
  • the user can hold the electronic device 100 and move it horizontally to the left. After the electronic device 100 is moved horizontally, the images collected by the rear wide-angle camera 193-3 corresponding to the area 301A and the front camera 193-1 corresponding to the area 301B are all refreshed or updated.
  • the electronic device 100 crops all the images collected by the rear wide-angle camera 193-3 according to the cropping method corresponding to the current area 301A, and displays them in the area 301A.
  • the cropping method may be center cropping or other cropping methods determined according to user operations.
  • in response to being moved horizontally to the left, the electronic device 100 keeps unchanged the display mode, in the locked area 301B, of the static objects in the image displayed when the area 301B was locked.
  • the display mode of the static object includes: the size of the static object and the relative position of the static object in the area 301B.
  • the electronic device 100 ensures that the locked area displays an image of the same physical location in the real world, and the physical location corresponds to the preview image displayed in the area when the area is locked.
  • the area 301B still displays an image of the same physical location.
  • the display mode of the clouds, buildings, and roads in the figure remains unchanged, while the position where the person in the figure stands changes.
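One way to realize such a locked area is to shift the locked area's crop by the measured displacement of the scene within the sensor image, so the same physical location stays in view. The function name is an assumption, and how the scene displacement is estimated (e.g. gyroscope data or image registration) is outside this sketch:

```python
def locked_crop_origin(origin, scene_shift_px):
    """Compensate device motion for a locked preview area.

    `scene_shift_px` is the pixel displacement of static scene content
    between consecutive sensor frames (estimation assumed elsewhere).
    Moving the crop by the same displacement keeps static objects at a
    fixed position inside the locked area.
    """
    ox, oy = origin
    dx, dy = scene_shift_px
    return ox + dx, oy + dy
```

Under this sketch, the unlocked area keeps its usual (e.g. center) crop and therefore shows new content as the device moves, while the locked area's crop follows the scene.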
  • in this extended embodiment, from the user's perspective, one or more areas can be kept locked while the electronic device 100 is being moved; that is, it can be guaranteed that the preview image displayed in the locked area or areas always corresponds to the same physical location. This can help the user take into account the multiple channels of images during multi-channel shooting.
  • the extended embodiment can be applied to the preview process in the "multi-channel photographing mode", the preview process in the "multi-channel recording mode", and the recording process in the "multi-channel recording mode" mentioned in the embodiments of this application.
  • that is, the electronic device 100 can lock one or more of the areas in the preview box during the preview process in the "multi-channel photographing mode", the preview process in the "multi-channel recording mode", or the recording process in the "multi-channel recording mode". The specific implementation of locking the one or more areas can be obtained by combining the preview process and the video recording process described in the foregoing embodiments, and will not be repeated here.
  • the electronic device may also adjust the preview image displayed in each area in the preview interface after entering the "multi-channel photography mode". That is, the electronic device can also adjust the viewfinder mode of the working camera in the preview area in the "multi-channel photography mode".
  • the electronic device can adjust the viewfinder mode of the working camera in the preview area.
  • the electronic device in the "multichannel video mode” adjust the viewfinder mode of the working camera in the preview area, refer to The related description in the previous section will not be repeated for the time being.
  • Figures 13A-13B exemplarily show a scene in which an electronic device enters the "multi-channel photographing mode" and adjusts the preview image displayed in each area of the preview interface in response to a sliding operation.
  • data can be collected through N cameras.
  • Each of the N cameras outputs a frame according to the default output ratio, and transmits the collected raw data to the corresponding ISP.
  • the default output ratio of the camera can be, for example, 4:3, 16:9, 3:2, and so on.
  • ISP is used to convert the data from the camera into an image in a standard format, such as YUV.
  • the display screen can monitor user operations used to adjust the preview image of each area in the display screen, and report the monitored user operations to the camera or the HAL layer.
  • the user operation may include, but is not limited to, the sliding operation, the two-finger zoom operation, and the touch operation acting on the target object detected on each area in the preview frame after the electronic device 100 enters the "multi-channel shooting mode" mentioned in the above UI embodiments.
  • the display screen can monitor zoom events and pass the zoom factor to the HAL layer and the corresponding camera.
  • the display screen can also monitor the event of switching cameras and pass the event to the corresponding camera.
  • the display screen can monitor the sliding operation and pass the trajectory of the sliding operation to the HAL layer.
  • the HAL layer is used to crop the image output by the ISP according to user operations.
  • the HAL layer crops the image output by the ISP in a center cropping manner.
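As an illustration of the center-cropping step described above, the following sketch (a hypothetical helper, not part of the patent; coordinates are in sensor pixels) computes a crop box whose center coincides with the center of the full image output by the ISP:

```python
def center_crop(image_w, image_h, region_w, region_h):
    """Return the (left, top, right, bottom) crop box centered in the image.

    The crop has the same size as the preview region, so the center of the
    cropped preview coincides with the center of the full sensor image.
    """
    left = (image_w - region_w) // 2
    top = (image_h - region_h) // 2
    return (left, top, left + region_w, top + region_h)
```

For example, a 1080x1920 preview region cut from a 4000x3000 sensor image yields the box (1460, 540, 2540, 2460).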
  • the HAL layer crops the image output by the ISP according to the user operation.
  • the user operations for adjusting the preview image of each area in the display screen may include, but are not limited to, those mentioned in the above UI embodiments: the sliding operation and the two-finger zoom operation that the electronic device 100 detects on each area in the preview frame after entering the "multi-channel shooting mode", the touch operation acting on a target object, and so on.
  • for how the HAL layer crops the image output by the ISP according to the user operation, please refer to the relevant description in the above UI embodiments.
  • the HAL layer can notify the ISP of how it cropped the image output by the ISP, and the ISP then performs auto exposure, auto white balance, and auto focus (3A) processing on the cropped image, and can also optimize its noise, brightness, and skin color.
  • the image that has been cropped by the HAL layer and processed by ISP's 3A and optimization is passed to the image processing module, which is used to perform electronic image stabilization processing on the received image.
  • the HAL layer can also notify the image processing module of its own cropping of the image output by the ISP, so that the image processing module performs anti-shake processing on the received image according to the cropping method.
  • the image processing module can process each of the received images, and then N channels of images can be obtained.
  • the image processing module can stitch or superimpose the obtained N channels of images into one image according to the current layout style, and output them to the preview interface of the display screen.
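The splicing step performed by the image processing module can be sketched as follows for a two-region, side-by-side layout style (a simplified model in which images are nested lists of pixel values; a real implementation would operate on GPU or codec buffers):

```python
def stitch_side_by_side(left, right):
    """Splice two equally sized channel images into one composite image.

    Each image is a list of rows; rows are concatenated horizontally,
    matching a simple left/right layout style in the preview interface.
    """
    assert len(left) == len(right), "channel images must have equal height"
    return [lrow + rrow for lrow, rrow in zip(left, right)]
```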
  • the preview interface can display the N channels of images in N areas according to the current layout style.
  • the image processing module may include an image processor, a video codec, a digital signal processor, etc. in the electronic device 100.
  • if the electronic device 100 enters the "multi-channel photography mode" and the display screen detects a touch operation on the shooting control, the electronic device will save the image that the image processing module outputs to the preview interface of the display screen when the touch operation is detected.
  • if the electronic device 100 turns on the "multi-channel recording mode" and the display screen detects two touch operations on the shooting control, the electronic device will save the images that the image processing module outputs to the preview interface of the display screen between the two touch operations.
  • the area 301A or the area 301B in the preview frame 301 may be referred to as the first area.
  • the camera corresponding to the area 301A or the area 301B, for example, the front camera or the rear camera may be referred to as the first camera.
  • the sliding operation received in the area 301A or the area 301B in the preview interface may be referred to as a first user operation.
  • the sliding operation in FIG. 6A, FIG. 6D, and FIG. 7D may be referred to as a first user operation.
  • the operation of instructing to start recording a video detected by the electronic device in the shooting interface may be referred to as a second user operation.
  • the second user operation may be, for example, an operation acting on the shooting control 901, for example, an operation acting on the shooting control 901 in FIG. 11A.
  • the preview image displayed in the area 301A or the area 301B may be referred to as the first preview image, such as the image displayed in the area 301A or the area 301B in the embodiment of FIGS. 6A-6F, and the image (image a2) displayed in the area 301A in FIG. 7D.
  • the preview image displayed in the area 301A or the area 301B may be referred to as the second preview image, for example, the image displayed in the area 301A or the area 301B in the embodiment of FIGS. 6A-6F (image a1 Or b1), and the image displayed in the area 301A in FIG. 7E.
  • the two-finger zoom-in operation received in the area 301A or the area 301B in the preview interface may be referred to as a third user operation.
  • the third user operation may be, for example, the two-finger zoom-in operation shown in FIG. 7A.
  • the sliding operation received in the area 301A or the area 301B in the shooting interface may be referred to as a fourth user operation.
  • the fourth user operation may be, for example, the sliding operation in FIG. 11A.
  • the preview image displayed in the area 301A or the area 301B in the shooting interface may be referred to as a third preview image, such as the image displayed in the area 301A in FIG. 11A.
  • the image displayed by the electronic device in the first area may be referred to as a fourth preview image.
  • the image displayed in the first area may be referred to as a fifth preview image, such as the image shown in area 301B in FIG. 9A.
  • the image displayed by the electronic device in the first area may be called the sixth preview image, such as the image shown in FIG. 9B or FIG. 9C.
  • the switched camera can be referred to as the second camera.
  • the image from the second camera displayed in the first area can be referred to as the seventh preview image.
  • the image displayed by the electronic device in the first area may be referred to as an eighth preview image.
  • the operation that the electronic device detects in the shooting interface and instructs to stop recording the video may be referred to as the fifth user operation.
  • the fifth user operation may be, for example, an operation acting on the shooting control 901, such as an operation acting on the shooting control 901 in FIGS. 11B to 11F.
  • the operation detected by the electronic device for playing the video file may be referred to as the sixth user operation.
  • the interface used to play the video file in the electronic device may be called a play interface.
  • the operation for locking the area may be referred to as a seventh user operation.
  • the seventh user operation may be, for example, a long-press operation or a double-click operation acting on the area 301B.
  • the operation of instructing to start recording a video detected by the electronic device may be referred to as an eighth user operation.
  • the eighth user operation may be, for example, a click operation acting on the shooting control 901 in FIG. 12B.
  • the preview image displayed in the area 301A or the area 301B may be referred to as a ninth preview image, such as the image displayed in the area 301A in FIG. 12A.
  • the preview image displayed in the area 301A or the area 301B may be referred to as the tenth preview image, such as the image displayed in the area 301A in FIG. 12B.
  • the following embodiment introduces the multi-channel video framing method provided by the present application. As shown in Figure 15, the method may include:
  • Phase 1 (S101-S105): Turn on "multi-channel recording mode"
  • S101 The electronic device 100 starts a camera application.
  • the electronic device 100 may detect a touch operation (such as a click operation on the icon 220) acting on the icon 220 of the camera as shown in FIG. 3A, and start the camera application in response to the operation.
  • S102 The electronic device 100 detects a user operation that selects the "multi-channel recording mode".
  • the user operation may be a touch operation (for example, a click operation) on the multi-channel recording mode option 302D shown in FIG. 3B or FIG. 3D.
  • the user operation may also be other types of user operations such as voice commands.
  • the electronic device 100 may select the "multi-channel recording mode" by default after starting the camera application.
  • S103 The electronic device 100 activates N cameras, where N is a positive integer.
  • the electronic device may have M cameras, M≥2, N≤M, and M is a positive integer.
  • the N cameras can be a combination of a front camera and a rear camera.
  • the N cameras can also be a combination of any number of cameras in a wide-angle camera, an ultra-wide-angle camera, a telephoto camera, or a front camera. This application does not limit the camera combination mode of the N cameras.
  • the N cameras can be selected by default for the electronic device. For example, the electronic device turns on the two cameras of the front camera and the rear camera by default.
  • the N cameras can also be selected by the user. For example, the user can select which cameras to turn on in the "More" mode option.
  • S104 The electronic device 100 collects images through the N cameras.
  • S105 The electronic device 100 displays a preview interface.
  • the preview interface includes N areas. Part or all of the images collected by the N cameras can be displayed in the N areas.
  • the preview interface includes an area 301A and an area 301B.
  • the area 301A displays part of the image collected by the rear camera
  • the area 301B displays part of the image collected by the front camera.
  • the images displayed in each of the N areas can be referred to as preview images.
  • the preview image displayed in an area can be obtained by cropping all the images collected by the camera corresponding to the area.
  • the preview image displayed in area 301A can be cropped by the electronic device from all the images collected by the rear camera, and the preview image displayed in area 301B can be cropped by the electronic device from all the images collected by the front camera. Specifically, the center position of the preview image displayed in the area 301A may coincide with the center position of all images captured by the rear camera, and the center position of the preview image displayed in the area 301B may coincide with the center position of all images captured by the front camera. At this time, the preview images displayed in the area 301A and the area 301B are obtained by a center cropping method.
  • the cropped area, within all the images collected by the rear camera, from which the preview image displayed in the area 301A is obtained can be the same size as the area 301A.
  • the cropped area, within all the images collected by the front camera, from which the preview image displayed in the area 301B is obtained can be the same size as the area 301B.
  • the preview image displayed in the area corresponding to a camera may also be all the images collected by that camera.
  • the user can reduce the zoom magnification by pinching two fingers in the area 301A, so as to view all the images captured by the rear camera in the area 301A.
  • the operation of pinching two fingers can also be referred to as a pinch-out operation.
  • Stage 2 Adjust the viewing range of a camera in the preview interface
  • S106 The electronic device 100 detects a first user operation in the first area.
  • the first area may be one of the N areas, the first preview image may be displayed in the first area, and the first preview image is obtained by cropping all the images collected by the first camera.
  • the first area may be an area 301A
  • the first preview image may be a preview image displayed in the area 301A
  • the first camera may be a rear camera.
  • the first user operation may be a sliding operation in the area 301A, such as a left sliding operation, a right sliding operation, and so on.
  • the first user operation may also be other types of user operations such as a voice command for the area 301A.
  • S107 The electronic device 100 displays the second preview image in the first area.
  • the second preview image is also obtained by cropping all the images collected by the first camera.
  • the position of the second preview image is different from the position of the first preview image.
  • compared with the first preview image, the center position of the second preview image displayed in area 301A deviates from the center position of the first preview image, that is, it no longer coincides with the center position of all the images collected by the rear camera. In this way, the user can change the viewing range presented by the rear camera in the area 301A through a sliding operation.
  • the second preview image is closer to the right boundary of all the images collected by the first camera than the first preview image.
  • the preview image displayed in the area 301A shown in FIG. 6B is closer to the right boundary of all the images captured by the rear camera than the preview image displayed in the area 301A shown in FIG. 6A.
  • the user can swipe left in the area 301A to see the images closer to the right boundary of all the images collected by the rear camera, for example, to let the scene on the right side of all the images collected by the rear camera appear in the area 301A.
  • the second preview image is closer to the left boundary of all the images collected by the first camera than the first preview image.
  • the preview image displayed in the area 301B shown in FIG. 6F is closer to the left boundary of all the images captured by the front camera than the preview image displayed in the area 301B shown in FIG. 6D.
  • the user can swipe right in the area 301B to see the images closer to the left boundary of all the images collected by the front camera, for example, to let the scene on the left side of all the images collected by the front camera appear in the area 301B.
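The swipe-to-reframe behavior described above can be sketched as moving the crop window in the direction opposite to the swipe, clamped so it never leaves the sensor image (hypothetical helper names; pixel units are assumptions for illustration):

```python
def shift_crop(box, swipe_dx, image_w):
    """Shift a crop box horizontally, opposite to the swipe direction.

    box is (left, top, right, bottom); swipe_dx < 0 is a left swipe,
    which moves the crop toward the right boundary of the full image.
    The result is clamped to stay inside the image width.
    """
    left, top, right, bottom = box
    width = right - left
    new_left = left - swipe_dx  # left swipe (dx < 0) moves the crop right
    new_left = max(0, min(new_left, image_w - width))
    return (new_left, top, new_left + width, bottom)
```

A 300-pixel left swipe moves a (1460, 540, 2540, 2460) crop to (1760, 540, 2840, 2460); a longer swipe simply stops at the right boundary of the sensor image.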
  • the second preview image and the first preview image may be the same size.
  • the center position of the first preview image may coincide with the center position of all the images collected by the first camera.
  • the preview image in the first area changes from the first preview image to the second preview image, but the viewing range of the preview images in the other areas in the preview interface does not change. That is, when the first user operation is detected, the position of the preview image B and the position of the preview image A are the same in all the images collected by another camera (which may be referred to as the second camera) among the N cameras.
  • the preview image A is a preview image displayed in another area (which may be called the second area) before the first user operation occurs
  • the preview image B is the preview image displayed in the second area after the first user operation occurs.
  • the user can individually adjust the viewing range presented by a certain camera in the preview interface without affecting the viewing range presented by other cameras in the preview interface.
  • the user can also adjust the viewing range of other cameras in the preview interface.
  • the electronic device may detect user operations such as sliding left or right in another area (which may be referred to as the second area), and change the preview image displayed in the second area from the preview image C to the preview image D.
  • the position of the preview image D is different from the position of the preview image C. In this way, the user can change the viewing range presented by the second camera in the second area through user operations such as sliding left or right in the second area.
  • S108 The electronic device 100 detects a second user operation.
  • the second user operation is a user operation instructing to start recording a video, such as a click operation on the control 303 shown in FIG. 6A.
  • S109 The electronic device 100 starts to record a video and displays a shooting interface, which also includes the aforementioned N areas.
  • the user can also adjust the viewing range of a camera in the shooting interface through user operations such as sliding left or right.
  • the specific process is the same as the user adjusting the viewing range of the camera in the preview interface.
  • when the electronic device detects a user operation (such as sliding left or right) acting in the first area, then among all the images collected by the first camera, the position of the preview image now displayed in the first area is different from the position of the preview image previously displayed in the first area.
  • the user can also adjust the viewfinder range presented by the camera in the shooting interface through user operations such as sliding left or right.
  • the electronic device may detect user operations such as sliding left or right in the first area of the shooting interface (may be referred to as a fourth user operation), and display the third preview image of the first camera in the first area.
  • the third preview image is obtained by cropping all the images collected by the first camera. In all the images collected by the first camera, the position of the third preview image is different from the position of the second preview image.
  • Stage 4 Finish recording the video and play the video file
  • S110 The electronic device detects a user operation instructing to stop recording a video, for example, a click operation on the control 303 shown in FIG. 6A. This user operation may be referred to as the fifth user operation.
  • S111 The electronic device stops recording the video and generates a video file.
  • each frame of image in the video file includes a preview image displayed in each area.
  • the preview image displayed in each area may be spliced first.
  • S112 The electronic device detects a user operation of opening the video file (which may be referred to as a sixth user operation).
  • S113 The electronic device displays a play interface, and the play interface also includes the aforementioned N areas.
  • the multi-channel video framing method provided by the embodiments of the present application enables the user to separately adjust the framing presented by each working camera in the preview frame during multi-channel shooting, so that the framing of the working cameras does not affect one another.
  • this avoids the problem that when the framing of a certain working camera changes, the framing of the other working cameras also changes accordingly.
  • the multi-channel video framing method provided in the embodiments of the present application may also provide a face tracking function. Specifically, when the electronic device detects that all images collected by the first camera include an image of a first human face, the electronic device may display a fifth preview image in the first area; the fifth preview image is obtained by cropping all the images collected by the first camera and may include the image of the first face. When the electronic device detects that the position of the image of the first face in all the images captured by the first camera has changed, the electronic device displays a sixth preview image in the first area; the sixth preview image is obtained by cropping all the images captured by the first camera and also includes the image of the first face.
  • the position of the image of the first human face in the sixth preview image may be the same as the position of the image of the first human face in the fifth preview image.
  • the image of the first face may be in the central area of the fifth preview image.
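One way to realize this face tracking behavior is to re-center the crop box on the detected face for each frame, clamping at the image boundary (a minimal sketch with hypothetical names; the patent does not prescribe a particular algorithm):

```python
def face_centered_crop(face_cx, face_cy, crop_w, crop_h, image_w, image_h):
    """Crop box that keeps a detected face in the central area of the preview.

    The box is clamped to the image bounds, so near an edge the face may sit
    off-center but still remains inside the displayed preview image.
    """
    left = max(0, min(face_cx - crop_w // 2, image_w - crop_w))
    top = max(0, min(face_cy - crop_h // 2, image_h - crop_h))
    return (left, top, left + crop_w, top + crop_h)
```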
  • the electronic device may activate the second camera.
  • the second camera may be a wide-angle camera or an ultra-wide-angle camera, and its viewing range is larger than that of the first camera.
  • the first face is within the viewing range of the second camera.
  • the electronic device can display the seventh preview image in the first area.
  • when the electronic device detects that the position of the image of the first face in all the images captured by the second camera has changed, the electronic device displays the eighth preview image in the first area.
  • the seventh preview image is obtained by cropping all the images collected by the second camera, and the seventh preview image includes the image of the first human face.
  • the eighth preview image is obtained by cropping all the images collected by the second camera, and the eighth preview image includes the image of the first human face.
  • the position of the image of the first human face in the seventh preview image may be the same as the position of the image of the first human face in the eighth preview image.
  • the image of the first face may be in the central area of the seventh preview image.
  • the face tracking function can be applied to front shooting scenes or rear shooting scenes. That is, the first camera may be a front camera or a rear camera.
  • the multi-channel video framing method provided in the embodiment of the present application may also provide a function of adjusting the framing under zoom.
  • the electronic device may also detect a third user operation.
  • the third user operation can be used to zoom in on the zoom magnification, such as a user operation in which two fingers change from pinching to separating.
  • the electronic device may enlarge the first preview image, and display the enlarged first preview image in the first area.
  • since the size of the first preview image is the same as the size of the first area at 1x magnification, the enlarged first preview image cannot be entirely displayed in the first area; the electronic device can display a partial image of the enlarged first preview image in the first area, and the partial image may be in the central area of the first preview image.
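The zoom behavior above can be modeled as shrinking the crop box around its center by the zoom factor; rendering the smaller crop at the unchanged region size enlarges the preview, so only its central part stays visible (a hypothetical sketch, not the patent's implementation):

```python
def zoom_crop(box, factor):
    """Shrink a crop box around its center by a zoom factor greater than 1.

    Rendering the smaller crop at the original preview-region size is
    equivalent to enlarging the preview image, so only the central part
    of the previous preview remains visible.
    """
    left, top, right, bottom = box
    cx, cy = (left + right) / 2, (top + bottom) / 2
    w, h = (right - left) / factor, (bottom - top) / factor
    return (int(cx - w / 2), int(cy - h / 2), int(cx + w / 2), int(cy + h / 2))
```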
  • when the electronic device detects the first user operation, if the posture of the electronic device has not changed, the second preview image of the first camera is displayed in the first area. That is to say, only when the posture of the electronic device has not changed will the electronic device adjust the viewing range of the camera in the preview interface according to the first user operation.
  • when the first user operation is detected, if the posture of the electronic device has changed, the electronic device may display a fourth preview image of the first camera in the first area; the fourth preview image may be obtained by cropping all the images collected by the first camera, and the center position of the fourth preview image coincides with the center position of all the viewfinder images of the first camera.
  • in this case, the electronic device may not adjust the viewing range of the camera in the preview interface according to the first user operation detected at this time, so that the user can change the optical framing by adjusting the posture of the electronic device.
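Combining the two cases above, a dispatcher might apply a swipe only while the device pose is stable, and otherwise fall back to the centered (fourth preview) crop so the user can reframe optically. Here `pose_delta`, assumed to be a gyroscope-derived rotation magnitude, and the threshold value are illustrative assumptions:

```python
def handle_swipe(box, swipe_dx, image_w, image_h, pose_delta, threshold=0.05):
    """Apply a horizontal swipe to the crop box only if the pose is stable.

    If pose_delta exceeds the threshold, the crop is reset to the centered
    position (the fourth preview image); otherwise it is shifted opposite
    to the swipe and clamped to the image bounds (the second preview image).
    """
    left, top, right, bottom = box
    w, h = right - left, bottom - top
    if pose_delta > threshold:
        cl, ct = (image_w - w) // 2, (image_h - h) // 2
        return (cl, ct, cl + w, ct + h)  # re-centered crop
    new_left = max(0, min(left - swipe_dx, image_w - w))
    return (new_left, top, new_left + w, bottom)  # shifted crop
```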

Abstract

An embodiment of the present application provides a framing method for multi-channel video recording. In the multi-channel recording mode, an electronic device can display multiple images from different cameras in multiple areas of the viewfinder frame, where one area displays the image from one camera. The electronic device can detect a user operation in a certain area, such as a left-slide or right-slide operation, and thereby change the framing presented by the corresponding camera in that area without changing the framing of the other cameras in their respective areas. In this way, during multi-channel recording the user can separately adjust the framing presented by each working camera in the preview frame, so that the framing of the working cameras in the preview frame does not affect one another, avoiding the problem that a change in the framing of one working camera in the preview frame causes the framing of the other working cameras in the preview frame to change as well.

Description

Framing Method for Multi-Channel Video Recording, Graphical User Interface, and Electronic Device

This application claims priority to Chinese patent application No. 202010324919.1, filed with the Chinese Patent Office on April 22, 2020 and entitled "Framing Method for Multi-Channel Video Recording, Graphical User Interface, and Electronic Device", which is incorporated herein by reference in its entirety.

Technical Field

The present invention relates to the field of electronic technology, and in particular to a framing method for multi-channel video recording, a graphical user interface, and an electronic device.

Background

At present, portable electronic devices (such as mobile phones and tablet computers) are generally equipped with multiple cameras, such as a front camera, a wide-angle camera, and a telephoto camera. To provide a richer shooting and creative experience, more and more electronic devices can support simultaneous shooting with multiple cameras.

Summary

The purpose of the present application is to provide a framing method for multi-channel video recording, a graphical user interface (GUI), and an electronic device, which enable the user to separately adjust the framing of each working camera in the preview frame during multi-channel shooting, so that the framing of the working cameras in the preview frame does not affect one another, avoiding the problem that a change in the framing of one working camera in the preview frame causes the framing of the other working cameras in the preview frame to change as well.

The above and other objects are achieved by the features of the independent claims. Further implementations are embodied in the dependent claims, the description, and the drawings.
In a first aspect, a framing method for multi-channel video recording is provided. The method is applied to an electronic device having a display screen and M cameras, where M≥2 and M is a positive integer, and includes: the electronic device turns on N cameras, N≤M, N being a positive integer; the electronic device collects images through the N cameras; the electronic device displays a preview interface and part or all of the images collected by each of the N cameras, the preview interface including N areas in which the part or all of the images collected by the N cameras are respectively displayed; the electronic device detects a first user operation in a first area, the first area being one of the N areas, a first preview image being displayed in the first area, the first preview image being obtained by cropping all the images collected by a first camera; the electronic device displays a second preview image in the first area, the second preview image also being obtained by cropping all the images collected by the first camera, where, in all the images collected by the first camera, the position of the second preview image is different from the position of the first preview image; the electronic device detects a second user operation; the electronic device starts to record a video and displays a shooting interface, the shooting interface including the N areas.

By implementing the method provided in the first aspect, during the preview process of multi-channel recording, the user can adjust the framing of each working camera in the preview frame through user operations, so that the framing of the working cameras in the preview frame does not affect one another.

With reference to the first aspect, in a possible implementation, the first camera may be a rear camera or a front camera. Specifically, the center position of the first preview image may coincide with the center position of all the images collected by the first camera. In this case, the first preview image is obtained by center cropping.

With reference to the first aspect, in a possible implementation, at 1x magnification of the first camera, the size of the first preview image may be the same as the size of the first area.

With reference to the first aspect, in a possible implementation, the first user operation includes a sliding operation, such as a left-slide operation or a right-slide operation. In all the images collected by the first camera, the direction from the center position of the first preview image toward the center position of the second preview image is opposite to the sliding direction of the sliding operation. In this way, the user can change the viewing range presented by the first camera in the first area through a sliding operation.

Specifically, if the first user operation is a left-slide operation, the second preview image is closer to the right boundary of all the images collected by the first camera than the first preview image.

Specifically, if the first user operation is a right-slide operation, the second preview image is closer to the left boundary of all the images collected by the first camera than the first preview image.

With reference to the first aspect, in a possible implementation, the center position of the first preview image coincides with the center position of all the images collected by the first camera. That is, the electronic device may crop all the images collected by the first camera in a center cropping manner to obtain the first preview image.

With reference to the first aspect, in a possible implementation, the second preview image and the first preview image may be the same size. That is, before and after the user adjusts the framing of the camera through the sliding operation, the electronic device does not change the size of the cropped area within all the images collected by the first camera.

With reference to the first aspect, in a possible implementation, before detecting the first user operation, the electronic device further detects a third user operation; the electronic device enlarges the first preview image and displays the enlarged first preview image in the first area. Here, the first user operation may be a sliding operation, and the third user operation may be a two-finger zoom-in operation. In this way, in a zoom scenario the electronic device can individually adjust the viewing range presented by a certain camera in the preview interface without affecting the viewing ranges presented by the other cameras in the preview interface.

With reference to the first aspect, in a possible implementation, the second user operation is a user operation instructing to start recording a video, such as a click operation on the shooting control.

With reference to the first aspect, in a possible implementation, the electronic device may further detect a fourth user operation in the first area of the shooting interface; the electronic device displays a third preview image of the first camera in the first area of the shooting interface, the third preview image being obtained by cropping all the images collected by the first camera, where, in all the images collected by the first camera, the position of the third preview image is different from the position of the second preview image.

In this way, after adjusting the framing of a certain camera in the preview interface, the user can continue to adjust, through user operations, the viewing range presented by that camera in the shooting interface.

Specifically, the fourth operation may be a sliding operation.
With reference to the first aspect, in a possible implementation, when the electronic device detects the first user operation, if the posture of the electronic device has not changed, the second preview image of the first camera is displayed in the first area; if the posture of the electronic device has changed when the first user operation is detected, the electronic device displays a fourth preview image of the first camera in the first area, the fourth preview image being obtained by cropping all the images collected by the first camera, and the center position of the fourth preview image coinciding with the center position of all the viewfinder images of the first camera.

That is to say, only when the posture of the electronic device has not changed will the electronic device adjust the viewing range of the camera in the preview interface according to the first user operation. When the first user operation is detected, if the posture of the electronic device has changed, the electronic device may not adjust the viewing range of the camera in the preview interface according to that operation, so that the user can change the optical framing by adjusting the posture of the electronic device.

With reference to the first aspect, in a possible implementation, the electronic device may detect that all the images collected by the first camera include an image of a first face; the electronic device displays a fifth preview image in the first area, the fifth preview image being obtained by cropping all the images collected by the first camera and including the image of the first face; the electronic device detects that the position of the image of the first face in all the images collected by the first camera has changed; the electronic device displays a sixth preview image in the first area, the sixth preview image being obtained by cropping all the images collected by the first camera and including the image of the first face. That is to say, the framing method for multi-channel video recording provided by the embodiments of the present application can also provide a face tracking function, so that a certain area in the preview interface always displays a preview image containing the face.

In some embodiments, the position of the image of the first face in the sixth preview image is the same as the position of the image of the first face in the fifth preview image.

In some embodiments, the image of the first face is in the central area of the fifth preview image.

With reference to the first aspect, in a possible implementation, the electronic device may further detect that all the images collected by the first camera include the image of the first face, and start a second camera, the viewing range of the second camera being larger than that of the first camera, the first face being within the viewing range of the second camera; the electronic device displays a seventh preview image in the first area, the seventh preview image being obtained by cropping all the images collected by the second camera and including the image of the first face; the electronic device detects that the position of the image of the first face in all the images collected by the second camera has changed; the electronic device displays an eighth preview image in the first area, the eighth preview image being obtained by cropping all the images collected by the second camera and including the image of the first face. In this way, the viewing range corresponding to a certain preview area can be expanded during face tracking.

In some embodiments, the position of the image of the first face in the seventh preview image is the same as the position of the image of the first face in the eighth preview image.

In some embodiments, the image of the first face is in the central area of the seventh preview image.

With reference to the first aspect, in a possible implementation, the first camera is a front camera or a rear camera. That is to say, functions such as the face tracking function and sliding to adjust the viewing range of the camera provided by the embodiments of the present application are applicable to both front shooting scenes and rear shooting scenes.

With reference to the first aspect, in a possible implementation, the electronic device may further detect a fifth user operation; the electronic device stops recording the video and generates a video file; the electronic device detects a sixth user operation on the video file; the electronic device displays a play interface, the play interface including the N areas. In this way, after adjusting the preview images of the areas according to their own needs, users can save the preview images they want, obtaining a more flexible and convenient recording experience.

The fifth user operation is a user operation instructing to stop recording the video, for example, a click operation on the shooting control.
In a second aspect, an embodiment of the present application provides a framing method for multi-channel video recording, applied to an electronic device having a display screen and M cameras, where M≥2 and M is a positive integer. The method includes: the electronic device turns on N cameras, N≤M, N being a positive integer; the electronic device collects images through the N cameras; the electronic device displays a preview interface and part or all of the images collected by each of the N cameras, the preview interface including N areas in which the part or all of the images collected by the N cameras are respectively displayed; the electronic device detects a seventh user operation in the first area; the electronic device detects that the posture of the electronic device has changed; the electronic device displays a ninth preview image in the first area, the viewing range presented by the ninth preview image being the same as the viewing range presented by a tenth preview image, the tenth preview image being the image displayed in the first area before the posture of the electronic device changed, the ninth preview image being obtained by cropping all the images collected by the first camera after the posture of the electronic device changed, and the tenth preview image being obtained by cropping all the images collected by the first camera before the posture of the electronic device changed; the electronic device detects an eighth user operation; the electronic device starts to record a video and displays a shooting interface, the shooting interface including the N areas.

The seventh user operation may be a user operation selecting the first area, such as a double-click operation or a long-press operation acting on the first area.

The method provided in the second aspect can keep the viewing range of the selected preview area unaffected when the posture of the electronic device changes.

In a third aspect, an embodiment of the present application provides a framing method for multi-channel photographing, applied to an electronic device having a display screen and M cameras, where M≥2 and M is a positive integer. The method includes: the electronic device turns on N cameras, N≤M, N being a positive integer; the electronic device collects images through the N cameras; the electronic device displays a preview interface and part or all of the images collected by each of the N cameras, the preview interface including N areas in which the part or all of the images collected by the N cameras are respectively displayed; the electronic device detects a first user operation in a first area, the first area being one of the N areas, a first preview image being displayed in the first area, the first preview image being obtained by cropping all the images collected by the first camera; the electronic device displays a second preview image in the first area, the second preview image also being obtained by cropping all the images collected by the first camera, where, in all the images collected by the first camera, the position of the second preview image is different from the position of the first preview image.

By implementing the method provided in the third aspect, during the preview process of multi-channel photographing, the user can adjust the framing of each working camera in the preview frame through user operations, so that the framing of the working cameras in the preview frame does not affect one another.
In a third aspect, an electronic device is further provided. The electronic device may include M cameras, a display screen, a touch sensor, a wireless communication module, a memory, and one or more processors, the one or more processors being configured to execute one or more computer programs stored in the memory, where M≥2 and M is a positive integer:

the N cameras are configured to collect images;

the display screen may be configured to display a preview interface and part or all of the images collected by each of the N cameras, the preview interface including N areas in which the part or all of the images collected by the N cameras are respectively displayed;

the touch sensor may be configured to detect a first user operation in a first area, the first area being one of the N areas, a first preview image being displayed in the first area, the first preview image being obtained by cropping all the images collected by the first camera;

the display screen may be configured to display, in response to the first user operation, a second preview image in the first area, the second preview image also being obtained by cropping all the images collected by the first camera, where, in all the images collected by the first camera, the position of the second preview image is different from the position of the first preview image;

the touch sensor may be further configured to detect a second user operation;

the N cameras may be configured to start recording a video in response to the second user operation, and the display screen may be configured to display a shooting interface in response to the second user operation, the shooting interface including the N areas.

For specific implementations of the components included in the electronic device of the third aspect, reference may be made to the method described in the first aspect, and details are not repeated here.

In a fourth aspect, an electronic device is further provided. The electronic device may include an apparatus that can implement any possible implementation of the first aspect or any possible implementation of the second aspect.

In a fifth aspect, a video recording apparatus is further provided. The apparatus has the function of implementing the behavior of the electronic device in the above methods in practice. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above function.

In a sixth aspect, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the computer device is caused to implement any possible implementation of the first aspect or any possible implementation of the second aspect.

In a seventh aspect, a computer program product containing instructions is provided, wherein when the computer program product runs on an electronic device, the electronic device is caused to execute any possible implementation of the first aspect or any possible implementation of the second aspect.

In an eighth aspect, a computer-readable storage medium is provided, including instructions, wherein when the instructions run on an electronic device, the electronic device is caused to execute any possible implementation of the first aspect or any possible implementation of the second aspect.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present application more clearly, the drawings used in the embodiments of the present application are described below.

Fig. 1 is a schematic diagram of the structure of an electronic device according to an embodiment;

Fig. 2A is a schematic diagram of a user interface for an application menu on an electronic device according to an embodiment;

Fig. 2B is a schematic diagram of the rear cameras on an electronic device according to an embodiment;

Figs. 3A-3D are schematic diagrams of dual-channel recording scenarios involved in the present application;

Fig. 4A is a schematic diagram of the working principle of dual-channel recording;

Fig. 4B is a schematic diagram of image cropping in existing dual-channel recording;

Fig. 5 is a schematic diagram of a multi-channel recording scenario;

Figs. 6A-6B and 6D-6E are UI schematic diagrams of adjusting the preview images displayed in each area during the preview process of multi-channel recording according to an embodiment;

Figs. 6C and 6F are schematic diagrams of cropping images when the preview images displayed in each area are adjusted during the preview process of multi-channel recording according to an embodiment;

Figs. 7A-7B and 7D-7E are UI schematic diagrams of adjusting the preview images displayed in each area during the preview process of multi-channel recording according to another embodiment;

Figs. 7C and 7F are schematic diagrams of cropping images when the preview images displayed in each area are adjusted during the preview process of multi-channel recording according to another embodiment;

Figs. 8A and 8B are schematic diagrams of cropping images when the preview images displayed in each area are adjusted during the preview process of multi-channel recording according to another embodiment;

Figs. 9A-9C are UI schematic diagrams of adjusting the preview images displayed in each area during the preview process of multi-channel recording according to another embodiment;

Figs. 9D and 9E are schematic diagrams of cropping images when the preview images displayed in each area are adjusted during the preview process of multi-channel recording according to another embodiment;

Figs. 10A and 10B are UI schematic diagrams of prompting the user about the position of the preview image displayed in each area during the preview process of multi-channel recording according to an embodiment;

Figs. 11A-11F are UI schematic diagrams of adjusting the preview images displayed in each area during the recording process of multi-channel recording according to an embodiment;

Figs. 12A-12B are UI schematic diagrams of moving the electronic device to adjust the preview images displayed in each area during the preview process of multi-channel recording according to an embodiment;

Figs. 13A-13B are UI schematic diagrams of adjusting the preview images displayed in each area during the preview process of multi-channel photographing according to an embodiment;

Fig. 14 is a schematic diagram of the cooperation of some software and hardware of an electronic device according to an embodiment;

Fig. 15 is a schematic flowchart of a framing method for multi-channel video recording according to an embodiment.
Detailed Description

The terms used in the following embodiments of the present application are only for the purpose of describing specific embodiments, and are not intended to limit the present application. As used in the specification and the appended claims of the present application, the singular expressions "a", "an", "the", "the above", "said", and "this" are intended to also include plural expressions, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" used in the present application refers to and includes any or all possible combinations of one or more of the listed items.

The present application provides a framing method for multi-channel video recording, which can be applied to an electronic device including multiple cameras. The electronic device can use multiple cameras to take photos or record videos at the same time, obtaining multiple channels of images and richer picture information. Moreover, the electronic device can also support the user in separately adjusting, during multi-channel photographing or recording, the framing of each working camera in its corresponding preview area, so that the framing of the working cameras in their respective preview areas does not affect one another, avoiding the problem that a change in the framing of one working camera in its preview area causes the framing of the other working cameras in their preview areas to change as well.

The viewing range (also called the field of view, FOV) of a camera is determined by the design of the camera's optical system; for example, a wide-angle camera has a larger viewing range. The user can adjust the framing of a camera by moving the electronic device. In the embodiments of the present application, the framing of a camera in its corresponding preview area can be adjusted by a user operation (such as a left or right sliding operation) acting in that preview area. The framing of a camera in its corresponding preview area is the content displayed in that preview area.

The preview area corresponding to a camera is used to display part or all of the images from that camera. The preview image displayed by a camera in its corresponding preview area is specifically the image in a certain cropped region of the image collected by that camera; that is, the preview image displayed in the preview area is obtained by cropping the image collected by the camera.

In the embodiments of the present application, multi-channel shooting may include multi-channel recording and multi-channel photographing. The electronic device may provide two multi-channel shooting modes: a multi-channel recording mode and a multi-channel photographing mode.

The multi-channel recording mode may mean that multiple cameras in the electronic device, such as the front camera and the rear camera, can record multiple channels of video at the same time. In the multi-channel recording mode, during recording preview, during recording, or during playback of a recorded video, the display screen can simultaneously display multiple images from these cameras on the same interface. These images can be displayed spliced together on the same interface, or displayed in a picture-in-picture manner. The display manner will be described in detail in subsequent embodiments. In addition, in the multi-channel recording mode, these images can be saved as multiple videos in the gallery (also called an album), or as a composite video formed by splicing these videos.

Here, "recording" may also be called "recording a video". In the following embodiments of the present application, "recording" and "recording a video" have the same meaning.

The multi-channel photographing mode may mean that multiple cameras in the electronic device, such as the front camera and the rear camera, can shoot multiple pictures at the same time. In the multi-channel photographing mode, during the photographing preview, the display screen can simultaneously display multiple frames of images from these cameras in the viewfinder frame (also called the preview frame). These frames can be displayed spliced together in the viewfinder frame, or displayed in a picture-in-picture manner. In addition, in the multi-channel photographing mode, these frames can be saved as multiple pictures in the gallery (also called an album), or as a composite image formed by splicing these frames.

In the embodiments of the present application, the image from a camera displayed in the preview frame is obtained by cropping the image collected by that camera. For the cropping manner, refer to the description of subsequent embodiments.

"Multi-channel photographing mode" and "multi-channel recording mode" are only names used in the embodiments of the present application; their meanings have already been described in the embodiments of the present application, and the names themselves do not constitute any limitation on the embodiments.
首先,介绍本申请实施例提供的电子设备。
该电子设备以是手机、平板电脑、可穿戴设备、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personaldigital assistant,PDA)或专门的照相机(例如单反相机、卡片式相机)等,本申请对该电子设备的具体类型不作任何限制。
图1示例性示出了该电子设备的结构。如图1所示,电子设备100可具有多个摄像头193,例如前置摄像头、广角摄像头、超广角摄像头、长焦摄像头等。此外,电子设备100还可包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。
其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本申请实施例示意的结构并不构成对电子设备100的具体限定。在本 申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
在一些实施例中,控制器或GPU等处理器110,可以用于在多路拍摄场景下,将多个摄像头193同时采集到的多帧图像,通过拼接或局部叠加等方式合成显示于取景框中的预览图像,以便电子设备100可以同时显示这多个摄像头193采集到的图像。
在另一些实施例中,控制器或GPU等处理器110,还可以用于在多路拍摄场景下,对每个摄像头193采集到的图像进行防抖处理后,再将多个摄像头193对应的防抖处理后的图像进行合成。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了***的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部 存储器,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。
在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测电子设备100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。
气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当电子设备100是翻盖机时,电子设备100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。电子设备100可以利用接近光传感器180G检测用户手持电子设备 100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备100对电池142加热,以避免低温导致电子设备100异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备100对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过***SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持一个或多个SIM卡接口。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时***多张卡。多张卡的类型可以相同,也可以不同。SIM卡接口195也 可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中,不能和电子设备100分离。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。不限于集成于处理器110中,ISP也可以设置在摄像头193中。
在本申请实施例中,摄像头193的数量可以为M个,M≥2,M为正整数。电子设备100在多路拍摄中开启的摄像头的数量可以为N,N≤M,N为正整数。电子设备100在多路拍摄中开启的摄像头,也可以称为工作摄像头。
摄像头193包括镜头和感光元件(又可称为图像传感器),用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号,如标准的RGB,YUV等格式的图像信号。
摄像头193的硬件配置以及物理位置可以不同,因此,不同摄像头采集到的图像的大小、范围、内容或清晰度等可能不同。
摄像头193的出图尺寸可以不同,也可以相同。摄像头的出图尺寸是指该摄像头采集到的图像的长度与宽度。该图像的长度和宽度均可以用像素数来衡量。摄像头的出图尺寸也可以被叫做图像大小、图像尺寸、像素尺寸或图像分辨率。常见的摄像头的出图比例可包括:4:3、16:9或3:2等等。出图比例是指摄像头所采集图像在长度上和宽度上的像素数的大致比例。
摄像头193可以对应同一焦段,也可以对应不同的焦段。该焦段可以包括但不限于:焦长小于预设值1(例如20mm)的第一焦段;焦长大于或者等于预设值1,且小于或者等于预设值2(例如50mm)的第二焦段;焦长大于预设值2的第三焦段。对应于第一焦段的摄像头可以被称为超广角摄像头,对应第二焦段的摄像头可以被称为广角摄像头,对应于第三焦段的摄像头可以被称为长焦摄像头。摄像头对应的焦段越大,该摄像头的视场角(field of view,FOV)越小。视场角是指光学系统所能够成像的角度范围。
摄像头193可以设置于电子设备的两面。和电子设备的显示屏194位于同一平面的摄像头可以被称为前置摄像头,位于电子设备的后盖所在平面的摄像头可以被称为后置摄像头。前置摄像头可用于采集面对显示屏194的拍摄者自己的图像,后置摄像头可用于采集拍摄者所面对的拍摄对象(如人物、风景等)的图像。
在一些实施例中,摄像头193可以用于采集深度数据。例如,摄像头193可以具有飞行时间(time of flight,TOF)3D感测模块或结构光(structured light)3D感测模块,用于获取深度信息。用于采集深度数据的摄像头可以为前置摄像头,也可为后置摄像头。
视频编解码器用于对数字图像压缩或解压缩。电子设备100可以支持一种或多种图像编解码器。这样,电子设备100可以打开或保存多种编码格式的图片或视频。
电子设备100可以通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括一个或多个显示屏194。
在一些实施例中,在多路拍摄场景下,显示屏194可以通过拼接或画中画等方式对来自多个摄像头193多路图像进行显示,以使得来自该多个摄像头193的多路图像可以同时呈现给用户。
在一些实施例中,在多路拍摄模式下,处理器110(例如控制器或GPU)可以对来自多个摄像头193的多帧图像进行合成。例如,将来自多个摄像头193的多路视频流合并为一路视频流,处理器110中的视频编码器可以对合成的一路视频流数据进行编码,从而生成一个视频文件。这样,该视频文件中的每一帧图像可以包含来自多个摄像头193的多个图像。在播放该视频文件的某一帧图像时,显示屏194可以显示来自多个摄像头193的多路图像,以为用户展示同一时刻或同一场景下,不同范围、不同清晰度或不同细节信息的多个图像画面。
在一些实施例中,在多路拍摄模式下,处理器110可以分别对来自不同摄像头193的图像帧进行关联,以便在播放已拍摄的图片或视频时,显示屏194可以将相关联的图像帧同时显示在取景框中。该种情况下,不同摄像头193同时录制的视频可以分别存储为不同的视频,不同摄像头193同时录制的图片可以分别存储为不同的图片。
在一些实施例中,在多路录像模式下,多个摄像头193可以采用相同的帧率分别采集图像,即多个摄像头193在相同时间内采集到的图像帧的数量相同。来自不同摄像头193的视频可以分别存储为不同的视频文件,该不同视频文件之间相互关联。该视频文件中按照采集图像帧的先后顺序来存储图像帧,该不同视频文件中包括相同数量的图像帧。在播放已录制的视频时,显示屏194可以根据预设的或用户指示的布局方式,按照相关联的视频文件中包括的图像帧的先后顺序进行显示,从而将不同视频文件中同一顺序对应的多帧图像显示在同一界面上。
在一些实施例中,在多路录像模式下,多个摄像头193可以采用相同的帧率分别采集图像,即多个摄像头193在相同时间内采集到的图像帧的数量相同。处理器110可以分别为来自不同摄像头193的每一帧图像打上时间戳,以便在播放已录制的视频时,显示屏194可以根据时间戳,同时将来自多个摄像头193的多帧图像显示在同一界面上。
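上述按帧序号(或时间戳)将不同摄像头的图像帧对齐、以便同屏显示的做法,可用如下示意性代码表示(仅为示意,函数名为假设,并非本申请实施例的限定实现):

```python
def align_frames(streams):
    # streams: N 路图像帧列表, 各路在相同时间内采集到的帧数相同
    # 按相同序号将各路的第 i 帧组合为同一界面上显示的一组帧
    return list(zip(*streams))
```

例如两路各两帧时,`align_frames([["a0", "a1"], ["b0", "b1"]])` 将第0帧与第0帧、第1帧与第1帧配对,对应于播放时将不同视频文件中同一顺序的多帧图像显示在同一界面上。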
其中,电子设备在预览框中显示的来自摄像头的图像,是通过裁剪该摄像头采集的图 像得来。电子设备裁剪摄像头采集的图像的方式,可参考后续实施例的描述。
为使用方便,电子设备通常在用户的手持模式下进行拍摄,而用户手持模式下通常会使得拍摄获得的画面发生抖动。在一些实施例中,在多路拍摄模式下,处理器110可以对不同摄像头193采集到的图像帧分别进行防抖处理。而后,显示屏194再根据防抖处理后的图像进行显示。
下面介绍电子设备100上的用于应用程序菜单的示例性用户界面。
图2A示例性示出了电子设备100上的用于应用程序菜单的示例性用户界面21。如图2A所示,电子设备100可以配置有多个摄像头193,这多个摄像头193可包括前置摄像头和后置摄像头。其中,前置摄像头也可以是多个,例如前置摄像头193-1、前置摄像头193-2。如图2A所示,前置摄像头193-1、前置摄像头193-2可设置于电子设备100的顶端,如电子设备100的“刘海”位置(即图2A中示出的区域AA)。可以知道,区域AA中除了包括摄像头193之外,还可以包括照明器197(未在图1中示出)、扬声器170A、接近光传感器180G、环境光传感器180L等。在一些实施例中,如图2B所示,电子设备100的背面也可以配置有后置摄像头193,以及照明器197。后置摄像头193可以是多个,例如后置的广角摄像头193-3、后置的超广角摄像头193-4、后置的长焦摄像头193-5。
如图2A所示,用户界面21可包括:状态栏201,具有常用应用程序图标的托盘223,日历指示符203,天气指示符205,导航栏225,以及其他应用程序图标。其中:
状态栏201可包括:移动通信信号(又可称为蜂窝信号)的一个或多个信号强度指示符201-1、移动通信信号的运营商的指示符201-2、时间指示符201-3、电池状态指示符201-4等。
日历指示符203可用于指示当前时间,例如日期、星期几、时分信息等。
天气指示符205可用于指示天气类型,例如多云转晴、小雨等,还可以用于指示气温等信息。
具有常用应用程序图标的托盘223可展示:电话图标223-1、短消息图标223-2、联系人图标221-4等。
导航栏225可包括:返回按键225-1、主界面(Home screen)按键225-3、呼出任务历史按键225-5等系统导航键。当检测到用户点击返回按键225-1时,电子设备100可显示当前页面的上一个页面。当检测到用户点击主界面按键225-3时,电子设备100可显示主界面。当检测到用户点击呼出任务历史按键225-5时,电子设备100可显示用户最近打开的任务。各导航键的命名还可以为其他,本申请对此不做限制。不限于虚拟按键,导航栏225中的各导航键也可以实现为物理按键。
其他应用程序图标可例如:微信(Wechat)的图标211、QQ的图标212、推特(Twitter)的图标213、脸书(Facebook)的图标214、邮箱的图标215、云共享的图标216、备忘录的图标217、设置的图标218、图库的图标219、相机的图标220。用户界面21还可包括页面指示符221。其他应用程序图标可分布在多个页面,页面指示符221可用于指示用户当前浏览的是哪一个页面中的应用程序。用户可以左右滑动其他应用程序图标的区域,来浏览其他页面中的应用程序图标。当检测到用户点击这些应用程序图标时,电子设备100可 显示该应用程序的用户界面。
在一些实施例中,图2A示例性所示的用户界面21可以为主界面(Home screen)。
在其他一些实施例中,电子设备100还可以包括主屏幕键。该主屏幕键可以是实体按键,也可以是虚拟按键(如按键225-3)。该主屏幕键可用于接收用户的指令,将当前显示的UI返回到主界面,这样可以方便用户随时查看主屏幕。
可以理解的是,图2A仅仅示例性示出了电子设备100上的用户界面,不应构成对本申请实施例的限定。
下面描述本申请涉及的一种典型拍摄场景:双路录像场景。
如图3A所示,电子设备可以检测到作用于相机的图标220的触控操作(如在图标220上的点击操作),响应于该操作,可以显示图3B示例性所示的用户界面31。用户界面31可以是“相机”应用程序的默认拍照模式的用户界面,可用于用户通过默认后置摄像头进行拍照。“相机”是智能手机、平板电脑等电子设备上的一款图像拍摄的应用程序,本申请对该应用程序的名称不做限制。也即是说,用户可以点击图标220来打开“相机”的用户界面31。不限于此,用户还可以在其他应用程序中打开用户界面31,例如用户在“微信”中点击拍摄控件来打开用户界面31。“微信”是一款社交类应用程序,可支持用户向他人分享所拍摄的照片等。
图3B示例性示出了智能手机等电子设备上的“相机”应用程序的一个用户界面31。如图3B所示,用户界面31可包括:区域301、拍摄模式列表302、控件303、控件304及控件305。其中:
区域301可以称为预览框301或取景框301。预览框301可用于显示摄像头193实时采集的图像。电子设备可以实时刷新其中的显示内容,以便于用户预览摄像头193当前的采集的图像。
拍摄模式列表302中可以显示有一个或多个拍摄模式选项。这一个或多个拍摄模式选项可以包括:拍照模式选项302A、录像模式选项302B、多路拍照模式选项302C、多路录像模式选项302D以及更多选项302E。这一个或多个拍摄模式选项在界面上可以表现为文字信息,例如“拍照”、“录像”、“多路拍照”、“多路录像”、“更多”。不限于此,这一个或多个拍摄模式选项在界面上还可以表现为图标或者其他形式的交互元素(interactive element,IE)。
控件303可用于监听触发拍摄(拍照或录像)的用户操作。电子设备可以检测到作用于控件303的用户操作(如在控件303上的点击操作),响应于该操作,电子设备100可以将预览框301中的图像保存为“图库”中的图片。当用户切换到录像模式时,控件303可以更改为控件901,电子设备可以检测到作用于控件901的用户操作(如在控件901上的点击操作),响应于该操作,电子设备100可以将预览框301中的图像保存为“图库”中的视频。这里,“图库”是智能手机、平板电脑等电子设备上的一款图片管理的应用程序,又可以称为“相册”,本实施例对该应用程序的名称不做限制。“图库”可以支持用户对存储于电子设备上的图片进行各种操作,例如浏览、编辑、删除、选择等操作。另外,电子设备100还可以在控件304中显示所保存的图像的缩略图。也即是说,用户可以点击控件303或控件901来触发拍摄。其中,控件303、控件901可以是按钮或者其他形式的控件。本申请中,可以将控件303称为拍照控件,将控件901称为录像控件。控件303和控件901可以被统称为拍摄控件。
控件305可用于监听触发翻转摄像头的用户操作。电子设备100可以检测到作用于控件305的用户操作(如在控件305上的点击操作),响应于该操作,电子设备100可以翻转摄像头,例如将后置摄像头切换为前置摄像头。此时,如图3C所示,预览框301中显示前置摄像头采集的图像。
电子设备100可以检测到作用于拍摄模式选项上的用户操作,该用户操作可用于选择拍摄模式,响应该操作,电子设备100可以开启用户选择的拍摄模式。特别的,当该用户操作作用于更多拍摄模式选项302E时,电子设备100可以进一步显示更多的其他拍摄模式选项,如慢动作拍摄模式选项等等,可以向用户展示更丰富的摄像功能。不限于图3B所示,拍摄模式列表302中可以不显示更多拍摄模式选项302E,用户可以通过在拍摄模式列表302中向左/右滑动来浏览其他拍摄模式选项。
可以看出,用户界面31可向用户展示“相机”所提供的多种摄像功能(模式),用户可以通过点击拍摄模式选项来选择开启相应的拍摄模式。
举例说明,当检测到选择多路录像模式302D的用户操作(如点击操作)时,电子设备100可以显示图3D示例性所示的用户界面,其中,预览框301中同时显示有来自前置摄像头和后置摄像头的图像。在一些实施例中,电子设备100可以在启动“相机”后默认开启多路录像模式。不限于此,电子设备100还可以通过其他方式开启多路录像模式,例如电子设备100还可以根据用户的语音指令开启多路录像模式,本申请实施例对此不作限制。
可以看出,和拍照模式或录像模式相比,多路录像模式下的预览框301中同时显示有来自多个摄像头的图像。预览框301包括两个预览区域:301A和301B,301A中显示来自后置摄像头的图像,301B中显示来自前置摄像头的图像。
以双路录像为例,下面结合图4A说明双路录像的原理。如图4A所示,假设参与双路录像的前置摄像头和后置摄像头都按照16:9的比例出帧(出帧规格与普通的拍照模式一致),ISP将摄像头输出的图像帧处理为标准格式(例如YUV)的图像,并将后置摄像头输出的图像帧裁剪为区域301A所需要的比例(例如10.5:9),将前置摄像头输出的图像帧裁剪为区域301B所需要的比例(例如9:9)。之后,ISP将输出的图像传输至HAL层,由HAL层对其做电子图像防抖(electronic image stabilization,EIS)处理后,由图像处理模块对这两路图像进行拼接。之后,显示屏可以显示该拼接的图像。其中,图像处理模块可包括电子设备100中的图像处理器、视频编解码器、数字信号处理器等等。
在此过程中,显示屏还可以监听变焦事件,并将变焦倍数传递给ISP和对应的摄像头。显示屏还可以监听切换摄像头的事件,并将该事件传递给对应的摄像头。
参考图4B,图4B示出了一种ISP按照居中裁剪的方式对摄像头输出的图像做处理的示意图。如图4B所示,电子设备的后置摄像头采集到图像a。电子设备对图像a进行裁剪以得到裁剪区域中的图像a1。该裁剪区域以图像a的中心点O点为中心,且和区域301A的比例、尺寸均相同,即该裁剪区域为图中虚线框所在区域。图像a1显示于区域301A中,即图像a1为区域301A中的预览图像。
类似的,电子设备的前置摄像头采集到图像b。电子设备对图像b进行裁剪以得到裁剪区域中的图像b1。该裁剪区域以图像b的中心点O点为中心,且和区域301B的比例、尺寸均相同,即该裁剪区域为图中虚线框所在区域。图像b1显示于区域301B中,即图像b1为区域301B中的预览图像。
用户使用电子设备进行多路拍摄时,可以通过移动电子设备来改变其中一个摄像头在其对应的预览区域中的取景。但是,此时,移动电子设备,即电子设备姿态的变化,会造成其他摄像头在对应的预览区域的取景也随之变化,这种变化可能是用户不需要或意想不到的。移动电子设备来改变其中一个摄像头在其对应的预览区域中的取景时,不能保证其他摄像头在对应的预览区域的取景不变。也就是说,用户无法兼顾多路拍摄时各个摄像头在对应预览区域的取景。
不限于上述各个UI实施例示例性示出的双路拍摄,电子设备还可以进入更多路的拍摄模式中。
参考图5,图5示例性示出了电子设备100开启“多路录像模式”后,在4路录像时所显示的用户界面。如图5所示,该用户界面的预览框可以分为四个区域:区域301A-区域301D。各个区域可用于显示来自不同摄像头的图像。例如,区域301A可用于显示来自后置的广角摄像头193-3的图像,区域301B可用于显示来自后置的超广角摄像头193-4的图像,区域301C可用于显示来自后置的长焦摄像头193-5的图像,区域301D可用于显示来自前置摄像头193-1的图像。
基于上述图像拍摄场景,下面以双路拍摄为例,介绍电子设备100上实现的用户界面(user interface,UI)的一些实施例。
首先,描述“多路录像模式”的用户界面。
在一些实施例中,电子设备100可以在启动“相机”后默认自动进入“多路录像模式”。在另一些实施例中,电子设备100启动“相机”后,若未进入“多路录像模式”,则可以响应于检测到的用户操作进入“多路录像模式”。示例性地,电子设备100可以检测到作用于图3B或图3C所示用户界面31中的多路录像模式选项302D上的触控操作(例如点击操作),并响应于该操作进入“多路录像模式”。不限于此,电子设备100还可以通过其他方式进入“多路录像模式”,例如电子设备100还可以根据用户的语音指令进入“多路录像模式”等,本申请实施例对此不作限制。
图6A示例性示出了电子设备100进入“多路录像模式”后所显示的预览界面41。如图6A所示,预览界面41包括:预览框301、拍摄模式列表302、控件901、控件304、控件305。其中:拍摄模式列表302、控件304、控件305可以参考用户界面31中的相关描述,这里不再赘述。如图6A所示,多路录像模式选项302D被选定。其中,控件901可用于监听触发录像的用户操作。
电子设备100进入“多路录像模式”后,可以使用N(例如2)个摄像头采集图像,并在显示屏中显示预览界面。该预览界面中显示有N个摄像头各自的部分或全部图像。
电子设备100进入“多路录像模式”后,预览框301可以包括N个区域,一个区域对 应于N个摄像头中的一个摄像头。不同区域分别用于显示来自对应摄像头的部分或全部图像。
预览框301中包括的各个区域在预览框301中的位置、各个区域在预览框301中所占用的尺寸/大小、各个区域分别对应的摄像头,可以被统称为多路录像时的布局方式。在一些实施例中,预览框301中包括的各个区域互不重叠且共同拼接为该预览框301,即电子设备100可以以拼接的方式显示来自N个摄像头的图像。在另一些实施例中,预览框301中包括的各个区域可以有重叠,即电子设备100可以以悬浮或叠加的方式显示来自N个摄像头的图像。
例如,N为2时,参考图6A,图6A所示的多路录像时的布局方式可以是:预览框301左右划分为区域301A和301B,区域301A对应显示来自后置的广角摄像头193-3的图像,区域301B对应显示来自前置摄像头193-1的图像。举例说明,假设电子设备的显示屏的尺寸(即屏幕分辨率)为2340*1080,比例为19.5:9;则区域301A的尺寸可以为1248*1080,比例为10.5:9,其中1248和1080分别为区域301A在长度和宽度上的像素数;区域301B的尺寸可以为1088*1080,比例约为9:9,其中1088和1080分别为区域301B在长度和宽度上的像素数。可知,区域301A和区域301B拼接后的总区域的比例为19.5:9,和预览框301的比例相同。即,区域301A和区域301B拼接后形成的区域布满显示屏的显示区域。如图6A所示,区域301A中的图像为拍摄者所面对的拍摄对象(如人物、风景等)的图像,区域301B中的图像为面对显示屏194的拍摄者自己的图像。
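上述左右拼接布局中两个区域尺寸的计算可用如下示意性代码表示(函数名与数值均为举例假设,并非本申请实施例的限定实现):

```python
def split_preview(screen_w, screen_h, left_w):
    # 预览框左右划分为两个区域: 两区域等高, 宽度之和等于屏幕宽度,
    # 即两个区域共同拼接布满整个预览框
    region_a = (left_w, screen_h)
    region_b = (screen_w - left_w, screen_h)
    return region_a, region_b
```

例如,对于宽1920、高1080的预览框,若左侧区域宽960,则右侧区域宽为1920-960=960,两区域拼接后恰好布满预览框。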
又例如,N为2时,多路录像时的布局方式可以是:预览框301包括区域1和区域2,区域1占用预览框301的全部,区域2位于预览框301的右下角并占用预览框301的四分之一;区域1对应显示来自后置的超广角摄像头193-4的图像,区域2对应显示来自后置的广角摄像头193-3的图像。
又例如,N为3时,多路录像时的布局方式可以是:预览框301左中右平分为3个区域,一个区域对应显示来自后置的长焦摄像头193-5的图像,一个区域对应显示来自后置的超广角摄像头193-4的图像,一个区域对应显示来自前置摄像头193-1的图像。
可理解的,根据预览框301中各个区域的形状、大小以及位置等不同情况进行组合,多路录像时的布局方式可以有多种,这里不再一一列举。
电子设备100进入“多路录像模式”后,默认使用的用于多路录像的摄像头数量N和布局方式,可以是电子设备100预先设置的,也可以是用户自主设置的,还可以是用户在最近一次“多路录像模式”中所使用的摄像头数量和布局方式。
在一些实施例中,电子设备100进入“多路录像模式”后,还可以在预览界面中显示用于用户更改摄像头数量和布局方式的控件。电子设备100可以响应于作用于该控件的触控操作(例如点击操作),显示用于设置或者更改“多路录像模式”中所使用的摄像头数量和布局方式的设置界面。用户可以通过该设置界面设置或者更改“多路录像模式”中所使用的摄像头数量和布局方式。本申请实施例对该设置界面的具体实现不作限制。
在另一些实施例中,电子设备100进入“多路录像模式”后,还可以响应于在预览界面中的用于切换摄像头的控件上的触控操作(例如点击操作),更改布局方式中对应于某个区域的摄像头。示例性地,用户可以点击如图6A中的控件305,将区域301A对应的摄像头由后置的广角摄像头193-3更改为后置的超广角摄像头193-4。在一些实施例中,预览界面中的各个区域均可以包含对应的用于切换摄像头的控件,电子设备100可以响应于在预览区域中的该用于切换摄像头的控件上的触控操作,更改对应于该预览区域的摄像头。
在一些实施例中,电子设备100进入“多路录像模式”后,还可以在预览界面的各个区域中显示该区域对应的摄像头的标识,用于提示用户各个区域所显示的图像的来源。摄像头的标识可以实现为文字、图标或其他形式。
电子设备100进入“多路录像模式”后,预览框301的各个区域中显示的预览图像,为N个摄像头各自的部分或全部图像。
在一些实施例中,电子设备100开启“多路录像模式”后,预览框301的各个区域中显示的预览图像,可以是电子设备100对对应摄像头的采集的图像进行裁剪后所得到的。即预览区域显示对应摄像头的部分预览图像。该裁剪方式例如可以是居中剪裁或者其他的裁剪方式,本申请对此不做限制。电子设备裁剪不同摄像头的采集的图像的方式可以不同。电子设备裁剪N个摄像头的采集的图像的方式,可以是电子设备预先设置的,也可以是用户自主设置的,还可以是电子设备在最近一次“多路录像模式”中所使用的裁剪方式。
居中裁剪是指电子设备100以摄像头采集的图像的中心为中心,从该图像中裁剪出和对应区域的尺寸相同的一部分图像。
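居中裁剪的计算可用如下示意性代码表示(函数名与坐标约定均为假设,仅用于说明原理):

```python
def center_crop_rect(img_w, img_h, crop_w, crop_h):
    # 以摄像头采集图像的中心为中心, 计算与对应预览区域尺寸相同的裁剪区域
    # 返回 (左, 上, 右, 下) 四个边界坐标
    left = (img_w - crop_w) // 2
    top = (img_h - crop_h) // 2
    return (left, top, left + crop_w, top + crop_h)
```

例如,从4000*3000的采集图像中居中裁剪出1248*1080的区域时,裁剪区域与图像四边的间距左右相等、上下相等,裁剪区域的中心与图像中心重合。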
示例性地,参考图4B,后置的广角摄像头193-3采集到的图像为a,区域301A中所显示的预览图像是电子设备100以图像a的中心为中心,裁剪出的和区域301A的尺寸相同的一部分。类似的,参考图6F,前置摄像头193-1采集到的图像为b,区域301B中所显示的图像是电子设备100以图像b的中心为中心,裁剪出的和区域301B的尺寸相同的一部分。
在另一些实施例中,若摄像头的采集的图像的尺寸和其对应的预览区域的尺寸相同,则电子设备100可以直接将该图像显示到该区域中,无需进行裁剪。即,该预览区域显示该摄像头的采集的图像。
在本申请以下实施例中,假设电子设备100开启“多路录像模式”后,区域301A中的预览图像为:电子设备以O点为中心、按第一尺寸确定的裁剪区域,从区域301A对应的摄像头采集的图像中裁剪得到的图像。即,电子设备100开启“多路录像模式”后,区域301A对应的摄像头采集的图像中裁剪区域的中心为O点,且为第一尺寸。
图6A-图6F、图7A-图7F、图8A-图8B、图9A-图9E示例性示出了电子设备100进入“多路录像模式”后,调整预览界面中各个区域中显示的预览图像的实施例。
需要注意的是,在图6A-图6F、图7A-图7F、图8A-图8B、图9A-图9E示出的实施例中,电子设备100的姿态未发生改变。也就是说,用户在调整电子设备的预览界面中各个区域中显示的预览图像时,并未移动该电子设备100。这样可以使得用户在调整一个摄像头在其对应的预览区域中的取景时,保证其他摄像头在对应的预览区域的取景不变。也就是说,可以使得用户兼顾多路拍摄时各个摄像头在对应预览区域的取景。
此外,虽然电子设备100的姿态并未发生变化,但是外界的环境可能发生变化。也即是说,电子设备100的摄像头可以采集实时更新的图像。
在一些实施例中,电子设备100进入“多路录像模式”后,可以在非变焦场景下调整工作摄像头在预览区域中的取景。
图6A-图6F示例性示出了电子设备100在非变焦场景下调整工作摄像头在预览区域中的取景的方式。
参考图6A,电子设备100可以检测到作用于区域301A的滑动操作(例如水平向左的滑动操作)。参考图6B,电子设备100可以响应于该滑动操作更新图像a中的裁剪区域,从而刷新区域301A中所显示的预览图像。图像a为区域301A对应的摄像头的采集的图像。
更新后的图像a中裁剪区域的中心为图像a的O1点,尺寸为第二尺寸。第二尺寸等于第一尺寸。
其中,若以O1’为中心且为第二尺寸的区域未超出图像a的边缘,则O1位于O1’处。若以O1’为中心且为第二尺寸的区域超出图像a的边缘,则O1位于图像a中与该边缘重合且为第二尺寸的区域的中心。
O1’由图像a的O点和该滑动操作对应的滑动轨迹确定。具体的,O1’位于O的第一方向,第一方向为该滑动轨迹的反方向;O1’和O点之间的距离与该滑动轨迹的长度正相关。在一些实施例中,O1’和O点之间的距离和该滑动轨迹的长度相同。其中,O为更新前的裁剪区域的中心,第一尺寸为更新前的裁剪区域的尺寸。
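上述根据滑动轨迹更新裁剪区域中心、并在超出图像边缘时将中心钳制到贴边位置的逻辑,可用如下示意性代码表示(假设中心位移与滑动轨迹长度相同,函数名为假设):

```python
def update_crop_center(cx, cy, dx, dy, img_w, img_h, crop_w, crop_h):
    # 新中心 O1' 位于原中心 O(cx, cy) 的滑动反方向, 位移与滑动轨迹 (dx, dy) 的长度相同
    nx, ny = cx - dx, cy - dy
    # 若以 O1' 为中心的裁剪区域超出图像边缘, 则将中心钳制到与边缘重合的位置, 得到 O1
    half_w, half_h = crop_w / 2, crop_h / 2
    nx = min(max(nx, half_w), img_w - half_w)
    ny = min(max(ny, half_h), img_h - half_h)
    return nx, ny
```

例如,在1920*1080的采集图像中,裁剪区域为960*540、中心位于图像中心时,向右滑动300个像素会使裁剪中心向左移动300个像素;若计算出的中心使裁剪区域越界,则中心被钳制到贴边位置。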
参考图6C,图6C示例性示出了电子设备100更新后的裁剪区域,更新后的裁剪区域为虚线框所在区域。刷新后的区域301A中所显示的预览图像为图像a1。
在一些实施例中,若用户输入滑动操作的速度超过阈值,则O1和O点之间的距离可以为默认距离。也即是说,当用户快速滑动时,电子设备按照默认距离确定O1。
类似的,参考图6D,电子设备100还可以检测到作用于区域301B的滑动操作(例如水平向右的滑动操作)。图6D和图6B所示用户界面相同,可参考相关描述。参考图6E,电子设备100可以响应于该滑动操作更新图像b中的裁剪区域,从而刷新区域301B中所显示的预览图像。电子设备更新图像b中裁剪区域的方式,可参考电子设备更新图像a中裁剪区域的方式。参考图6F,图6F示例性示出了电子设备100更新后的图像b中的裁剪区域,更新后的裁剪区域为虚线框所在区域。刷新后的区域301B中所显示的预览图像为图像b1。
通过图6A-图6F示出的实施例,电子设备100进入“多路录像模式”后,可以支持用户在多路录像时分别调整各个工作摄像头在其对应的预览区域中的取景,可实现各个工作摄像头在各自对应的预览区域中的取景互不影响,不会出现因某一个工作摄像头在对应预览区域中的取景改变而导致其他工作摄像头在对应预览区域中的取景也随之改变的问题。这样的在多路录像中的取景方式更加灵活方便,可以提升用户体验。
通过上述图6A-图6F示出的实施例可知,电子设备可以显示预览界面和N(例如2)个摄像头各自采集的部分或全部图像。该预览界面包括N个区域,N个摄像头各自采集的部分或全部图像分别显示在该N个区域中。
这里,一般一个预览区域中显示相应摄像头的采集的部分图像。但是,在用户降低倍率(如0.7X)的情况下,一个预览区域中可能会显示相应摄像头采集的全部图像。
其中,第一区域(例如区域301A)可以为所述N个区域中的一个区域,对应于第一区域的摄像头可以被称为第一摄像头(例如区域301A对应的摄像头)。第一区域中可以显示有未根据用户操作更改裁剪方式之前,电子设备通过裁剪第一摄像头采集的全部图像得到的图像。在未更改裁剪方式之前,本申请实施例对电子设备裁剪第一摄像头采集的全部图像的方式不作限制。在一个具体的实施例中,在未更改裁剪方式之前,电子设备通过居中裁剪的方式裁剪第一摄像头采集的全部图像。
若电子设备检测到在第一区域的滑动操作,则电子设备可以根据该滑动操作更改裁剪第一摄像头采集的全部图像的方式,从而刷新该第一区域中显示的预览图像。刷新前后,电子设备裁剪第一摄像头采集的全部图像的方式不同。在第一摄像头采集的全部图像中,刷新前后的预览图像在第一摄像头采集的全部图像中的位置不同。
示例性地,刷新前第一区域中显示的预览图像的位置例如可以是图6A-图6F中更新前的裁剪区域所在位置,刷新后第一区域中显示的预览图像的位置例如可以是图6A-图6F中更新后的裁剪区域所在位置。
第二预览图像的位置和第一预览图像的位置的关系,可参考前文。
在一些实施例中,在第一摄像头采集的全部图像中,从刷新前第一区域中显示的预览图像的中心位置指向刷新后第一区域中显示的预览图像的中心位置的方向,与该滑动操作的滑动方向相反。
如果该滑动用户操作为左滑操作,则刷新后第一区域中显示的预览图像,相较于,刷新前第一区域中显示的预览图像更接近第一摄像头采集的全部图像的右边界。
如果该滑动用户操作为右滑操作,则刷新后第一区域中显示的预览图像,相较于,刷新前第一区域中显示的预览图像更接近第一摄像头采集的全部图像的左边界。
在一些实施例中,刷新前第一区域中显示的预览图像与第一摄像头采集的全部图像的中心位置重合。
在一些实施例中,刷新前后第一区域中显示的预览图像一样大。
在一些实施例中,电子设备100进入“多路录像模式”后,可以在变焦场景下调整工作摄像头在预览区域中的取景。变焦是指预览界面中各个区域中显示的预览图像被放大或缩小。
图7A-图7F示例性示出了电子设备100在变焦场景下调整工作摄像头在预览区域中的取景的方式。
参考图7A,图7A示出的用户界面51为电子设备100进入“多路录像模式”后所显示的预览界面。用户界面51中包括预览框301、拍摄模式列表302、控件901、控件304、控件305。预览框301包括区域301A和区域301B。用户界面51中各个控件的作用、各个区域中显示的预览图像可以参考图6A所示的用户界面41中的相关描述,这里不再赘述。
如图7A及图7B所示,电子设备100可以在区域301A中检测到双指缩放手势(如图中所示的双指同时向外滑动的手势),并响应于该双指缩放手势,在区域301A中显示用于指示对应摄像头的变焦倍数的控件306,并且更新图像a中的裁剪区域,从而刷新区域301A中所显示的预览图像。图像a为区域301A对应的摄像头的采集的图像。
控件306可以实现为图标或者文字,控件306所指示的对应摄像头的变焦倍数随着双指缩放手势的变化而变化。当该双指缩放手势为双指放大手势时,该手势的幅度越大,该 对应摄像头的变焦倍数越大。当该双指缩放手势为双指缩小手势时,该手势的幅度越大,该对应摄像头的变焦倍数越小。例如,图7A中的文字“1x”表示摄像头的变焦倍数为1,图7B中的控件306中的文字“2x”表示摄像头的变焦倍数为2。
以检测到双指缩放手势前,区域301A对应摄像头的变焦倍数为1,接收到该双指缩放手势后,该摄像头的变焦倍数为x1为例进行说明。更新后的图像a中的裁剪区域的中心为图像a的O点,尺寸为第三尺寸。
第三尺寸的长度为第一尺寸长度的1/x1,第三尺寸的宽度为第一尺寸宽度的1/x1。即第三尺寸的面积为第一尺寸面积的1/x1²。其中,O为更新前的裁剪区域的中心,第一尺寸为更新前的裁剪区域的尺寸。
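变焦倍数与裁剪区域尺寸的关系可用如下示意性代码表示(以数字变焦为例,函数名为假设):

```python
def zoomed_crop_size(base_w, base_h, zoom):
    # 变焦倍数为 zoom 时, 裁剪区域的长和宽分别为原尺寸的 1/zoom,
    # 因而裁剪区域的面积为原面积的 1/zoom**2
    return base_w / zoom, base_h / zoom
```

例如,变焦倍数从1提高到2时,裁剪区域长宽各缩小为原来的一半,面积缩小为原来的四分之一;裁剪出的较小图像再被放大到整个预览区域显示,即表现为画面放大。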
参考图7C,图7C示例性示出了电子设备100更新后的裁剪区域,更新后的裁剪区域为虚线框所在区域。刷新后的区域301A中所显示的预览图像为图像a2。
如图7B所示,由于图像a2和区域301A的尺寸不同,电子设备100可以将图像a2的像素使用"插值"处理手段做放大,从而将图像a2放大到整个区域301A显示。
在一些实施例中,当双指缩放手势为双指放大手势时,若该双指放大手势的幅度超过第一预设值,则电子设备100可以自动将区域301A对应的摄像头切换为焦段更大的摄像头,例如从广角摄像头切换为长焦摄像头。当双指缩放手势为双指缩小手势时,若该双指缩小手势的幅度超过第二预设值,则电子设备100可以自动将区域301A对应的摄像头切换为焦段更小的摄像头,例如从广角摄像头切换为超广角摄像头。
图7D-图7F示例性示出了电子设备100在变焦场景下调整工作摄像头在预览区域中的取景的方式。
图7D所示的用户界面51和图7B所示的用户界面51相同,可参考相关描述。
参考图7D,电子设备100可以检测到作用于区域301A的滑动操作(例如水平向左的滑动操作)。本申请实施例对该滑动操作的方向、轨迹不做限制。
参考图7E,电子设备100可以响应于该滑动操作再次更新图像a中的裁剪区域,从而刷新区域301A中所显示的预览图像。图像a为区域301A对应的摄像头的采集的图像。
电子设备响应于该滑动操作再次更新图像a的裁剪区域的方式,和图6A-图6F中示出的电子设备响应于滑动操作更新图像a的裁剪区域的方式相同,可参考相关描述。
示例性地,再次更新的图像a中的裁剪区域的中心为O2,且为第三尺寸。
其中,若以O2’为中心且为第三尺寸的区域未超出图像a的边缘,则O2位于O2’处。若以O2’为中心且为第三尺寸的区域超出图像a的边缘,则O2位于,图像a中和该边缘重合且为第三尺寸的区域的中心。
O2’由图像a的O点和该滑动操作对应的滑动轨迹确定。具体的,O2’位于O的第一方向,第一方向为该滑动轨迹的反方向;O2’和O点之间的距离,和,该滑动轨迹的长度正相关。在一些实施例中,O2’和O点之间的距离和该滑动轨迹的长度相同。
参考图7F,图7F示例性示出了电子设备100再次更新后的裁剪区域,再次更新后的裁剪区域为虚线框所在区域。刷新后的区域301A中所显示的预览图像为图像a3。
在一些实施例中,若以O2’为中心且为第三尺寸的区域超出图像a的边缘,则电子设备100可以自动切换对应于区域301A的摄像头。电子设备100可以将对应于区域301A的 摄像头切换为视场角更大的摄像头。例如,若电子设备100初始的对应于区域301A的摄像头为长焦摄像头,则电子设备100可以将其切换为广角摄像头。这样可以充分满足用户在较大的视场角内调整各区域的取景的需求。
通过图7A-图7F示出的实施例,电子设备100进入“多路录像模式”后,可以支持用户在多路录像时分别调整各个工作摄像头在其对应的预览区域中的取景,可实现各个工作摄像头在各自对应的预览区域中的取景互不影响,不会出现因某一个工作摄像头在对应预览区域中的取景改变而导致其他工作摄像头在对应预览区域中的取景也随之改变的问题。这样的在多路录像中的取景方式更加灵活方便,可以提升用户体验。
电子设备100在变焦场景下调整预览界面中各个区域的预览图像之后,还可以再次变焦,提高或者降低某一区域对应摄像头的变焦倍数。下面以电子设备100在执行如图7A-图7F所示的变焦场景下调整工作摄像头在预览区域中的取景之后,针对区域301A对应的摄像头再次变焦为例进行说明。
在一些实施例中,电子设备100在变焦场景下调整预览界面中各个区域的预览图像之后,可以检测到作用于区域301A的双指缩放手势(例如双指同时向外滑动的手势),并响应于该双指放大手势,更新图像a中的裁剪区域,从而刷新区域301A中所显示的预览图像。图像a为区域301A对应的摄像头的采集的图像。这里,电子设备100响应于该双指缩放手势更新图像a中的裁剪区域的方式,和图7A-图7C中示出的电子设备响应于双指缩放手势更新图像a的裁剪区域的方式相同,可参考相关描述。
示例性地,参考图8A,其示出了一种可能的更新后的图像a中的裁剪区域。如图8A所示,刷新后的区域301A中所显示的预览图像为图像a4。
示例性地,参考图8B,其示出了另一种可能的更新后的图像a中的裁剪区域。如图8B所示,刷新后的区域301A中所显示的预览图像为图像a4。
通过上述图7A-图7F示例性示出的电子设备100在变焦场景下调整工作摄像头在预览区域中的取景的方式可知,
电子设备还可以在检测到滑动操作(例如图7A-图7F中的滑动操作)之前,检测到用于更改第一区域对应的摄像头的变焦倍数的操作(例如图7A-图7F中的双指缩放操作)。之后,电子设备可以响应于该操作,将接收到该操作之前第一区域中显示的预览图像放大,并在第一区域中显示放大后的该预览图像。需要注意的是,更改第一区域对应的摄像头的变焦倍数后,第一区域中显示放大后的部分图像,具体可参考图7A-图7F的相关描述。
在一些实施例中,电子设备100进入“多路录像模式”后,可以追踪目标物体,并根据该目标物体所在的位置自主调整工作摄像头在预览区域中的取景。这样可以减少用户操作,提高便捷性。
图9A-图9E示例性示出了电子设备追踪目标物体并自主调整工作摄像头在预览区域中的取景的方式。
参考图9A,图9A示出的用户界面71为电子设备100进入“多路录像模式”后所显示的预览界面。用户界面71中包括预览框301、拍摄模式列表302、控件901、控件304、控 件305。预览框301包括区域301A和区域301B。用户界面71中各个控件的作用、各个区域中显示的预览图像可以参考图6A所示的用户界面41中的相关描述,这里不再赘述。
在一些实施例中,电子设备100进入“多路录像模式”后,可以自动识别预览框301中各个区域中显示的预览图像中的物体,在检测到有预设种类的物体时提示用户。该预设种类的物体可包括:人脸、动物、人体、太阳、月亮等等。该预设种类的物体可以由电子设备100默认设置,也可以由用户自主选择。
在另一些实施例中,电子设备100进入“多路录像模式”后,可以响应于接收到的用户操作,开始识别预览框301中各个区域中显示的预览图像中的物体,在检测到有预设种类的物体时提示用户。该用户操作可以是在区域上的长按操作或双击操作、输入的语音指令等等,本申请实施例对此不做限制。
示例性地,如图9A所示,电子设备100可以检测到区域301B中显示有人脸,并在区域301B中显示提示信息307。提示信息307用于提示用户检测到人脸,提示信息307可以为文本“检测到人脸”。
在一些实施例中,参考图9B,电子设备100可以检测到作用于区域301B中的物体(如图9B中的人脸)上的触摸操作(例如点击操作),将该触摸操作所作用于的物体选定为将要追踪的目标物体。在一些实施例中,电子设备100可以在选定将要追踪的目标物体后,在区域301B中该目标物体所在的区域显示提示信息,例如图9B中所示的虚线框,从而提示用户当前已经选定该物体作为将要追踪的目标物体。
在其他一些实施例中,电子设备100还可以在检测到预设种类的物体后,直接将该物体选定为将要追踪的目标物体,无需用户操作。
电子设备100选定将要追踪的目标物体后,将以该目标物体为中心更新图像b中的裁剪区域,从而刷新区域301B中所显示的预览图像。图像b为区域301B对应的摄像头的采集的图像。
电子设备100选定将要追踪的目标物体后,区域301B中的提示信息307可以用于提示用户当前正在追踪目标物体。示例性地,参考图9B及图9C,提示信息307可以更改为文本“人脸追踪中”。
参考图9C及图9E,如果区域301B对应的前置摄像头193-1采集到的图像b中仍然包含目标物体,则更新后的图像b中裁剪区域的中心为图像b中的O4,且为第四尺寸。第四尺寸为更新前的图像b中裁剪区域的尺寸。
其中,若以O4’为中心且为第四尺寸的区域未超出图像b的边缘,则O4位于O4’处。若以O4’为中心且为第四尺寸的区域超出图像b的边缘,则O4位于,图像b中和该边缘重合且为第四尺寸的区域的中心。O4’为图像b中目标物体所在的中心。
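以目标物体(如人脸)的中心为裁剪区域中心、并在越界时将中心钳制到贴边位置的处理,可用如下示意性代码表示(函数名为假设,仅用于说明原理):

```python
def track_crop_center(obj_cx, obj_cy, img_w, img_h, crop_w, crop_h):
    # O4' 为目标物体在采集图像中的中心; 若以其为中心的裁剪区域
    # 超出图像边缘, 则将裁剪中心钳制到与边缘重合的位置, 得到 O4
    half_w, half_h = crop_w / 2, crop_h / 2
    cx = min(max(obj_cx, half_w), img_w - half_w)
    cy = min(max(obj_cy, half_h), img_h - half_h)
    return cx, cy
```

例如,目标人脸靠近图像左边缘时,裁剪中心被钳制在离左边缘半个裁剪宽度处,使裁剪区域与图像左边缘重合而不越界;人脸位于图像中部时,裁剪区域则以人脸为中心,使人脸居中显示在预览区域中。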
参考图9D及图9E,其示例性示出了电子设备100更新后的裁剪区域,更新后的裁剪区域为虚线框所在区域。刷新后的区域301B中所显示的预览图像为虚线框中的图像。
如图9D及图9E所示,前置摄像头193-1的采集的图像中的目标人脸变换了位置,但电子设备100仍然将该目标人脸显示在了区域301B的中心位置。
如果一段时间后,区域301B对应的前置摄像头193-1采集的图像中不包含目标物体,则电子设备100可以停止追踪该目标物体。在一些实施例中,电子设备100可以提示用户当前已停止追踪该目标物体,提示的方式可包括但不限于:显示文本、显示图标、播放语音等等。
通过图9A-图9E示出的实施例,电子设备100可以在多路录像过程中追踪目标物体,满足用户需求,提升用户体验。
通过图9A-图9E示出的实施例,电子设备在检测到第一摄像头采集的全部图像中包括第一人脸(例如图9A-图9E中的人脸图像)的图像后,可以在第一区域(例如图9A-图9C中的区域301B)中显示预览图像。之后,若电子设备检测到第一人脸的图像在第一摄像头采集的全部图像中的位置发生改变,则电子设备刷新该第一区域中的预览图像。其中,刷新前的第一区域中显示的预览图像是通过裁剪第一摄像头采集的全部图像得到的,且包括第一人脸的图像。刷新后的第一区域中显示的预览图像是通过裁剪第一摄像头采集的全部图像得到的,且包括第一人脸的图像。
示例性地,刷新前的第一区域中显示的预览图像可参考如图9A或图9B中区域301B中显示的预览图像,刷新后的第一区域中显示的预览图像可参考如图9C中区域301B中显示的预览图像。
在一些实施例中,电子设备裁剪得到刷新后的第一区域中显示的预览图像的方式可以为:保证第一人脸的图像在刷新后的第一区域中的位置,和第一人脸的图像在刷新前的第一区域中的位置相同。这样可以保证人脸在第一区域中的位置固定。
在一些实施例中,电子设备以第一人脸的图像在第一摄像头采集到的全部图像中的位置为中心,裁剪得到刷新后的第一区域中显示的预览图像,例如图9A所示。这样可以在人脸追踪时,保持人脸总是居中显示在第一区域中。
在本申请另一些实施例中,电子设备检测到第一摄像头采集的全部图像中包括第一人脸的图像时,可以启动第二摄像头,第二摄像头的取景范围大于第一摄像头的取景范围,第一人脸在第二摄像头的取景范围内。之后,电子设备可以刷新第一区域中显示的预览图像。刷新后的第一区域中显示的预览图像是通过裁剪第二摄像头采集的全部图像得到的,且包括第一人脸的图像。这样可以保证在追踪物体时,切换到取景范围更大的摄像头,扩大可追踪范围。
在一些实施例中,第一摄像头为前置摄像头或后置摄像头。这样既可以使用前置摄像头实现物体追踪,也可以使用后置摄像头实现物体追踪。
在本申请实施例中,电子设备100进入“多路录像模式”后,在调整工作摄像头在预览区域中的取景范围时,还可以通过画中画的方式提示用户该工作摄像头在预览区域中显示的预览图像在采集的全部图像中的位置。这样可以使得用户了解全局。
图10A-图10B示例性示出了电子设备100通过画中画的方式提示用户工作摄像头在预览区域中显示的预览图像在采集的全部图像中的位置的场景。
参考图10A及图10B,图10A及图10B所示的用户界面81为电子设备100进入“多路录像模式”后所显示的预览界面。该预览界面中的各个控件可以参考图6A所示的用户界面41中的相关描述,这里不再赘述。
如图10A及图10B所示,用户界面81中可以显示有窗口308。窗口308可悬浮显示于区域301A所显示的图像之上。窗口308可用于提示用户当前区域301A中显示的预览图像 在对应的后置广角摄像头193-3的采集的全部图像中的位置。
如图10A及图10B所示,窗口308中可以显示有区域301A对应的后置广角摄像头193-3的采集的图像,并以虚线框标识出区域301A中显示的预览图像在摄像头采集的全部图像中的位置。在窗口308中,虚线框里面和外面的部分可以使用不同的显示形式,例如虚线框以外的部分可以添加阴影,以进一步区分后置广角摄像头193-3所采集到的图像a中的被裁剪且在区域301A中所显示的部分。
如图10B所示,当区域301A中显示的预览图像在后置广角摄像头193-3所采集到的图像a中的位置发生变化时,窗口308中虚线框的位置也对应发生变化。
通过图10A-图10B所示的实施例,可以使得用户通过各区域上显示的窗口了解各个摄像头在对应区域中的取景范围,以及各区域当前显示的预览图像在全局中的位置。这样可以使得用户更加方便地调整各个区域中显示的预览图像。
可理解的,图10A-图10B示例性示出的画中画提示方式,适用于上述实施例提及的任意一种电子设备100调整预览界面中各个区域中显示的预览图像的场景。
电子设备100在图6A-图6F示出的非变焦场景下调整预览界面中各个区域中显示的预览图像时,可以使用画中画的方式来提示用户区域301A中显示的预览图像在摄像头采集的全部图像中的位置。在这种情况下,虚线框位于窗口308中的位置随着用户输入的滑动操作而变化。虚线框的移动方向和滑动操作的轨迹方向相反。
电子设备100在图7A-图7F示出的在变焦场景下调整预览界面中各个区域中显示的预览图像时,可以使用画中画的方式来提示用户区域301A中显示的预览图像在摄像头采集的全部图像中的位置。在这种情况下,虚线框的大小和变焦倍数成反比例关系,变焦倍数越大,虚线框越小。虚线框的移动方向和用户输入的滑动操作的轨迹方向相反。
电子设备在图9A-图9E示出的根据目标物体所在的位置自主调整预览界面中各个区域的预览图像时,可以使用画中画的方式来提示用户区域301A中显示的预览图像在摄像头采集的全部图像中的位置。在这种情况下,虚线框的位置随着摄像头采集的图像中的目标物体的位置变化而变化。
通过上述图6A-图6F、图7A-图7F、图8A-图8B、图9A-图9E所示的实施例,用户可以在不移动电子设备100的情况下,即在电子设备100的姿态未发生改变的情况下,通过用户操作调整各个工作摄像头的取景,并且针对单个工作摄像头的取景的调整不影响其他工作摄像头的取景。
在一些实施例中,电子设备100检测到调整预览界面中各个区域所显示的预览图像的操作(例如上述实施例提及的滑动操作、双指缩放操作或选中目标物体的操作)时,若电子设备的姿态发生改变,则电子设备可以不响应于该用户操作更改裁剪各区域对应摄像头采集到的全部图像的方式。即,若电子设备的姿态发生改变,电子设备不对用于调整各摄像头在预览区域中的取景的用户操作作出响应。
在一些实施例中,若电子设备的姿态发生大幅度的改变,即用户大幅度移动或转动手机时,则电子设备可以使用居中裁剪的方式得到预览界面中各区域所显示的预览图像。
下面介绍电子设备100开启“多路录像模式”后,在多路录像的录像过程中调整拍摄 界面中各个区域中显示的预览图像的UI实施例。
图11A-图11F示例性示出了电子设备100在多路录像的录像过程中调整拍摄界面中各个区域中显示的预览图像的UI实施例。
图11A示例性示出了电子设备开启“多路录像模式”后,进入录像过程时所显示的拍摄界面101。
如图11A所示,拍摄界面101包括:预览框301、拍摄模式列表302、控件901、控件304、控件305。其中:拍摄模式列表302、控件304、控件305可以参考用户界面31中的相关描述,这里不再赘述。如图11A所示,多路录像模式选项302D被选定。
拍摄界面101可以是电子设备100响应于在用于录像的控件上接收到的触控操作(例如点击操作)而显示的。用于录像的控件例如可以是图11A-图11F的任意一个用户界面中所显示的控件901。用于录像的控件也可以被称为拍摄控件。
如图11A所示,拍摄界面101中还包括:录制时间指示符1001。录制时间指示符1001用于指示显示该拍摄界面101的时长,即电子设备100已开始录制视频的时长。录制时间指示符1001可以实现为文字。
电子设备100开启“多路录像模式”后,拍摄界面中的预览框301和预览界面中的预览框301相同。该预览框301的布局方式可参照前文图6A实施例的相关描述,这里不再赘述。
电子设备100开启“多路录像模式”后的录像过程中,也可以根据用户操作调整拍摄界面中各个区域中显示的预览图像。电子设备100在录像过程中调整拍摄界面中各个区域中显示的预览图像的方式,和电子设备100在预览过程中调整预览界面中各个区域中显示的预览图像的方式相同,可参考前文图6A-图6F、图7A-图7F、图8A-图8B、图9A-图9E所示的实施例。
在本申请实施例中,电子设备100进入“多路录像模式”后的录像过程中,还可以保存该录像过程中在预览界面中所显示的图像。
具体的,用户在电子设备100的“多路录像模式”下的录像过程中,可以在调整好预览框中各个区域的预览图像后,选择保存录像过程中在预览框中的预览图像,即保存视频。用户调整预览框中各个区域的预览图像的方式可参考前文图11A-图11F实施例所描述的相关内容。
示例性地,电子设备100可以在录像过程中,响应于在用于录像的控件上检测到的触控操作(例如点击操作),保存录像过程中在预览框中显示的预览图像。用于录像的控件例如可以是图11A-图11F的任意一个用户界面中所显示的控件901。该录像过程的起止时间分别为电子设备100开启“多路录像模式”后,相邻检测到的两次在控件901上的触控操作的时间点。
在一些实施例中,电子设备100可以将预览框中各个区域在录像过程中所显示的图像合成一个视频文件,并保存该视频文件。例如,电子设备100可以将录像过程中区域301A中所显示的图像和区域301B中所显示的预览图像,按对应的布局方式合成一个视频文件,并保存该视频文件。这样可以使得用户根据自身需求调整各区域的预览图像后,保存想要的预览图像,可以使得用户获得更加灵活方便的录像体验。
在另一些实施例中,电子设备100也可以分别保存录像过程中各个区域中所显示的图像,并将保存的该多路图像进行关联。
可理解的,用户可以在录像过程中更改预览界面的布局方式。若电子设备100在录像过程中更改了预览界面的布局方式,则电子设备100保存的视频文件在不同时间段的布局方式可能不同。这样可以为用户提供更加灵活的录像体验。
电子设备100将预览框中所显示的预览图像存储为视频文件之后,用户可以在“图库”提供的用户界面中,查看电子设备100保存的该视频文件。
本申请实施例还提供了一种在移动电子设备(即电子设备的姿态发生改变)的情况下,支持电子设备不更改选中的预览区域的取景范围的方案。这样,即使在电子设备的姿态发生改变的情况下,也不会影响该被选中的预览区域的取景范围。
电子设备100进入“多路录像模式”后,可以在预览过程或者录像过程中锁定预览框中的一个或多个区域。之后,即使电子设备100的物理位置发生改变,例如电子设备100发生平移,被锁定的区域中所显示的图像中的静态物体在该区域中的相对位置不变。这样可以使得用户在移动电子设备100以更改其他区域所显示的图像时,保证被锁定的区域总是显示现实世界中某个物理位置的图像,即不更改该被锁定区域的取景范围。
图12A-图12B示例性示出了电子设备100在姿态发生改变的情况下,不更改选中的预览区域的取景范围的示例性UI界面。
参考图12A,图12A可以为电子设备100进入“多路录像模式”后,响应于用于锁定区域301B的操作所显示的预览界面111。
如图12A所示,该预览界面111包括预览框301、拍摄模式列表302、控件303、控件304、控件305以及锁定指示符1101。预览框301包括区域301A和区域301B。预览界面111中各个控件的作用、各个区域中显示的预览图像可以参考图6A所示的用户界面41中的相关描述,这里不再赘述。其中,锁定指示符1101位于区域301B中,用于指示区域301B被锁定。锁定指示符1101可以实现为文本、图标或者其他形式。
用于锁定区域301B的操作可包括但不限于:作用于区域301B上的长按操作、双击操作、作用于特定控件(图12A中未示出)上的触控操作、摇晃电子设备100的操作等等。
参考图12A及图12B,用户可以手持电子设备100水平向左移动。水平移动电子设备100后,区域301A对应的后置广角摄像头193-3、区域301B对应的前置摄像头193-1所采集的图像均被刷新或者更新。
电子设备100响应于该水平向左移动的操作,按照当前区域301A对应的裁剪方式对后置广角摄像头193-3所采集的全部图像做裁剪后,将其显示于区域301A中,该裁剪方式可以为居中裁剪或者其他根据用户操作确定的裁剪方式。
电子设备100响应于该水平向左移动的操作,保持在被锁定的区域301B中显示该区域301B被锁定时的图像中的静态物体的显示方式不变。静态物体的显示方式包括:该静态物体的大小、该静态物体位于区域301B中的相对位置。也即是说,电子设备100保证被锁定的区域显示现实世界中同一个物理位置的图像,该物理位置为该区域被锁定时该区域中所显示的预览图像对应的物理位置。例如,参考图12A及图12B,电子设备100水平移动后,区域301B仍然显示同一个物理位置的图像,图中的云朵、建筑物以及道路的显示方式不变,图中的人物变换了所站的位置。
通过上述实施例,在用户看来,在移动电子设备100的过程中,可以保证其中的一个或多个区域被锁定,即可以保证该被锁定的一个或多个区域中显示的预览图像总是对应于同一个物理位置。这样可以有利于用户在多路拍摄过程中兼顾多路图像。
可理解的,上述实施例可以应用于本申请实施例提及的“多路拍照模式”下的预览过程中、“多路录像模式”下的预览过程中以及“多路录像模式”下的录像过程中。也就是说,电子设备100可以在“多路拍照模式”下的预览过程中、“多路录像模式”下的预览过程中以及“多路录像模式”下的录像过程中锁定预览框中的一个或多个区域,其具体实现可结合前文实施例描述的预览过程以及录像过程得到,在此不赘述。
在本申请实施例中,电子设备还可以在进入“多路拍照模式”后,调整预览界面中各个区域中显示的预览图像。即,电子设备还可以在“多路拍照模式”中,调整工作摄像头在预览区域中的取景的方式。
电子设备在“多路拍照模式”中调整工作摄像头在预览区域中的取景的方式,可参考电子设备在“多路录像模式”中调整工作摄像头在预览区域中的取景的方式,详见前文相关描述,此处不再赘述。
图13A-图13B示例性给出了一种电子设备进入“多路拍照模式”后,响应于滑动操作调整预览界面中各个区域中显示的预览图像的场景。
下面结合图14描述本申请实施例中电子设备100的软硬件如何协作,执行本申请实施例提供的多路录像的取景方法。
如图14所示,电子设备100进入“多路拍摄模式”后,可以通过N个摄像头采集数据。
N个摄像头中每个摄像头均按照默认的出图比例出帧,并将采集的原始数据传递给对应的ISP。摄像头默认的出图比例例如可以为4:3、16:9或3:2等等。
ISP用于将来自摄像头的数据转化为标准格式的图像,例如YUV等。
显示屏可以监听用于调整显示屏中各区域的预览图像的用户操作,并将监听到的用户操作上报给摄像头或者HAL层。该用户操作可包括但不限于上述UI实施例中提及的,电子设备100进入“多路拍摄模式”后在预览框中的各个区域上检测到的滑动操作、双指缩放操作、作用于目标物体的触控操作等等。例如,显示屏可以监听变焦事件,并将变焦倍数传递给HAL层和对应的摄像头。显示屏还可以监听切换摄像头的事件,并将该事件传递给对应的摄像头。显示屏可以监听滑动操作,并将滑动操作的轨迹传递给HAL层。
HAL层用于根据用户操作对ISP输出的图像做裁剪。
当显示屏或其他部件(例如麦克风)未监听到用于调整显示屏中各区域的预览图像的用户操作时,HAL层按照居中裁剪的方式来对ISP输出的图像做裁剪。
当显示屏或其他部件(例如麦克风)监听到用于调整显示屏中各区域的预览图像的用户操作时,HAL层根据该用户操作来对ISP输出的图像做裁剪。这里,用于调整显示屏中各区域的预览图像的用户操作可包括但不限于上述UI实施例中提及的,电子设备100进入“多路拍摄模式”后在预览框中的各个区域上检测到的滑动操作、双指缩放操作、作用于目标物体的触控操作等等。HAL层根据用户操作来对ISP输出的图像做裁剪的方式可参考上述UI实施例中的相关描述。
之后,HAL层可以将自身对ISP输出的图像做裁剪的方式通知给ISP,由ISP根据该裁剪方式,对裁剪出的图像做自动曝光、自动白平衡、自动对焦(auto exposure,auto white balance,auto focus,3A)处理,还可以对裁剪出的图像的噪点,亮度,肤色进行算法优化。
经过HAL层的裁剪且经过ISP的3A及优化处理后的图像被传递给图像处理模块,图像处理模块用于对接收到的图像做电子图像防抖(electronic image stabilization)处理。HAL层还可以将自身对ISP输出的图像做裁剪的方式通知给图像处理模块,以使得图像处理模块根据该裁剪方式对接收到的图像做防抖处理。
图像处理模块对接收到的各路图像做处理后,可以得到N路图像。图像处理模块可以将得到的N路图像按照当前的布局风格拼接或者叠加为一路图像,并将其输出到显示屏的预览界面中。也就是说,该预览界面可以根据当前的布局风格在N个区域中显示该N路图像。
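图像处理模块将N路图像按左右拼接布局合成为一路图像的过程,可用如下示意性代码表示(以行列表表示图像,假设各路图像高度相同,函数名为假设,并非实际实现):

```python
def hstack_images(images):
    # images: N 路图像, 每路表示为行的列表 (每行是像素值列表), 各路行数相同
    # 将各路图像位于同一行号的行水平连接, 得到左右拼接后的一路图像
    return [sum(rows_at_same_y, []) for rows_at_same_y in zip(*images)]
```

拼接后图像的每一行由各路图像对应行依次连接而成,因此拼接结果的宽度为各路图像宽度之和,高度不变,与预览框中各区域左右拼接布满显示区域的布局方式一致。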
图像处理模块可包括电子设备100中的图像处理器、视频编解码器、数字信号处理器等等。
在一些实施例中,若电子设备100进入“多路拍照模式”后,显示屏检测到作用于拍摄控件上的触控操作,则电子设备将保存检测到该触控操作时图像处理模块输出到显示屏的预览界面中的图像。
在一些实施例中,若电子设备100开启“多路录像模式”后,显示屏检测到作用于拍摄控件上的两次触控操作,则电子设备将保存该两次触控操作之间的时间段中,图像处理模块输出到显示屏的预览界面中的图像。
上述UI实施例中,在多路录像场景下,预览框301中的区域301A或区域301B可以被称为第一区域。例如,图6A-图6F、图7A-图7F、图9A-图9E、图10A-图10B、图11A-图11F中的区域301A或区域301B。
区域301A或区域301B对应的摄像头,例如前置摄像头或后置摄像头可以被称为第一摄像头。
预览界面中的区域301A或区域301B接收的滑动操作,可以被称为第一用户操作。例如图6A、图6D、图7D中的滑动操作。
电子设备在拍摄界面中检测到的指示开始录制视频的操作,可以被称为第二用户操作。第二用户操作例如可以是作用于拍摄控件901上的操作,例如作用于图11A中的拍摄控件901上的操作。
在检测到第一用户操作之前,区域301A或区域301B所显示的预览图像可以被称为第一预览图像,例如图6A-图6F实施例中的区域301A或区域301B所显示的图像,以及,图7D中的区域301A中所显示的图像(图像a2)。
在检测到第一用户操作之后,区域301A或区域301B所显示的预览图像可以被称为第二预览图像,例如图6A-图6F实施例中的区域301A或区域301B所显示的图像(图像a1或b1),以及,图7E中的区域301A中所显示的图像。
预览界面中的区域301A或区域301B接收到的双指放大操作,可以被称为第三用户操作。例如图7A所示的双指放大操作。
拍摄界面中的区域301A或区域301B接收的滑动操作,可以被称为第四用户操作。例如图11A中的滑动操作。在检测到第四用户操作之后,拍摄界面中的区域301A或区域301B所显示的预览图像可以被称为第三预览图像,例如图11A中的区域301A所显示的图像。
当电子设备的姿态发生改变时,电子设备在第一区域中显示的图像可以被称为第四预览图像。
电子设备在第一摄像头采集到的全部图像中检测到人脸时,在第一区域中显示的图像可以被称为第五预览图像,例如图9A中区域301B中所示的图像。在电子设备检测到的人脸在第一摄像头采集到的全部图像中的位置变化时,电子设备在第一区域中显示的图像可以被称为第六预览图像,例如图9B或图9C中所示的图像。
电子设备在追踪人脸时,若切换了摄像头,则可以将切换后的摄像头称为第二摄像头,此时第一区域中显示的来自第二摄像头的图像可以被称为第七预览图像。在电子设备检测到的人脸在第二摄像头采集到的全部图像中的位置变化时,电子设备在第一区域中显示的图像可以被称为第八预览图像。
电子设备在拍摄界面中检测到的指示停止录制视频的操作,可以被称为第五用户操作。第五用户操作例如可以是作用于拍摄控件901上的操作,例如作用于图11B-图11F中的拍摄控件901上的操作。
电子设备检测到的用于播放视频文件的操作,可以被称为第六用户操作。电子设备中用于播放该视频文件的界面,可以被称为播放界面。
在电子设备的姿态发生改变且不更改选中的预览区域的取景范围的场景中,用于锁定区域的操作可以被称为第七用户操作。第七用户操作例如可以是作用于区域301B上的长按操作、双击操作。锁定区域后,电子设备检测到的指示开始录制视频的操作,可以被称为第八用户操作。第八用户操作例如可以是作用于图12B中的拍摄控件901上的点击操作。
在检测到第七用户操作之后,电子设备的姿态发生改变之前,区域301A或区域301B所显示的预览图像可以被称为第九预览图像,例如图12A中区域301A所显示的图像。在电子设备的姿态发生改变之后,区域301A或区域301B所显示的预览图像可以被称为第十预览图像,例如图12B中区域301A所显示的图像。
Based on the electronic device 100 introduced above and the foregoing UI embodiments, the following embodiment describes the framing method for multi-channel video recording provided in this application. As shown in FIG. 15, the method may include:
Stage 1 (S101-S105): enabling the "multi-channel video mode"
S101: The electronic device 100 starts the camera application.
For example, the electronic device 100 may detect a touch operation on camera icon 220 shown in FIG. 3A (such as a tap on icon 220) and start the camera application in response to that operation.
S102: The electronic device 100 detects a user operation selecting the "multi-channel video mode".
For example, the user operation may be a touch operation (such as a tap) on multi-channel video mode option 302D shown in FIG. 3B or FIG. 3D. The user operation may also be another type of user operation, such as a voice instruction.
Not limited to user selection, the electronic device 100 may also select the "multi-channel video mode" by default after starting the camera application.
S103: The electronic device 100 starts N cameras, where N is a positive integer.
Specifically, the electronic device may have M cameras, where M ≥ 2, M ≥ N, and M is a positive integer. The N cameras may be a combination of a front-facing camera and a rear-facing camera. The N cameras may also be a combination of any number of a wide-angle camera, an ultra-wide-angle camera, a telephoto camera, and a front-facing camera. This application does not limit the camera combination of the N cameras.
The N cameras may be selected by the electronic device by default; for example, the electronic device enables the front-facing and rear-facing cameras by default. The N cameras may also be selected by the user; for example, the user may choose which cameras to enable in the "More" mode options.
S104: The electronic device 100 captures images through the N cameras.
S105: The electronic device 100 displays the preview interface. The preview interface includes N areas, and part or all of the images captured by each of the N cameras may be displayed in the N areas respectively.
As shown in FIG. 6A, the preview interface includes area 301A and area 301B; area 301A displays part of the image captured by the rear-facing camera, and area 301B displays part of the image captured by the front-facing camera. In this case, N = 2, and the N cameras are the rear-facing camera and the front-facing camera.
The images displayed in the N areas may be called preview images. The preview image displayed in one area can be obtained by cropping the full image captured by the camera corresponding to that area.
Taking the preview interface shown in FIG. 6A as an example, the preview image displayed in area 301A may be cropped by the electronic device from the full image captured by the rear-facing camera, and the preview image displayed in area 301B is cropped by the electronic device from the full image captured by the front-facing camera. Specifically, the center position of the preview image displayed in area 301A may coincide with the center position of the full image captured by the rear-facing camera, and the center position of the preview image displayed in area 301B may coincide with the center position of the full image captured by the front-facing camera. In this case, the preview images displayed in areas 301A and 301B are obtained by center cropping.
At 1x zoom, the size of the crop region from which the preview image displayed in area 301A is cropped out of the full image captured by the rear-facing camera may be as large as area 301A itself. Likewise, at 1x zoom, the size of the crop region from which the preview image displayed in area 301B is cropped out of the full image captured by the front-facing camera may be as large as area 301B itself.
Possibly, when the zoom factor of one of the N cameras is lowered, for example to 0.7x, the preview image displayed in the area corresponding to that camera may be the full image captured by that camera. For example, the user may lower the zoom factor by pinching two fingers together in area 301A, so as to view the full image captured by the rear-facing camera in area 301A. The pinch-in operation may also be called a two-finger zoom-out operation.
Not limited to the horizontal split-screen manner shown in FIG. 6A, area 301A and area 301B may be laid out in the preview interface in various manners, for example picture-in-picture; this application does not limit this.
Stage 2 (S106-S107): adjusting the framing range of a camera in the preview interface
S106: The electronic device 100 detects a first user operation in the first area. The first area may be one of the N areas, a first preview image may be displayed in the first area, and the first preview image is obtained by cropping the full image captured by the first camera.
Taking the preview interface shown in FIG. 6A-6B as an example, the first area may be area 301A, the first preview image may be the preview image displayed in area 301A, and the first camera may be the rear-facing camera. In this case, the first user operation may be a slide operation in area 301A, for example a left-slide or right-slide operation. The first user operation may also be another type of user operation for area 301A, such as a voice instruction.
S107: The electronic device 100 displays a second preview image in the first area. The second preview image is also obtained by cropping the full image captured by the first camera. Within the full image captured by the first camera, the position of the second preview image differs from the position of the first preview image.
Taking the preview interface shown in FIG. 6A-6F as an example, when a left-slide operation is detected in area 301A, the center position of the second preview image displayed in area 301A deviates from the center position of the first preview image; that is, it is no longer the center position of the full image captured by the rear-facing camera. In this way, the user can change the framing range that the rear-facing camera presents in area 301A through a slide operation.
Specifically, when the first user operation is a left-slide operation, the second preview image is closer to the right boundary of the full image captured by the first camera than the first preview image. As shown in FIG. 6A-6C, the preview image displayed in area 301A in FIG. 6B is closer to the right boundary of the full image captured by the rear-facing camera than the preview image displayed in area 301A in FIG. 6A. In this way, through a left-slide operation in area 301A, the user can see content of the full image captured by the rear-facing camera that is closer to its right boundary, for example making scenery on the right side of that full image appear in area 301A.
Specifically, when the first user operation is a right-slide operation, the second preview image is closer to the left boundary of the full image captured by the first camera than the first preview image. As shown in FIG. 6D-6F, the preview image displayed in area 301B in FIG. 6F is closer to the left boundary of the full image captured by the front-facing camera than the preview image displayed in area 301B in FIG. 6D. In this way, through a right-slide operation in area 301B, the user can see content of the full image captured by the front-facing camera that is closer to its left boundary, for example making scenery on the left side of that full image appear in area 301B.
The second preview image may be as large as the first preview image. The center position of the first preview image may coincide with the center position of the full image captured by the first camera.
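The slide-driven reframing of stage 2 amounts to moving a fixed-size crop window around inside the full image, in the direction opposite to the on-screen swipe, clamped to the image boundary. A sketch under those assumptions (the function name and sign convention for `swipe_dx` are illustrative, not the patent's implementation):

```python
import numpy as np

def shift_crop(frame, crop_h, crop_w, offset_x, swipe_dx):
    """Return a new crop and horizontal offset after a swipe.

    offset_x is measured from the centered position. A left swipe
    (negative swipe_dx on screen) moves the crop window to the RIGHT
    within the full image, revealing content near the right boundary,
    and vice versa. The offset is clamped so the window stays inside
    the frame.
    """
    h, w = frame.shape[:2]
    offset_x -= swipe_dx                      # window moves opposite to the swipe
    max_off = (w - crop_w) // 2
    offset_x = max(-max_off, min(max_off, offset_x))
    left = (w - crop_w) // 2 + offset_x
    top = (h - crop_h) // 2
    return frame[top:top + crop_h, left:left + crop_w], offset_x

frame = np.tile(np.arange(1920), (1080, 1))   # pixel value = column index
crop, offset = shift_crop(frame, 540, 960, 0, -100)  # a 100 px left swipe
print(crop[0, 0])  # 580: window starts 100 px right of the centered position (480)
```

The clamp is what keeps the second preview image the same size as the first even at the image boundary.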
When the first user operation occurs in the first area, the preview image in the first area changes from the first preview image to the second preview image, but the framing ranges of the preview images in the other areas of the preview interface do not change. That is, when the first user operation is detected, within the full image captured by another one of the N cameras (which may be called the second camera), the position of preview image B is the same as the position of preview image A, where preview image A is the preview image displayed in another area (which may be called the second area) before the first user operation occurs, and preview image B is the preview image displayed in the second area after the first user operation occurs. In this way, the user can individually adjust the framing range that one camera presents in the preview interface without affecting the framing ranges presented by the other cameras.
After the first user operation occurs in the first area, that is, after the user adjusts the framing range of the first camera in the preview interface, the user can further adjust the framing ranges of other cameras in the preview interface. For example, the electronic device may detect a user operation such as a left-slide or right-slide in another area (which may be called the second area), and change the preview image displayed in the second area from preview image C to preview image D. Within the full image captured by the second camera, the position of preview image D differs from the position of preview image C. In this way, through user operations such as left-slides or right-slides in the second area, the user can change the framing range that the second camera presents in the second area.
Stage 3 (S108-S109): recording a video
S108: The electronic device 100 detects a second user operation. The second user operation is a user operation instructing to start recording a video, for example a tap on control 303 shown in FIG. 6A.
S109: The electronic device 100 starts recording a video and displays the shooting interface, which also includes the aforementioned N areas.
During recording, the user can likewise adjust the framing range that a camera presents in the shooting interface through user operations such as left-slides or right-slides; the procedure is the same as adjusting a camera's framing range in the preview interface. Likewise, when the electronic device detects a user operation (such as a left-slide or right-slide) acting on the first area, within the full image captured by the first camera, the position of the preview image then displayed in the first area differs from the position of the preview image previously displayed in the first area, where "previously" means before the electronic device detected that user operation in the first area.
After adjusting a camera's framing in the preview interface, the user can continue to adjust the framing range that camera presents in the shooting interface through user operations such as left-slides or right-slides. Specifically, the electronic device may detect a user operation such as a left-slide or right-slide (which may be called the fourth user operation) in the first area of the shooting interface, and display a third preview image of the first camera in the first area. The third preview image is obtained by cropping the full image captured by the first camera, and within the full image captured by the first camera, the position of the third preview image differs from the position of the second preview image.
Stage 4 (S110-S113): finishing the recording and playing the video file
S110: The electronic device detects a user operation instructing to stop recording the video, for example a tap on control 303 shown in FIG. 6A. This user operation may be called the fifth user operation.
S111: The electronic device stops recording the video and generates a video file.
Specifically, each frame of the video file includes the preview images displayed in the areas; specifically, the preview images displayed in the areas may first be stitched together.
S112: The electronic device detects a user operation for opening the video file (which may be called the sixth user operation).
S113: The electronic device displays the playback interface, which also includes the aforementioned N areas.
It can be seen that the framing method for multi-channel video recording provided in the embodiments of this application enables the user, during multi-channel shooting, to separately adjust the framing that each working camera presents in the preview frame, so that the framings of the working cameras do not affect one another, avoiding the problem that a change to the framing of one working camera also changes the framings of the other working cameras.
Further, the framing method for multi-channel video recording provided in the embodiments of this application may also provide a face tracking function. Specifically, when the electronic device detects that the full image captured by the first camera includes an image of a first face, the electronic device may display a fifth preview image in the first area; the fifth preview image is obtained by cropping the full image captured by the first camera and may include the image of the first face. When the electronic device detects that the position of the image of the first face changes within the full image captured by the first camera, the electronic device displays a sixth preview image in the first area; the sixth preview image is obtained by cropping the full image captured by the first camera and also includes the image of the first face.
Here, the position of the image of the first face in the sixth preview image may be the same as the position of the image of the first face in the fifth preview image. The image of the first face may be in the central region of the fifth preview image.
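The face tracking behavior just described — keeping the face at the same (here, central) position across successive previews as it moves through the full image — can be sketched as a crop window re-centered on the detected face box and clamped to the image bounds. The function name and the `(x, y, w, h)` box convention are assumptions for the example:

```python
import numpy as np

def face_tracking_crop(frame, face_box, crop_h, crop_w):
    """Crop so that the detected face stays centered in the preview.

    face_box is (x, y, w, h) in full-image coordinates. The crop window
    is centered on the face and clamped to the image bounds, so the face
    keeps the same position in successive previews while it moves within
    the full captured image.
    """
    img_h, img_w = frame.shape[:2]
    fx, fy, fw, fh = face_box
    cx, cy = fx + fw // 2, fy + fh // 2       # face center
    left = min(max(cx - crop_w // 2, 0), img_w - crop_w)
    top = min(max(cy - crop_h // 2, 0), img_h - crop_h)
    return frame[top:top + crop_h, left:left + crop_w]

frame = np.zeros((1080, 1920), dtype=np.uint8)
preview = face_tracking_crop(frame, (1000, 400, 100, 100), 540, 960)
print(preview.shape)  # (540, 960)
```

When the face nears the image boundary the clamp takes over, which is exactly the limitation the wider second camera in the next paragraphs is introduced to relieve.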
To further extend the trackable range of a face, when the electronic device detects that the full image captured by the first camera includes the image of the first face, the electronic device may start a second camera. The second camera may be a wide-angle or ultra-wide-angle camera whose framing range is larger than that of the first camera, and the first face is within the framing range of the second camera. In this case, the electronic device may display a seventh preview image in the first area, and when the electronic device detects that the position of the image of the first face changes within the full image captured by the second camera, the electronic device displays an eighth preview image in the first area. The seventh preview image is obtained by cropping the full image captured by the second camera and includes the image of the first face. The eighth preview image is obtained by cropping the full image captured by the second camera and includes the image of the first face.
Here, the position of the image of the first face in the seventh preview image may be the same as the position of the image of the first face in the eighth preview image. The image of the first face may be in the central region of the seventh preview image.
The face tracking function is applicable to both front-facing and rear-facing shooting scenarios; that is, the first camera may be a front-facing camera or a rear-facing camera.
Further, the framing method for multi-channel video recording provided in the embodiments of this application may also provide a function of adjusting the framing under zoom. Specifically, before detecting the first user operation, the electronic device may also detect a third user operation. The third user operation may be used to increase the zoom factor, for example a user operation of spreading two pinched fingers apart. In response to the third user operation, the electronic device may enlarge the first preview image and display the enlarged first preview image in the first area. Understandably, if at 1x zoom the first preview image is as large as the first area, the enlarged first preview image cannot be displayed in the first area in full; the electronic device may display part of the first preview image in the first area, and that part may be the central region of the first preview image.
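One simple way to model the zoom behavior described above — purely an illustrative assumption, not the patent's formula — is that the crop window taken from the sensor scales inversely with the zoom factor: at 1x it matches the preview region, above 1x it shrinks (content appears magnified), and below 1x (for example 0.7x) it grows toward the full sensor image:

```python
def zoom_crop_size(region_h: int, region_w: int, zoom: float) -> tuple:
    """Size of the sensor crop window for a given zoom factor.

    At 1x the crop window matches the preview region; pinching out
    (zoom > 1) shrinks the window so the cropped content appears
    magnified; pinching in below 1x enlarges it toward the full image.
    """
    return int(region_h / zoom), int(region_w / zoom)

print(zoom_crop_size(540, 960, 2.0))   # (270, 480)
print(zoom_crop_size(540, 960, 0.75))  # (720, 1280)
```

The smaller window is then upscaled to the region size for display, which is why only the central part of the enlarged first preview image remains visible.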
In the framing method for multi-channel video recording provided in the embodiments of this application, when the electronic device detects the first user operation, if the posture of the electronic device has not changed, the electronic device displays the second preview image of the first camera in the first area. That is, only when the posture of the electronic device has not changed does the electronic device adjust the camera's framing range in the preview interface according to the first user operation. When the first user operation is detected, if the posture of the electronic device changes, the electronic device may display a fourth preview image of the first camera in the first area. The fourth preview image may be obtained by cropping the full image captured by the first camera, and the center position of the fourth preview image coincides with the center position of the full framed image of the first camera. That is, when the posture of the electronic device changes, the electronic device may refrain from adjusting the camera's framing range in the preview interface according to the first user operation detected at that time, so that the user can change the optical framing by adjusting the posture of the electronic device.
For content not mentioned in the method embodiment of FIG. 15, refer to the foregoing UI embodiments; details are not repeated here.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (22)

  1. A framing method for multi-channel video recording, applied to an electronic device having a display and M cameras, where M ≥ 2 and M is a positive integer, wherein the method comprises:
    the electronic device turning on N cameras, where N ≤ M and N is a positive integer;
    the electronic device capturing images through the N cameras;
    the electronic device displaying a preview interface and part or all of the images captured by each of the N cameras, wherein the preview interface comprises N areas, and the part or all of the images captured by each of the N cameras are respectively displayed in the N areas;
    the electronic device detecting a first user operation in a first area, wherein the first area is one of the N areas, a first preview image is displayed in the first area, and the first preview image is obtained by cropping a full image captured by a first camera;
    the electronic device displaying a second preview image in the first area, wherein the second preview image is also obtained by cropping the full image captured by the first camera, and within the full image captured by the first camera, a position of the second preview image differs from a position of the first preview image;
    the electronic device detecting a second user operation; and
    the electronic device starting to record a video and displaying a shooting interface, wherein the shooting interface comprises the N areas.
  2. The method according to claim 1, wherein the first user operation comprises a slide operation, and within the full image captured by the first camera, a direction from a center position of the first preview image toward a center position of the second preview image is opposite to a sliding direction of the slide operation.
  3. The method according to claim 2, wherein if the first user operation is a left-slide operation, the second preview image is closer to a right boundary of the full image captured by the first camera than the first preview image.
  4. The method according to claim 2 or 3, wherein if the first user operation is a right-slide operation, the second preview image is closer to a left boundary of the full image captured by the first camera than the first preview image.
  5. The method according to any one of claims 1 to 4, wherein the center position of the first preview image coincides with a center position of the full image captured by the first camera.
  6. The method according to any one of claims 1 to 5, wherein the first preview image is as large as the second preview image.
  7. The method according to any one of claims 1 to 6, further comprising:
    before detecting the first user operation, the electronic device further detecting a third user operation; and
    the electronic device enlarging the first preview image and displaying the enlarged first preview image in the first area.
  8. The method according to any one of claims 1 to 7, further comprising:
    the electronic device detecting a fourth user operation in the first area of the shooting interface; and
    the electronic device displaying a third preview image of the first camera in the first area of the shooting interface, wherein the third preview image is obtained by cropping the full image captured by the first camera, and within the full image captured by the first camera, a position of the third preview image differs from the position of the second preview image.
  9. The method according to any one of claims 1 to 8, wherein the electronic device displaying the second preview image of the first camera in the first area specifically comprises: when detecting the first user operation, if a posture of the electronic device has not changed, the electronic device displaying the second preview image of the first camera in the first area;
    and the method further comprises: when detecting the first user operation, if the posture of the electronic device changes, the electronic device displaying a fourth preview image of the first camera in the first area, wherein the fourth preview image is obtained by cropping the full image captured by the first camera, and a center position of the fourth preview image coincides with a center position of the full framed image of the first camera.
  10. The method according to any one of claims 1 to 8, further comprising:
    the electronic device detecting that the full image captured by the first camera includes an image of a first face;
    the electronic device displaying a fifth preview image in the first area, wherein the fifth preview image is obtained by cropping the full image captured by the first camera, and the fifth preview image includes the image of the first face;
    the electronic device detecting that a position of the image of the first face in the full image captured by the first camera changes; and
    the electronic device displaying a sixth preview image in the first area, wherein the sixth preview image is obtained by cropping the full image captured by the first camera, and the sixth preview image includes the image of the first face.
  11. The method according to claim 10, wherein a position of the image of the first face in the sixth preview image is the same as a position of the image of the first face in the fifth preview image.
  12. The method according to claim 10 or 11, wherein the image of the first face is in a central region of the fifth preview image.
  13. The method according to any one of claims 1 to 8, further comprising:
    the electronic device detecting that the full image captured by the first camera includes an image of a first face;
    the electronic device starting a second camera, wherein a framing range of the second camera is larger than a framing range of the first camera, and the first face is within the framing range of the second camera;
    the electronic device displaying a seventh preview image in the first area, wherein the seventh preview image is obtained by cropping a full image captured by the second camera, and the seventh preview image includes the image of the first face;
    the electronic device detecting that a position of the image of the first face in the full image captured by the second camera changes; and
    the electronic device displaying an eighth preview image in the first area, wherein the eighth preview image is obtained by cropping the full image captured by the second camera, and the eighth preview image includes the image of the first face.
  14. The method according to claim 13, wherein a position of the image of the first face in the seventh preview image is the same as a position of the image of the first face in the eighth preview image.
  15. The method according to claim 13 or 14, wherein the image of the first face is in a central region of the seventh preview image.
  16. The method according to any one of claims 10 to 15, wherein the first camera is a front-facing camera or a rear-facing camera.
  17. The method according to any one of claims 1 to 16, further comprising:
    the electronic device detecting a fifth user operation;
    the electronic device stopping recording the video and generating a video file;
    the electronic device detecting a sixth user operation on the video file; and
    the electronic device displaying a playback interface, wherein the playback interface comprises the N areas.
  18. A framing method for multi-channel video recording, applied to an electronic device having a display and M cameras, where M ≥ 2 and M is a positive integer, wherein the method comprises:
    the electronic device turning on N cameras, where N ≤ M and N is a positive integer;
    the electronic device capturing images through the N cameras;
    the electronic device displaying a preview interface and part or all of the images captured by each of the N cameras, wherein the preview interface comprises N areas, and the part or all of the images captured by each of the N cameras are respectively displayed in the N areas;
    the electronic device detecting a seventh user operation in a first area;
    the electronic device detecting that a posture of the electronic device changes;
    the electronic device displaying a ninth preview image in the first area, wherein a framing range presented by the ninth preview image is the same as a framing range presented by a tenth preview image, the tenth preview image is the image displayed in the first area before the posture of the electronic device changed, the ninth preview image is obtained, after the posture of the electronic device changed, by cropping a full image captured by the first camera, and the tenth preview image is obtained, before the posture of the electronic device changed, by cropping the full image captured by the first camera;
    the electronic device detecting an eighth user operation; and
    the electronic device starting to record a video and displaying a shooting interface, wherein the shooting interface comprises the N areas.
  19. An electronic device, comprising: a display, M cameras, a touch sensor, a memory, one or more processors, a plurality of applications, and one or more programs, where M ≥ 2 and M is a positive integer, and the one or more programs are stored in the memory, wherein when executing the one or more programs, the one or more processors cause the electronic device to implement the method according to any one of claims 1 to 18.
  20. A computer device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein when the processor executes the computer program, the computer device is caused to implement the method according to any one of claims 1 to 18.
  21. A computer program product comprising instructions, wherein when the computer program product runs on an electronic device, the electronic device is caused to perform the method according to any one of claims 1 to 18.
  22. A computer-readable storage medium comprising instructions, wherein when the instructions run on an electronic device, the electronic device is caused to perform the method according to any one of claims 1 to 18.
PCT/CN2021/089075 2020-04-22 2021-04-22 Framing method for multi-channel video recording, graphical user interface, and electronic device WO2021213477A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/920,601 US11832022B2 (en) 2020-04-22 2021-04-22 Framing method for multi-channel video recording, graphical user interface, and electronic device
BR112022021413A BR112022021413A2 (pt) 2020-04-22 2021-04-22 Método de enquadramento para gravação de vídeo de múltiplos canais, interface gráfica de usuário, e dispositivo eletrônico
EP21792314.3A EP4131926A4 (en) 2020-04-22 2021-04-22 VISION PROCEDURE FOR MULTI-CHANNEL VIDEO RECORDING, GRAPHIC USER INTERFACE AND ELECTRONIC DEVICE

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010324919.1 2020-04-22
CN202010324919.1A CN113542581A (zh) Framing method for multi-channel video recording, graphical user interface, and electronic device

Publications (1)

Publication Number Publication Date
WO2021213477A1 true WO2021213477A1 (zh) 2021-10-28

Family

ID=78124079

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/089075 WO2021213477A1 (zh) 2020-04-22 2021-04-22 Framing method for multi-channel video recording, graphical user interface, and electronic device

Country Status (5)

Country Link
US (1) US11832022B2 (zh)
EP (1) EP4131926A4 (zh)
CN (1) CN113542581A (zh)
BR (1) BR112022021413A2 (zh)
WO (1) WO2021213477A1 (zh)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104349063A (zh) * 2014-10-27 2015-02-11 东莞宇龙通信科技有限公司 Method, apparatus and terminal for controlling camera shooting
US20150103222A1 (en) * 2013-10-15 2015-04-16 Samsung Electronics Co., Ltd. Method for adjusting preview area and electronic device thereof
CN107509029A (zh) * 2013-01-07 2017-12-22 华为技术有限公司 Image processing method and apparatus
CN107809581A (zh) * 2017-09-29 2018-03-16 天津远翥科技有限公司 Image processing method and apparatus, terminal device, and unmanned aerial vehicle
CN110072070A (zh) * 2019-03-18 2019-07-30 华为技术有限公司 Multi-channel video recording method and device


Also Published As

Publication number Publication date
US11832022B2 (en) 2023-11-28
US20230156144A1 (en) 2023-05-18
BR112022021413A2 (pt) 2022-12-13
CN113542581A (zh) 2021-10-22
EP4131926A1 (en) 2023-02-08
EP4131926A4 (en) 2023-08-16

