CN113923351A - Method, apparatus, storage medium, and program product for exiting multi-channel video shooting

Info

Publication number: CN113923351A (application CN202111064643.9A; granted as CN113923351B)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: Wang Yandong (王燕东), Han Linlin (韩林林)
Applicant/Assignee: Honor Device Co Ltd
Legal status: Granted; Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces


Abstract

The embodiments of the present application provide a method, device, storage medium, and program product for exiting multi-channel video shooting. The method includes: receiving a video shooting operation in a multi-channel shooting mode; shooting a first video image according to the video shooting operation, where the first video image includes N video pictures and N ≥ 2; receiving a video picture exit operation and closing M of the N video pictures, where 1 ≤ M < N; and shooting a second video image, where the second video image includes the remaining N-M video pictures. With the technical solution provided by the embodiments of the present application, one or more video pictures can be closed directly in the multi-channel shooting mode, improving the user experience.

Description

Method, apparatus, storage medium, and program product for exiting multi-channel video shooting
Technical Field
The present application relates to the field of computer technology, and in particular, to a method, an apparatus, a storage medium, and a program product for exiting multi-channel video shooting.
Background
To improve the user experience, electronic devices such as mobile phones and tablet computers are usually equipped with multiple cameras, for example, two front cameras and four rear cameras. These cameras can provide the user with a rich set of shooting modes, such as a single-channel shooting mode or a multi-channel shooting mode. In the single-channel shooting mode, one camera shoots video, and the one video picture it captures is displayed and/or encoded; in the multi-channel shooting mode, two or more cameras shoot video, and the two or more video pictures they capture are displayed and/or encoded.
In practice, while shooting video in the multi-channel shooting mode, a user may need to close one or several of the video pictures. In the prior art, however, the user can only switch between preset video shooting modes and cannot flexibly select which video picture or pictures to close, which makes for a poor user experience.
Disclosure of Invention
In view of this, the present application provides a method, an apparatus, a storage medium, and a program product for exiting multi-channel video shooting, so as to solve the problem that, in the prior art, a user shooting video in a multi-channel shooting mode cannot flexibly select which video picture or pictures to close.
In a first aspect, an embodiment of the present application provides a method for exiting multi-channel video shooting, which is applied to a terminal device, and the method includes:
receiving a video shooting operation in a multi-channel shooting mode;
shooting a first video image according to the video shooting operation, where the first video image includes N video pictures and N ≥ 2;
receiving a video picture exit operation and closing M of the N video pictures, where 1 ≤ M < N;
and shooting a second video image, where the second video image includes the remaining N-M video pictures.
Preferably, before the receiving a video picture exit operation and closing M of the N video pictures, the method further includes:
displaying N exit controls corresponding one-to-one to the N video pictures, where each exit control is used to control the closing of its corresponding video picture.
Preferably, the receiving a video picture exit operation and closing M of the N video pictures includes:
receiving, by M of the N exit controls, a video picture exit operation, and closing the M video pictures corresponding to those M exit controls.
Preferably, the displaying N exit controls corresponding to the N video pictures includes:
receiving a start-exit operation and displaying the N exit controls corresponding to the N video pictures.
Preferably, the display positions of the N exit controls match the display positions of the N video pictures.
Preferably, N-M = 1, and the shooting a second video image includes:
adjusting the size and/or position of the remaining N-M video pictures according to an adjustment strategy to obtain the second video image, where the adjustment strategy includes size information and/or position information for the N-M video pictures.
Preferably, N-M ≥ 2, and the shooting a second video image includes:
rendering and merging the N-M video pictures according to texture information and position information of the N-M video pictures to obtain the second video image.
Preferably, before the rendering and merging the N-M video pictures according to texture information and position information of the N-M video pictures, the method further includes:
adjusting the size and/or position of at least one of the N-M video pictures according to an adjustment strategy, where the adjustment strategy includes size information and/or position information of each of the N-M video pictures.
In a second aspect, embodiments of the present application provide a terminal device including a memory for storing computer program instructions and a processor for executing those instructions, where the computer program instructions, when executed by the processor, trigger the terminal device to perform the method of any one of the first aspect.
In a third aspect, embodiments of the present application provide a computer-readable storage medium including a stored program, where the program, when executed, controls a device in which the computer-readable storage medium is located to perform the method of any one of the first aspect.
In a fourth aspect, the present application provides a computer program product containing executable instructions that, when executed on a computer, cause the computer to perform the method of any one of the first aspect.
With the technical solution provided by the embodiments of the present application, one or more video pictures can be closed directly in the multi-channel shooting mode, improving the user experience.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below. The drawings described below are only some embodiments of the present application; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a terminal device according to an embodiment of the present application;
fig. 2A is a schematic view of a shooting scene in a front-back double-shot mode according to an embodiment of the present application;
fig. 2B is a schematic view of a shooting scene in a front-back picture-in-picture mode according to an embodiment of the present application;
fig. 2C is a schematic view of a shooting scene in a rear picture-in-picture mode according to an embodiment of the present application;
fig. 3 is a schematic diagram of a shooting mode switching scene in the related art;
fig. 4 is a schematic flowchart of a method for exiting multi-channel video shooting according to an embodiment of the present application;
fig. 5 is a schematic diagram of an exit scene of multi-channel video shooting according to an embodiment of the present application;
fig. 6 is a schematic diagram of another exit scene of multi-channel video shooting according to an embodiment of the present application;
fig. 7 is a schematic diagram of another exit scene of multi-channel video shooting according to an embodiment of the present application;
fig. 8 is a schematic diagram of another exit scene of multi-channel video shooting according to an embodiment of the present application;
fig. 9A is a schematic view of an adjustment scene of a video picture according to an embodiment of the present application;
fig. 9B is a schematic view of another adjustment scene of a video picture according to an embodiment of the present application;
fig. 10A is a schematic view of another adjustment scene of a video picture according to an embodiment of the present application;
fig. 10B is a schematic view of another adjustment scene of a video picture according to an embodiment of the present application;
fig. 11 is a block diagram of a software structure of a terminal device according to an embodiment of the present application;
fig. 12 is a schematic flowchart of another method for exiting multi-channel video shooting according to an embodiment of the present application;
fig. 13A is a schematic diagram of a rendering scene according to an embodiment of the present application;
fig. 13B is a schematic diagram of another rendering scene according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
For better understanding of the technical solutions of the present application, the following detailed descriptions of the embodiments of the present application are provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only a few embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
Referring to fig. 1, a schematic diagram of a terminal device according to an embodiment of the present application is shown. In fig. 1, the terminal device is exemplified by a mobile phone 100, and fig. 1 shows a front view and a rear view of the mobile phone 100. Two front cameras 111 and 112 are arranged on the front side of the mobile phone 100, and four rear cameras 121, 122, 123, and 124 are arranged on the rear side. These cameras allow multiple shooting modes to be provided to the user, who can select the shooting mode appropriate to the shooting scene, improving the user experience.
It is to be understood that the illustration of fig. 1 is merely an example and should not be taken as limiting the scope of the present application. For example, the number and positions of cameras may differ between mobile phones. In addition to a mobile phone, the terminal device according to the embodiments of the present application may be a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable terminal device, an augmented reality (AR) device, a virtual reality (VR) device, an in-vehicle device, a smart car, a smart speaker, a robot, smart glasses, a smart television, or the like.
It should be noted that, in some possible implementations, a terminal device may also be referred to as an electronic device, a User Equipment (UE), and the like, which is not limited in this embodiment of the present application.
In some possible implementations, the shooting modes involved in the terminal device may include a single-shot mode and a multi-shot mode.
In the single-channel shooting mode, one camera shoots video, and the one video picture it captures is displayed and/or encoded, for example, in a front single-shot mode or a rear single-shot mode. In the multi-channel shooting mode, two or more cameras shoot video, and the two or more video pictures they capture are displayed and/or encoded, for example, in a front double-shot mode, a rear double-shot mode, a front-back double-shot mode, a front picture-in-picture mode, a rear picture-in-picture mode, or a front-back picture-in-picture mode.
Specifically, in the front single-shot mode, one front camera shoots video; in the rear single-shot mode, one rear camera shoots video; in the front double-shot mode, two front cameras shoot video; in the rear double-shot mode, two rear cameras shoot video; in the front-back double-shot mode, one front camera and one rear camera shoot video; in the front picture-in-picture mode, two front cameras shoot video, and the picture shot by one front camera is placed within the picture shot by the other; in the rear picture-in-picture mode, two rear cameras shoot video, and the picture shot by one rear camera is placed within the picture shot by the other; in the front-back picture-in-picture mode, one front camera and one rear camera shoot video, and the picture shot by one of them is placed within the picture shot by the other.
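For illustration, the mode list above can be summarized as a simple data type. The following Kotlin sketch is not from the patent; the names and fields are assumptions chosen only to restate the modes just described.

```kotlin
// Illustrative sketch only: mode names and fields are assumptions based on the
// modes described above, not identifiers from the patent.
enum class ShootingMode(val cameraCount: Int, val pictureInPicture: Boolean) {
    FRONT_SINGLE(1, false),
    REAR_SINGLE(1, false),
    FRONT_DOUBLE(2, false),
    REAR_DOUBLE(2, false),
    FRONT_REAR_DOUBLE(2, false),
    FRONT_PIP(2, true),
    REAR_PIP(2, true),
    FRONT_REAR_PIP(2, true);

    // A mode is "multi-channel" when it captures two or more video pictures.
    val isMultiChannel: Boolean get() = cameraCount >= 2
}
```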
Referring to fig. 2A, a schematic view of a shooting scene in a front-back double-shot mode according to an embodiment of the present application is provided. In a front-back double-shooting mode, a front-facing camera is used for collecting a foreground picture, a rear-facing camera is used for collecting a background picture, and the foreground picture and the background picture are simultaneously displayed in a display interface.
Referring to fig. 2B, a schematic view of a front-back picture-in-picture mode shooting scene is provided in the embodiment of the present application. In the front-back picture-in-picture mode, a front-facing camera is used for collecting a foreground picture, a rear-facing camera is used for collecting a background picture, and the foreground picture is placed in the background picture.
Referring to fig. 2C, a schematic view of a shooting scene in a rear picture-in-picture mode according to an embodiment of the present application is shown. In the rear picture-in-picture mode, one rear camera captures a distant-view picture, another rear camera captures a close-view picture, and the close-view picture is placed within the distant-view picture.
It should be noted that the above-mentioned shooting modes are only some possible implementations listed in the embodiments of the present application, and those skilled in the art may configure other shooting modes according to actual needs, and the embodiments of the present application do not specifically limit this.
In some possible implementations, the shooting modes may also be described as a single-view mode, a dual-view mode, and a picture-in-picture mode. The single-view mode may include the front single-shot mode and the rear single-shot mode; the dual-view mode may include the front double-shot mode, the rear double-shot mode, and the front-back double-shot mode; the picture-in-picture mode may include the front picture-in-picture mode, the rear picture-in-picture mode, and the front-back picture-in-picture mode.
In practice, while shooting video in the multi-channel shooting mode, a user may need to close one or several of the video pictures.
Referring to fig. 3, a schematic diagram of a shooting mode switching scene in the related art is shown. In the application scenario shown in fig. 3, the shooting mode before switching is the front-back double-shot mode, and the shooting mode after switching is the front single-shot mode.
As shown in 3A in fig. 3, in the initial state, the terminal device shoots video in the front-back double-shot mode and captures two video pictures.
As shown in 3B in fig. 3, when the user needs to close a certain video picture (for example, the video picture captured by the rear camera), the user clicks the area corresponding to the "switch" control in the display interface, triggering the switching operation.
As shown in 3C in fig. 3, after the user triggers the switching operation, a shooting mode selection window is displayed in the display interface. The window includes multiple shooting mode identifiers, from which the user can select a shooting mode. It can be understood that the shooting modes offered in the selection window are preset by the system.
As shown in 3D in fig. 3, the user clicks the area corresponding to the "front single-shot" mode in the shooting mode selection window, triggering a shooting mode switching operation that instructs switching from the front-back double-shot mode to the front single-shot mode.
As shown in 3E in fig. 3, after the shooting mode switching is completed, video shooting is performed in the front single-shot mode.
It can be understood that, in the scheme shown in fig. 3, when a user needs to close one or several video pictures, the user can only search the preset shooting modes for a suitable one and then switch to it, which makes for a poor user experience.
Based on this, embodiments of the present application provide a method, an apparatus, a storage medium, and a program product for exiting multi-channel video shooting, to solve the problem that, while shooting video in a multi-channel shooting mode, the user cannot flexibly select which video picture or pictures to close.
Referring to fig. 4, a schematic flowchart of a method for exiting multi-channel video shooting according to an embodiment of the present application is shown. The method can be applied to the terminal device shown in fig. 1 and, as shown in fig. 4, mainly includes the following steps.
Step S401: a video shooting operation in a multi-channel shooting mode is received.
The shooting modes involved in the terminal device may include a single-channel shooting mode and a multi-channel shooting mode. In practical applications, the user can trigger a video shooting operation in the multi-channel shooting mode as needed. In the multi-channel shooting mode, two or more cameras shoot video and capture two or more video pictures. Specifically, the multi-channel shooting mode may include the front double-shot mode, the rear double-shot mode, the front-back double-shot mode, the front picture-in-picture mode, the rear picture-in-picture mode, the front-back picture-in-picture mode, and the like. This is not specifically limited in the embodiments of the present application.
Step S402: shooting a first video image according to the video shooting operation, where the first video image includes N video pictures and N ≥ 2.
When the user triggers the video shooting operation in the multi-channel shooting mode, the terminal device shoots video in that mode, capturing N video pictures and generating a first video image. That is, the first video image includes N video pictures, N ≥ 2.
In a specific implementation, the terminal device may combine the N captured video pictures into the first video image according to a merging strategy corresponding to the multi-channel shooting mode, and send the first video image to the display interface for display and/or to the encoder for encoding to generate a corresponding video file.
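As a rough illustration of this capture-compose-fan-out step, the following Kotlin sketch composes N pictures and hands the result to a display path and an encode path. The Frame and MergePolicy types are hypothetical; the patent does not name these structures.

```kotlin
import android.graphics.RectF

// Hypothetical types standing in for the per-picture data and the
// mode-specific merging strategy described above.
data class Frame(val textureId: Int, val rect: RectF)

fun interface MergePolicy { fun compose(frames: List<Frame>): Frame }

class MultiShotPipeline(
    private val policy: MergePolicy,
    private val display: (Frame) -> Unit,  // send to the display interface
    private val encode: (Frame) -> Unit,   // send to the encoder
) {
    fun onFrames(frames: List<Frame>) {
        require(frames.size >= 2) { "multi-channel mode expects N >= 2 pictures" }
        val firstVideoImage = policy.compose(frames)  // merge N pictures into one image
        display(firstVideoImage)
        encode(firstVideoImage)
    }
}
```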
Step S403: receiving a video picture exit operation and closing M of the N video pictures, where 1 ≤ M < N.
In the embodiments of the present application, when the user needs to close one or more video pictures, the user directly triggers a video picture exit operation, closing the one or more of the N video pictures that need to be closed. It can be understood that the number of video pictures to be closed is less than the total number of video pictures shot in the multi-channel shooting mode. For example, when N = 2, M = 1; when N = 3, M is 1 or 2.
The video picture exit operation in the embodiments of the present application is matched with the M video pictures to be closed; that is, there is a correspondence between the exit operation and the video pictures to be closed. For example, when the multi-channel shooting mode is the front-back double-shot mode, the terminal device shoots a foreground picture and a background picture, and the video picture exit operation may correspond to either of them. When the exit operation corresponds to the foreground picture, the foreground picture is closed; when it corresponds to the background picture, the background picture is closed.
In some possible implementations, to improve the user experience, N exit controls corresponding to the N video pictures are displayed in the display interface of the terminal device. The exit controls correspond one-to-one to the video pictures, and each exit control can be used to close its corresponding video picture. For example, in the front-back double-shot mode, the foreground picture can be closed by triggering the exit control corresponding to the foreground picture, and the background picture can be closed by triggering the exit control corresponding to the background picture.
To further enhance the user experience, the display position of an exit control may match the display position of its corresponding video picture. For example, the exit control may be displayed within the display area of the corresponding video picture, or at one side of that display area. This is not specifically limited in the embodiments of the present application.
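The following Kotlin sketch shows one possible way to match an exit control's position to its picture's display area. The top-right anchoring, control size, and inset are assumptions for illustration, not values from the patent.

```kotlin
import android.graphics.RectF

// Place an exit control inside the display area of its corresponding picture.
// Anchoring to the top-right corner with a fixed inset is an assumption.
fun exitControlRect(picture: RectF, size: Float = 96f, inset: Float = 24f): RectF =
    RectF(
        picture.right - inset - size,  // left
        picture.top + inset,           // top
        picture.right - inset,         // right
        picture.top + inset + size,    // bottom
    )
```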
In some possible implementations, to prevent a long-displayed exit control from blocking the video picture, the exit controls may be displayed only after a start-exit operation is received, and an exit control is no longer displayed after the user triggers it.
Referring to fig. 5, a schematic diagram of an exit scene of multi-channel video shooting according to an embodiment of the present application is shown. The multi-channel shooting mode here is the front-back double-shot mode; specifically, the shot video pictures include a foreground picture and a background picture. An exit control 501 is displayed in the foreground picture, and an exit control 502 is displayed in the background picture. With the technical solution provided by the embodiments of the present application, the user can close the foreground picture by triggering the exit control 501, or close the background picture by triggering the exit control 502.
Referring to fig. 6, a schematic diagram of another exit scene of multi-channel video shooting according to an embodiment of the present application is shown. The multi-channel shooting mode here is the front-back picture-in-picture mode; specifically, the shot video pictures include a foreground picture and a background picture-in-picture. An exit control 601 is displayed in the foreground picture, and an exit control 602 is displayed in the background picture-in-picture. The user can close the foreground picture by triggering the exit control 601, or close the background picture-in-picture by triggering the exit control 602.
It should be noted that fig. 5 and fig. 6 are only possible application scenarios listed in the embodiments of the present application and should not be taken as limiting the scope of the present application.
Step S404: shooting a second video image, where the second video image includes N-M video pictures.
Specifically, after the M video pictures are closed, video shooting continues with the remaining N-M video pictures; that is, the second video image includes the remaining N-M video pictures. For example, in the front-back double-shot mode, after the background picture is closed, the foreground picture continues to be shot. This is described below with reference to specific application scenarios.
Referring to fig. 7, a schematic diagram of another exit scene of multi-channel video shooting according to an embodiment of the present application is shown. In the exit scene shown in fig. 7, the multi-channel shooting mode is the front-back double-shot mode.
As shown in 7A in fig. 7, in the initial state, the terminal device shoots video in the front-back double-shot mode, shooting two video pictures: a foreground picture and a background picture.
As shown in 7B in fig. 7, when the user needs to exit the background picture, the user clicks the area corresponding to the "exit" control in the display interface, triggering a start-exit operation. When the start-exit operation is triggered, exit controls are displayed in the display interface: an exit control 701 corresponding to the foreground picture and used to control closing the foreground picture, and an exit control 702 corresponding to the background picture and used to control closing the background picture.
As shown in 7C in fig. 7, the user may trigger the exit control 701 or the exit control 702 as desired. In this embodiment, the user triggers the exit control 702 to close the background picture.
As shown in 7D in fig. 7, after the background picture is closed, the terminal device shoots only the foreground picture and displays and/or encodes it.
Referring to fig. 8, a schematic diagram of another exit scene of multi-channel video shooting according to an embodiment of the present application is shown. In the exit scene shown in fig. 8, the multi-channel shooting mode is the front-back picture-in-picture mode.
As shown in 8A in fig. 8, in the initial state, the terminal device shoots video in the front-back picture-in-picture mode, shooting two video pictures: a foreground picture and a background picture-in-picture. It can be understood that the background picture-in-picture is located within the foreground picture.
As shown in 8B in fig. 8, when the user needs to exit the background picture-in-picture, the user clicks the area corresponding to the "exit" control in the display interface, triggering a start-exit operation. When the start-exit operation is triggered, exit controls are displayed in the display interface: an exit control 801 corresponding to the foreground picture and used to control closing the foreground picture, and an exit control 802 corresponding to the background picture-in-picture and used to control closing the background picture-in-picture.
As shown in 8C in fig. 8, the user may trigger the exit control 801 or the exit control 802 as desired. In this embodiment, the user triggers the exit control 802 to close the background picture-in-picture.
As shown in 8D in fig. 8, after the background picture-in-picture is closed, the terminal device shoots only the foreground picture and displays and/or encodes it.
With the technical solution provided by the embodiments of the present application, one or more video pictures can be closed directly in the multi-channel shooting mode, improving the user experience.
In practical applications, after one or several video pictures in the multi-channel shooting mode are closed, the size and/or position of the remaining video pictures may need to be adjusted because the number of video pictures has changed. For example, in the application scenario shown in fig. 7, after the background picture is closed, the foreground picture needs to be enlarged so that it covers the whole display area, improving the user experience.
In addition, after one or several video pictures are closed, if two or more video pictures remain, the remaining pictures need to be rendered and merged during shooting. The two cases, distinguished by the number of remaining video pictures, are described below.
In some possible implementations, if one video picture remains after the M video pictures are exited, that is, N-M = 1, shooting a second video image includes: adjusting the size and/or position of the remaining video picture according to an adjustment strategy to obtain the second video image, where the adjustment strategy includes size information and/or position information for the remaining picture. Of course, in some possible implementations, the size and position of the remaining video picture may not need to be adjusted; this is not specifically limited in the embodiments of the present application.
Adjusting the size of a video picture may be understood as adjusting its field of view, or as enlarging, shrinking, or cropping the picture relative to the original; this is not specifically limited in the embodiments of the present application.
Referring to fig. 9A, a schematic view of an adjustment scene of a video picture according to an embodiment of the present application is shown. In the application scenario illustrated in fig. 9A, the multi-channel shooting mode is the front-back double-shot mode, in which a foreground picture and a background picture are shot, each of size 1080 x 960. After the background picture is closed, the size of the foreground picture is adjusted from 1080 x 960 to 1080 x 1920 so that the foreground picture fills the display interface, improving the user experience.
In addition, if the position of a picture in the display interface is defined by the position of its upper-left corner, then in this embodiment only the size of the foreground picture needs to be adjusted; its position does not change.
Referring to fig. 9B, a schematic view of another adjustment scene of a video picture according to an embodiment of the present application is shown. In the application scenario illustrated in fig. 9B, the multi-channel shooting mode is again the front-back double-shot mode, with a foreground picture and a background picture each of size 1080 x 960. After the foreground picture is closed, the size of the background picture is adjusted from 1080 x 960 to 1080 x 1920 so that the background picture fills the display interface, improving the user experience.
In addition, if the position of a picture in the display interface is defined by the position of its upper-left corner, then in this embodiment the size and the position of the background picture need to be adjusted at the same time; specifically, the position of the background picture is adjusted from point B to point A.
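The adjustment strategy for N-M = 1 can be sketched as below, using the numbers from fig. 9A and 9B. The AdjustPolicy shape and field names are assumptions for illustration, not structures from the patent.

```kotlin
import android.graphics.Rect

// Hypothetical adjustment strategy: optional target size and target origin
// (upper-left corner) for the remaining video picture.
data class AdjustPolicy(val size: Pair<Int, Int>? = null, val origin: Pair<Int, Int>? = null)

fun adjust(picture: Rect, policy: AdjustPolicy): Rect {
    val (w, h) = policy.size ?: (picture.width() to picture.height())
    val (x, y) = policy.origin ?: (picture.left to picture.top)
    return Rect(x, y, x + w, y + h)
}

// Fig. 9B: the 1080 x 960 background picture whose upper-left corner is at
// point B (0, 960) becomes a full-screen 1080 x 1920 picture at point A (0, 0).
val fullScreen = adjust(
    Rect(0, 960, 1080, 1920),
    AdjustPolicy(size = 1080 to 1920, origin = 0 to 0),
)
```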
It should be noted that those skilled in the art may define the position of a picture in the display interface in other ways; this is not specifically limited in the embodiments of the present application.
In some possible implementations, if two or more video pictures remain after the M video pictures are exited, that is, N-M ≥ 2, shooting a second video image includes: rendering and merging the N-M video pictures according to texture information and position information of the N-M video pictures to obtain the second video image.
In some possible implementations, before the N-M video pictures are rendered and merged, the size and/or position of at least one of them needs to be adjusted; after the adjustment, the adjusted N-M video pictures are rendered and merged according to their texture information and position information to obtain the second video image.
Referring to fig. 10A, a schematic view of another adjustment scene of a video picture according to an embodiment of the present application is shown. In the application scenario shown in fig. 10A, the multi-channel shooting mode shoots three video pictures displayed side by side: a first, a second, and a third video picture. The first and second video pictures have a size of 540 x 960, and the third video picture has a size of 1080 x 960. After the first video picture is closed, the second and third video pictures remain. The size of the second video picture is then adjusted from 540 x 960 to 1080 x 960, and its position is adjusted accordingly. The second and third video pictures are then rendered and merged according to their texture information and position information to obtain the second video image, in which the two pictures are displayed side by side.
In this application scenario, because the second and third video pictures exist at the same time, they need to be rendered and merged according to their texture information and position information to obtain the second video image.
Referring to fig. 10B, a schematic view of another adjustment scene of a video picture according to an embodiment of the present application is shown. In the application scenario shown in fig. 10B, the multi-channel shooting mode shoots three video pictures: a first, a second, and a third video picture, where the first and second video pictures are located within the third video picture, that is, they are picture-in-picture pictures. After the second video picture is closed, the first and third video pictures remain and are rendered and merged according to their texture information and position information to obtain the second video image.
It can be understood that the size and position of the first and third video pictures are not adjusted in this process. Of course, those skilled in the art can adjust the size and position of the first and third video pictures according to actual needs; this is not specifically limited in the embodiments of the present application.
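The render-and-merge step for N-M ≥ 2 can be sketched in OpenGL ES terms: each remaining picture is drawn into the shared output at its own position. This is a rough sketch under stated assumptions; the layer type is hypothetical, and drawTexturedQuad is a placeholder for the usual shader and quad plumbing.

```kotlin
import android.graphics.Rect
import android.opengl.GLES20

// Hypothetical per-picture layer: texture information plus position information.
data class PictureLayer(val textureId: Int, val viewport: Rect)

// Placeholder for shader/program setup, texture binding, and the draw call.
fun drawTexturedQuad(textureId: Int) { /* bind textureId and draw a quad */ }

// Draw each remaining picture into its own region of the output surface.
// Note: GL viewport coordinates have their origin at the bottom-left corner.
fun renderMerged(layers: List<PictureLayer>, outWidth: Int, outHeight: Int) {
    GLES20.glViewport(0, 0, outWidth, outHeight)
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
    for (layer in layers) {  // e.g. the second and third video pictures
        val r = layer.viewport
        GLES20.glViewport(r.left, r.top, r.width(), r.height())
        drawTexturedQuad(layer.textureId)
    }
}
```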
Referring to fig. 11, a block diagram of a software structure of a terminal device according to an embodiment of the present application is shown. The software architecture in this embodiment is merely an example and may also apply to other operating systems. The layered architecture divides the software into several layers, each with a clear role and division of labor, and the layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, an application layer, a framework layer, a hardware abstraction layer, and a hardware layer.
The application layer (App) may comprise a series of application packages; for example, the application packages may include a camera application. The application layer can be divided into a display interface and application logic.
The display interface of the camera application includes a single-view mode, a dual-view mode, a picture-in-picture mode, and the like. Only one video picture is displayed in the single-view mode; two video pictures are displayed side by side in the dual-view mode; and two video pictures are displayed in the picture-in-picture mode, where one picture is located within the other.
The application logic of the camera application includes a multi-shot framework, which comprises a switching control module, a coordinate management module, a display drawing module, and a multi-shot encoding module. The switching control module is used to exit one or more video pictures; the coordinate management module manages the coordinates at which video pictures, or the controls corresponding to them, are displayed in the display interface; the display drawing module draws the content to be displayed; and the multi-shot encoding module encodes the captured video images. It can be understood that the encoded video image typically matches the displayed video image.
The framework layer (FWK) provides an application programming interface (API) and a programming framework for applications in the application layer, including some predefined functions. In fig. 11, the framework layer includes the camera access interface (Camera2 API), a set of interfaces provided by Android for accessing camera devices, which adopts a pipelined design so that data can flow from the camera to the Surface. The Camera2 API includes camera management (CameraManager) and camera device (CameraDevice) classes. CameraManager is the management class for camera devices; through a CameraManager object, the camera device information of the device can be queried and a CameraDevice object obtained. CameraDevice provides a series of fixed parameters related to the camera device, such as its basic settings and output format.
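As a brief illustration of this access path, the following Kotlin sketch queries the CameraManager for a back-facing camera and opens the corresponding CameraDevice. It is a minimal sketch that assumes the CAMERA permission has been granted and that the calling thread has a Looper; capture-session setup and error handling are omitted.

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CameraManager

@Suppress("MissingPermission")  // assumes the CAMERA permission has been granted
fun openFirstBackCamera(context: Context, onOpened: (CameraDevice) -> Unit) {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    // Query device info through CameraManager to pick a back-facing camera.
    val id = manager.cameraIdList.first { cameraId ->
        manager.getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_BACK
    }
    manager.openCamera(id, object : CameraDevice.StateCallback() {
        override fun onOpened(device: CameraDevice) = onOpened(device)
        override fun onDisconnected(device: CameraDevice) = device.close()
        override fun onError(device: CameraDevice, error: Int) = device.close()
    }, null)  // null handler: callbacks are delivered on the current thread's Looper
}
```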
The hardware abstraction layer (HAL) is an interface layer between the operating system kernel and the hardware circuitry, and its purpose is to abstract the hardware. It hides the hardware interface details of a specific platform and provides a virtual hardware platform for the operating system, making the system hardware-independent and portable across multiple platforms. In fig. 11, the HAL includes the camera hardware abstraction layer (Camera HAL), which includes Device 1, Device 2, Device 3, and the like. It can be understood that Device 1, Device 2, and Device 3 are abstractions of cameras.
The hardware layer (HW) is the hardware at the lowest level of the operating system. In fig. 11, HW includes CameraDevice 1, CameraDevice 2, CameraDevice 3, and the like, which may correspond to the multiple cameras on the terminal device.
Referring to fig. 12, a schematic flowchart of another method for exiting multi-channel video shooting according to an embodiment of the present application is shown. The method can be applied to the software structure shown in fig. 11 and, as shown in fig. 12, mainly includes the following steps.
Step S1201: triggering a start-exit operation.
In this embodiment, in the initial state, the terminal device shoots video in the dual-view mode. It can be understood that the video image shot in the dual-view mode includes two video pictures: a foreground picture and a background picture.
When the user needs to exit one of the video pictures, the user triggers the start-exit operation in the display interface.
Step S1202: the coordinate management module calculates the exit control coordinates.
When the terminal device receives the start-exit operation triggered by the user, it calculates, through the coordinate management module, the coordinates of the exit controls to be displayed. The exit control coordinates correspond to the display positions of the exit controls in the display interface.
For example, in the dual-view mode, two exit controls need to be displayed: one corresponding to the foreground picture and one corresponding to the background picture. The exit control corresponding to the foreground picture is displayed within the display area of the foreground picture, and the exit control corresponding to the background picture within the display area of the background picture.
Step S1203: the coordinate management module sends the exit control coordinates to the display drawing module.
After determining the exit control coordinates, the coordinate management module sends them to the display drawing module so that the display drawing module can draw the exit controls at the corresponding positions in the display interface.
Step S1204: the display drawing module draws and displays the exit controls.
After receiving the exit control coordinates, the display drawing module draws the exit controls at the positions corresponding to those coordinates and displays them in the display interface.
Step S1205: triggering the exit control corresponding to the background picture.
It can be understood that, at this time, the display interface in the dual-view mode includes two exit controls (refer to fig. 5), located in the display areas of the foreground picture and the background picture respectively.
The user can click the exit control in the display area of the foreground picture to exit the foreground picture, or click the exit control in the display area of the background picture to exit the background picture.
In this embodiment, triggering the exit control corresponding to the background picture is taken as the example.
Step S1206: the switching control module sends a disconnect-background instruction.
In a specific implementation, after the user triggers the exit control corresponding to the background picture, the switching control module sends a disconnect-background instruction to the framework layer, and the instruction is passed on to the hardware abstraction layer and the hardware layer in turn, so that the framework layer, the hardware abstraction layer, and the hardware layer each disconnect the background picture's data stream.
Step S1207: the multi-shot encoding module encodes the shot foreground picture.
In a specific implementation, after the background picture is closed, only the foreground picture remains, and the multi-shot encoding module encodes the shot foreground picture to generate a corresponding video file.
In some possible implementations, the position and size of the foreground picture may also need to be adjusted; for details, refer to the description of the embodiments shown in fig. 9A and 9B, which is not repeated here.
It should be noted that, when the position of the foreground picture is adjusted, its coordinates may be calculated by the coordinate management module. When the size of the foreground picture is adjusted, it may be adjusted by the display drawing module, or the switching control module may control the hardware abstraction layer to report a foreground picture of the size required by the upper layer; this is not specifically limited in the embodiments of the present application.
Step S1208: the display drawing module draws the shot foreground picture.
In a specific implementation, after the background picture is closed, only the foreground picture remains, and the display drawing module draws the shot foreground picture.
In some possible implementations, the position and size of the foreground picture may need to be adjusted before it is drawn for display; for the adjustment, refer to the description in step S1207, which is not repeated here. In addition, the display drawing module may draw by calling an OpenGL renderer, as described in detail below.
Step S1209: the display interface displays the foreground picture.
Specifically, after the display drawing module finishes drawing the foreground picture, the drawn foreground picture is sent to the display interface for display.
In one possible implementation, the display drawing module may call an Open Graphics Library (OpenGL) renderer to render the video data.
Referring to fig. 13A, a schematic diagram of a rendering scene according to an embodiment of the present application is shown. To display and encode video images separately, two rendering engines are generally provided: an OpenGL display rendering engine and an OpenGL encoding rendering engine. Both may call the OpenGL renderer to render images.
In the single-view mode, the OpenGL display rendering engine may monitor the one video image through a first monitoring module and a second monitoring module respectively, with one monitored copy used for display rendering and the other for encoding rendering. Of course, it is also possible to use only one monitoring module, display and render the monitored video image, and then render that image again for encoding. The details are as follows:
The OpenGL display rendering engine monitors the video image captured by the first camera through the first monitoring module and the second monitoring module respectively. The engine passes the video image monitored by the first monitoring module to the OpenGL renderer, which writes it to the display buffer for caching, and passes the video image monitored by the second monitoring module to the OpenGL renderer, which writes it to the encoding buffer. The video image cached in the display buffer is sent to the display interface (SurfaceView) and displayed there. The OpenGL encoding rendering engine takes the video image from the encoding buffer, performs the relevant rendering on it, for example applying beautification or adding a watermark, and sends the rendered video image to the encoding module so that the encoding module performs the corresponding encoding to generate a video file.
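The two-listener arrangement can be sketched with SurfaceTexture frame-available callbacks, one feeding the display path and one the encode path. This is a minimal sketch under stated assumptions: the texture ids must already exist in a current GL context, updateTexImage must run on the GL thread, and the render calls are placeholders.

```kotlin
import android.graphics.SurfaceTexture

// One SurfaceTexture per monitoring module; both receive the camera frames.
fun attachMonitors(displayTexId: Int, encodeTexId: Int) {
    val displayTex = SurfaceTexture(displayTexId)
    val encodeTex = SurfaceTexture(encodeTexId)
    displayTex.setOnFrameAvailableListener { st ->
        st.updateTexImage()  // latch the newest frame (must run on the GL thread)
        // render into the display buffer, then hand off to the SurfaceView
    }
    encodeTex.setOnFrameAvailableListener { st ->
        st.updateTexImage()
        // render into the encoding buffer, then hand off to the encoding module
    }
}
```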
It should be noted that, when the terminal device shoots video through a single camera, no special rendering of the video image is required, so the video image monitored by the first monitoring module of the OpenGL display rendering engine may also be passed directly to the display buffer, and the video image monitored by the second monitoring module directly to the encoding buffer, without going through the OpenGL renderer; this is not limited in the present application.
In the dual-view mode or the picture-in-picture mode, the OpenGL display rendering engine monitors the video images captured by the first camera and the second camera through the first monitoring module and the second monitoring module respectively, and passes the two monitored video images together with a synthesis strategy to the OpenGL renderer. The OpenGL renderer synthesizes the two video images into one video image according to the synthesis strategy and writes it to the display buffer for caching. The video image cached in the display buffer is then sent to the display interface and to the encoding buffer. The OpenGL encoding rendering engine takes the video image from the encoding buffer, performs the relevant rendering on it, for example applying beautification or adding a watermark, and sends the rendered video image to the encoding module so that the encoding module performs the corresponding encoding to generate a video file.
It should be noted that, in the above process, apart from the video file generated by the encoding module, which is in MP4 format, the video images are all in RGB format. That is, the video image monitored by the OpenGL display rendering engine is in RGB format, and the video image output by the OpenGL renderer after rendering and synthesis is also in RGB format; likewise, the video image cached in the display buffer and the video images sent to the display interface and the encoding buffer are in RGB format. The OpenGL encoding rendering engine obtains a video image in RGB format and performs the relevant rendering on it according to the image rendering instruction input by the user, producing a rendered video image that is still in RGB format. The encoding module receives the RGB video image and encodes it to generate a video file in MP4 format.
Referring to fig. 13B, a schematic diagram of another rendering scene according to an embodiment of the present application is shown. The difference from fig. 13A is that, in the single-view mode, the OpenGL display rendering engine monitors only one video image of the terminal device through one monitoring module. For example, the OpenGL display rendering engine monitors the video image captured by the first camera through the first monitoring module and passes it to the OpenGL renderer, which writes it to the display buffer for caching. The video image cached in the display buffer is sent to the display interface, where it is displayed, and is also passed on to the encoding buffer. The OpenGL encoding rendering engine takes the video image from the encoding buffer, performs the relevant rendering on it, for example applying beautification or adding a watermark, and sends the rendered video image to the encoding module so that the encoding module performs the corresponding encoding to generate a video file.
It should be noted that, when the terminal device shoots video through a single camera, no special rendering of the video image is required, so the video image monitored by the first monitoring module of the OpenGL display rendering engine may also be passed directly to the display buffer without going through the OpenGL renderer; this is not limited in the present application.
It should be noted that, in fig. 13A and 13B, the OpenGL display rendering engine, the OpenGL renderer, and the display buffer in the single-view mode are the same as those in the dual-view mode. For ease of illustration, they are drawn in both the single-view mode and the dual-view mode in fig. 13A and 13B.
Specifically, data sharing between the OpenGL display rendering engine and the OpenGL encoding rendering engine may be achieved through SharedContext.
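In EGL terms, such a shared context is created by passing the display engine's context as the share_context argument, so that both contexts see the same texture names. A minimal sketch, assuming an already-initialized EGLDisplay and EGLConfig:

```kotlin
import android.opengl.EGL14
import android.opengl.EGLConfig
import android.opengl.EGLContext
import android.opengl.EGLDisplay

// Create the encoding context sharing objects (e.g. textures) with the
// display context, which is what SharedContext refers to here.
fun createSharedContext(
    display: EGLDisplay,
    config: EGLConfig,
    displayContext: EGLContext,
): EGLContext {
    val attribs = intArrayOf(EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE)
    return EGL14.eglCreateContext(display, config, displayContext, attribs, 0)
}
```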
Corresponding to the above method embodiments, the present application also provides a terminal device including a memory for storing computer program instructions and a processor for executing those instructions, where the computer program instructions, when executed by the processor, trigger the terminal device to perform some or all of the steps in the above method embodiments.
Fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 14, the terminal device 1400 may include a processor 1410, an external memory interface 1420, an internal memory 1421, a Universal Serial Bus (USB) interface 1430, a charging management module 1440, a power management module 1441, a battery 1442, an antenna 1, an antenna 2, a mobile communication module 1450, a wireless communication module 1460, an audio module 1470, a speaker 1470A, a receiver 1470B, a microphone 1470C, an earphone interface 1470D, a sensor module 1480, buttons 1490, a motor 1491, an indicator 1492, a camera 1493, a display 1494, and a Subscriber Identification Module (SIM) card interface 1495, and the like. Wherein the sensor module 1480 may include a pressure sensor 1480A, a gyroscope sensor 1480B, an air pressure sensor 1480C, a magnetic sensor 1480D, an acceleration sensor 1480E, a distance sensor 1480F, a proximity light sensor 1480G, a fingerprint sensor 1480H, a temperature sensor 1480J, a touch sensor 1480K, an ambient light sensor 1480L, a bone conduction sensor 1480M, and the like.
It is to be understood that the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the terminal device 1400. In other embodiments of the present application, terminal device 1400 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 1410 may include one or more processing units, such as: the processor 1410 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to an instruction operation code and a timing signal, thereby controlling instruction fetching and instruction execution.
A memory may also be provided in the processor 1410 for storing instructions and data. In some embodiments, the memory in the processor 1410 is a cache memory. The memory may hold instructions or data that the processor 1410 has just used or uses cyclically. If the processor 1410 needs to use the instructions or data again, they can be fetched directly from this memory. This avoids repeated accesses, reduces the waiting time of the processor 1410, and thereby improves system efficiency.
In some embodiments, processor 1410 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 1410 may include multiple sets of I2C buses. The processor 1410 may be coupled to the touch sensor 1480K, charger, flash, camera 1493, etc. through different I2C bus interfaces. For example: the processor 1410 may be coupled to the touch sensor 1480K via an I2C interface, such that the processor 1410 and the touch sensor 1480K communicate via an I2C bus interface to enable touch functionality of the terminal device 1400.
The I2S interface may be used for audio communication. In some embodiments, processor 1410 may include multiple sets of I2S buses. Processor 1410 may be coupled to audio module 1470 via an I2S bus, enabling communication between processor 1410 and audio module 1470. In some embodiments, the audio module 1470 can communicate audio signals to the wireless communication module 1460 via the I2S interface, enabling answering calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, audio module 1470 and wireless communication module 1460 may be coupled by a PCM bus interface. In some embodiments, the audio module 1470 may also transmit audio signals to the wireless communication module 1460 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 1410 with the wireless communication module 1460. For example: the processor 1410 communicates with a bluetooth module in the wireless communication module 1460 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 1470 may transmit an audio signal to the wireless communication module 1460 through a UART interface, so as to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 1410 with peripheral devices such as the display 1494 and the camera 1493. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, the processor 1410 and the camera 1493 communicate over a CSI interface to implement the capture functionality of the terminal device 1400. The processor 1410 and the display 1494 communicate via the DSI interface to implement the display function of the terminal device 1400.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 1410 with the camera 1493, the display 1494, the wireless communication module 1460, the audio module 1470, the sensor module 1480, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 1430 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 1430 may be used to connect a charger to charge the terminal device 1400, or to transmit data between the terminal device 1400 and a peripheral device. It may also be used to connect an earphone and play audio through the earphone. The interface may further be used to connect other terminal devices, such as AR devices.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only an exemplary illustration, and does not form a structural limitation on the terminal device 1400. In other embodiments of the present application, the terminal device 1400 may also adopt different interface connection manners or a combination of multiple interface connection manners in the foregoing embodiments.
The charging management module 1440 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 1440 may receive charging input from a wired charger via the USB interface 1430. In some wireless charging embodiments, the charging management module 1440 may receive wireless charging input through a wireless charging coil of the terminal device 1400. The charging management module 1440 can charge the battery 1442 and supply power to the terminal device through the power management module 1441.
The power management module 1441 is used to connect the battery 1442, the charging management module 1440 and the processor 1410. The power management module 1441 receives input from the battery 1442 and/or the charging management module 1440, and provides power to the processor 1410, the internal memory 1421, the display 1494, the camera 1493, and the wireless communication module 1460. The power management module 1441 may also be used to monitor parameters such as battery capacity, battery cycle number, battery state of health (leakage, impedance), etc. In other embodiments, a power management module 1441 may also be disposed in the processor 1410. In other embodiments, the power management module 1441 and the charging management module 1440 may be disposed in the same device.
The wireless communication function of the terminal device 1400 may be implemented by the antenna 1, the antenna 2, the mobile communication module 1450, the wireless communication module 1460, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal device 1400 can be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 1450 may provide a solution including 2G/3G/4G/5G wireless communication applied on the terminal device 1400. The mobile communication module 1450 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 1450 may receive electromagnetic waves from the antenna 1, filter, amplify, etc. the received electromagnetic waves, and transmit the electromagnetic waves to the modem processor for demodulation. The mobile communication module 1450 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 1450 may be disposed in the processor 1410. In some embodiments, at least some of the functional blocks of the mobile communication module 1450 may be provided in the same device as at least some of the blocks of the processor 1410.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 1470A, the receiver 1470B, etc.) or displays an image or video through the display 1494. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 1410, and may be located in the same device as the mobile communication module 1450 or other functional modules.
The wireless communication module 1460 may provide solutions for wireless communication applied to the terminal device 1400, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and so on. The wireless communication module 1460 may be one or more devices integrating at least one communication processing module. The wireless communication module 1460 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on electromagnetic wave signals, and transmits the processed signals to the processor 1410. The wireless communication module 1460 may also receive a signal to be transmitted from the processor 1410, frequency modulate it, amplify it, and convert it into electromagnetic waves via the antenna 2 to radiate it out.
In some embodiments, antenna 1 of terminal device 1400 is coupled to the mobile communication module 1450 and antenna 2 is coupled to the wireless communication module 1460, so that terminal device 1400 can communicate with networks and other devices via wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The terminal device 1400 implements a display function through the GPU, the display screen 1494, and the application processor. The GPU is a microprocessor for image processing, connected to the display 1494 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 1410 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 1494 is used to display images, video, and the like. The display 1494 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, terminal device 1400 may include 1 or N display screens 1494, N being a positive integer greater than 1.
The terminal device 1400 may implement a shooting function through the ISP, the camera 1493, the video codec, the GPU, the display 1494, the application processor, and the like.
The ISP is used to process the data fed back by the camera 1493. For example, when a photo is taken, the shutter is opened and light is transmitted through the lens to the camera's photosensitive element, where the optical signal is converted into an electrical signal; the photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise, brightness, and skin color of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 1493.
The camera 1493 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, terminal device 1400 may include 1 or N cameras 1493, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the terminal device 1400 selects a frequency point, the digital signal processor is used to perform fourier transform or the like on the frequency point energy.
Video codecs are used to compress or decompress digital video. Terminal device 1400 may support one or more video codecs. In this way, the terminal device 1400 can play or record videos in a plurality of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can implement applications such as intelligent recognition of the terminal device 1400, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 1420 may be used to connect an external memory card, such as a MicroSD card, to extend the memory capability of the terminal device 1400. The external memory card communicates with the processor 1410 through an external memory interface 1420 to implement data storage functions. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 1421 may be used to store computer-executable program code, which includes instructions. The internal memory 1421 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data (e.g., audio data, a phonebook, etc.) created during use of the terminal device 1400, and the like. In addition, the internal memory 1421 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like. The processor 1410 performs various functional applications of the terminal device 1400 and data processing by executing instructions stored in the internal memory 1421 and/or instructions stored in a memory provided in the processor.
The terminal device 1400 may implement an audio function through the audio module 1470, the speaker 1470A, the receiver 1470B, the microphone 1470C, the earphone interface 1470D, the application processor, and the like. Such as music playing, recording, etc.
The audio module 1470 is used to convert digital audio information into an analog audio signal output and also used to convert an analog audio input into a digital audio signal. The audio module 1470 may also be used to encode and decode audio signals. In some embodiments, the audio module 1470 may be disposed in the processor 1410, or some functional modules of the audio module 1470 may be disposed in the processor 1410.
The speaker 1470A, also referred to as a "horn," is used to convert electrical audio signals into acoustic signals. The terminal apparatus 1400 can listen to music through the speaker 1470A or listen to a handsfree call.
A receiver 1470B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the terminal apparatus 1400 answers a call or voice information, it is possible to answer a voice by bringing the receiver 1470B close to the human ear.
The microphone 1470C, also referred to as a "mic" or "mike", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal into the microphone 1470C by moving the mouth close to the microphone 1470C and speaking. The terminal device 1400 may be provided with at least one microphone 1470C. In other embodiments, the terminal device 1400 may be provided with two microphones 1470C, which may implement a noise reduction function in addition to collecting sound signals. In other embodiments, the terminal device 1400 may further be provided with three, four, or more microphones 1470C to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and so on.
The earphone interface 1470D is used to connect wired earphones. The earphone interface 1470D may be the USB interface 1430, or may be a 3.5mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 1480A is configured to sense a pressure signal, which may be converted into an electrical signal. In some embodiments, the pressure sensor 1480A may be disposed on the display 1494. There are many types of pressure sensors 1480A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates of electrically conductive material. When a force acts on the pressure sensor 1480A, the capacitance between the electrodes changes, and the terminal device 1400 determines the intensity of the pressure from the change in capacitance. When a touch operation is applied to the display screen 1494, the terminal device 1400 detects the intensity of the touch operation through the pressure sensor 1480A, and may also calculate the touched position from the detection signal of the pressure sensor 1480A. In some embodiments, touch operations applied to the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
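A minimal sketch of this threshold rule might look as follows; the class name, method names, and threshold value are all illustrative assumptions rather than details of this application:

    // Illustrative only: one touch position, two instructions selected by pressure.
    public final class MessageIconPressHandler {
        private static final float FIRST_PRESSURE_THRESHOLD = 0.6f; // assumed value

        public void onIconTouched(float pressure) {
            if (pressure < FIRST_PRESSURE_THRESHOLD) {
                viewShortMessage();   // lighter press: view the short message
            } else {
                createShortMessage(); // firmer press: create a new short message
            }
        }

        private void viewShortMessage()   { /* launch the message viewer */ }
        private void createShortMessage() { /* launch the message composer */ }
    }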
The gyro sensor 1480B may be used to determine the motion attitude of the terminal device 1400. In some embodiments, the angular velocity of terminal device 1400 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 1480B. The gyro sensor 1480B may be used for photographing anti-shake. Illustratively, when the shutter is pressed, the gyro sensor 1480B detects the shake angle of the terminal device 1400, calculates the distance the lens module needs to compensate for according to the shake angle, and lets the lens counteract the shake of the terminal device 1400 through a reverse movement, thereby achieving anti-shake. The gyro sensor 1480B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 1480C is used to measure air pressure. In some embodiments, the terminal device 1400 calculates altitude, aiding positioning and navigation, from the barometric pressure value measured by the barometric pressure sensor 1480C.
The magnetic sensor 1480D includes a Hall sensor. The terminal device 1400 may detect the opening and closing of a flip holster using the magnetic sensor 1480D. In some embodiments, when the terminal device 1400 is a flip phone, the terminal device 1400 may detect the opening and closing of the flip cover according to the magnetic sensor 1480D. Features such as automatic unlocking upon flipping open can then be set according to the detected open or closed state of the holster or the flip cover.
The acceleration sensor 1480E may detect the magnitude of acceleration of the terminal device 1400 in various directions (typically along three axes). The magnitude and direction of gravity can be detected when the terminal device 1400 is stationary. The acceleration sensor 1480E can also be used to recognize the posture of the terminal device, and is applied in landscape/portrait switching, pedometers, and other applications.
The distance sensor 1480F is used to measure distance. The terminal device 1400 may measure distance by infrared or laser. In some embodiments, in a shooting scenario, the terminal device 1400 may use the distance sensor 1480F to measure distance for fast focusing.
The proximity light sensor 1480G may include, for example, a light-emitting diode (LED) and a photodetector such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The terminal device 1400 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from a nearby object. When sufficient reflected light is detected, it can be determined that there is an object near the terminal device 1400; when insufficient reflected light is detected, the terminal device 1400 may determine that there is no object nearby. The terminal device 1400 can use the proximity light sensor 1480G to detect that the user is holding the terminal device 1400 close to the ear, so as to automatically turn off the screen and save power. The proximity light sensor 1480G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 1480L is used to sense ambient light levels. Terminal device 1400 may adaptively adjust the brightness of display 1494 based on the perceived ambient light level. The ambient light sensor 1480L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 1480L may also cooperate with the proximity light sensor 1480G to detect whether the terminal device 1400 is in a pocket to prevent inadvertent contact.
The fingerprint sensor 1480H is used to collect a fingerprint. The terminal device 1400 may use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint photographing, fingerprint call answering, and the like.
The temperature sensor 1480J is used to detect temperature. In some embodiments, the terminal device 1400 implements a temperature processing strategy using the temperature detected by the temperature sensor 1480J. For example, when the temperature reported by the temperature sensor 1480J exceeds a threshold, the terminal device 1400 reduces the performance of a processor located near the temperature sensor 1480J, in order to reduce power consumption and implement thermal protection. In other embodiments, the terminal device 1400 heats the battery 1442 when the temperature is below another threshold, to avoid an abnormal shutdown of the terminal device 1400 caused by low temperature. In still other embodiments, the terminal device 1400 boosts the output voltage of the battery 1442 when the temperature is below a further threshold, likewise to avoid an abnormal shutdown caused by low temperature.
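Such a temperature processing strategy could be sketched as follows; all threshold values and method names here are assumptions for illustration, not device values:

    // Illustrative temperature processing strategy with assumed thresholds.
    public final class ThermalPolicySketch {
        private static final float HOT_LIMIT_C  = 45f;  // throttle above this
        private static final float COLD_HEAT_C  = 0f;   // heat the battery below this
        private static final float COLD_BOOST_C = -10f; // boost battery output below this

        public void onTemperature(float celsius) {
            if (celsius > HOT_LIMIT_C) {
                throttleNearbyProcessor();   // reduce performance for thermal protection
            } else if (celsius < COLD_BOOST_C) {
                boostBatteryOutputVoltage(); // avoid abnormal low-temperature shutdown
            } else if (celsius < COLD_HEAT_C) {
                heatBattery();               // keep the battery warm enough to run
            }
        }

        private void throttleNearbyProcessor()   { /* lower clocks near the sensor */ }
        private void heatBattery()               { /* enable battery heating */ }
        private void boostBatteryOutputVoltage() { /* raise output voltage */ }
    }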
The touch sensor 1480K is also referred to as a "touch device". The touch sensor 1480K may be disposed on the display screen 1494, and the touch sensor 1480K and the display screen 1494 together form what is commonly called a "touch screen". The touch sensor 1480K is used to detect a touch operation applied on or near it, and can pass the detected touch operation to the application processor to determine the type of the touch event. Visual output related to the touch operation may be provided through the display 1494. In other embodiments, the touch sensor 1480K may be disposed on a surface of the terminal device 1400 at a location different from that of the display 1494.
The bone conduction sensor 1480M may acquire a vibration signal. In some embodiments, the bone conduction sensor 1480M may acquire the vibration signal of the bone mass vibrated by the human vocal part. The bone conduction sensor 1480M may also contact the body pulse to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 1480M may also be provided in an earphone to form a bone conduction earphone. The audio module 1470 may parse a voice signal based on the vibration signal, acquired by the bone conduction sensor 1480M, of the bone mass vibrated by the vocal part, so as to implement a voice function. The application processor may parse heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 1480M, so as to implement a heart rate detection function.
The keys 1490 include a power key, a volume key, and the like. The keys 1490 may be mechanical keys or touch keys. The terminal device 1400 may receive a key input and generate a key signal input related to user settings and function control of the terminal device 1400.
The motor 1491 may generate a vibration cue. The motor 1491 can be used for incoming call vibration prompts as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. Touch operations applied to different areas of the display 1494 may also correspond to different vibration feedback effects of the motor 1491. Different application scenarios (such as time reminding, receiving information, alarm clock, and gaming) may also correspond to different vibration feedback effects. The touch vibration feedback effect may further support customization.
The indicator 1492 may be an indicator light, and may be used to indicate a charging status, a change in power, or a message, a missed call, a notification, etc.
The SIM card interface 1495 is used to connect a SIM card. The SIM card can be attached to or detached from the terminal device 1400 by being inserted into or pulled out of the SIM card interface 1495. The terminal device 1400 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 1495 can support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 1495 at the same time, and the types of the cards may be the same or different. The SIM card interface 1495 is also compatible with different types of SIM cards and with external memory cards. The terminal device 1400 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the terminal device 1400 employs an eSIM, namely an embedded SIM card. The eSIM card may be embedded in the terminal device 1400 and cannot be separated from it.
In a specific implementation, the present application further provides a computer storage medium. The computer storage medium may store a program, and when the program runs, the device in which the computer-readable storage medium is located is controlled to perform some or all of the steps in the above embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
In a specific implementation, an embodiment of the present application further provides a computer program product, where the computer program product includes executable instructions, and when the executable instructions are executed on a computer, the computer is caused to perform some or all of the steps in the foregoing method embodiments.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes the association relationship of associated objects and means that three relationships may exist. For example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following" and similar expressions refer to any combination of the listed items, including any combination of single items or plural items. For example, at least one of a, b, and c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where each of a, b, and c may itself be singular or plural.
Those of ordinary skill in the art will appreciate that the various units and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or a combination of the two. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided by the present invention, any function, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only an embodiment of the present invention, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (11)

1. A method for exiting multi-channel video shooting, applied to a terminal device, the method comprising:
receiving a video shooting operation in a multi-channel shooting mode;
shooting a first video image according to the video shooting operation, wherein the first video image comprises N paths of video pictures, and N is more than or equal to 2;
receiving a video picture exit operation, and closing M paths of video pictures in the N paths of video pictures, wherein M is more than or equal to 1 and less than N;
and shooting a second video image, wherein the second video image comprises N-M video pictures.
2. The method according to claim 1, wherein before the receiving a video picture exit operation and closing M of the N video pictures, the method further comprises:
and displaying N exit controls corresponding to the N paths of video pictures, wherein the N exit controls correspond to the N paths of video pictures one by one, and the exit controls are used for controlling the closing of the corresponding video pictures.
3. The method of claim 2, wherein said receiving a video picture exit operation to close M of said N video pictures comprises:
receiving, by M exit controls of the N exit controls, the video picture exit operation, and closing the M paths of video pictures corresponding to the M exit controls.
4. The method of claim 2, wherein displaying N exit controls corresponding to the N video frames comprises:
and receiving a starting exit operation, and displaying N exit controls corresponding to the N paths of video pictures.
5. The method of claim 2, wherein the display positions of the N exit controls match the display positions of the N video frames.
6. The method according to any one of claims 1-5, wherein N-M = 1, and the capturing a second video image comprises:
and adjusting the size and/or the position of the N-M video pictures according to an adjustment strategy to obtain a second video image, wherein the adjustment strategy comprises size information and/or position information of the N-M video pictures.
7. The method of any one of claims 1-5, wherein N-M ≥ 2, and the capturing a second video image comprises:
and rendering and combining the N-M paths of video pictures according to the texture information and the position information of the N-M paths of video pictures to obtain a second video image.
8. The method according to claim 7, wherein before the rendering and combining processing is performed on the N-M paths of video pictures according to the texture information and the position information of the N-M paths of video pictures, the method further comprises:
and adjusting the size and/or the position of at least one video picture in the N-M video pictures according to an adjustment strategy, wherein the adjustment strategy comprises the size information and/or the position information of each video picture in the N-M video pictures.
9. A terminal device comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the terminal device to perform the method of any of claims 1-8.
10. A computer-readable storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer-readable storage medium resides to perform the method of any one of claims 1-8.
11. A computer program product containing executable instructions that, when executed on a computer, cause the computer to perform the method of any one of claims 1-8.
CN202111064643.9A 2021-09-09 2021-09-09 Method, device and storage medium for exiting multi-channel video shooting Active CN113923351B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111064643.9A CN113923351B (en) 2021-09-09 2021-09-09 Method, device and storage medium for exiting multi-channel video shooting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111064643.9A CN113923351B (en) 2021-09-09 2021-09-09 Method, device and storage medium for exiting multi-channel video shooting

Publications (2)

Publication Number Publication Date
CN113923351A true CN113923351A (en) 2022-01-11
CN113923351B CN113923351B (en) 2022-09-27

Family

ID=79234570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111064643.9A Active CN113923351B (en) 2021-09-09 2021-09-09 Method, device and storage medium for exiting multi-channel video shooting

Country Status (1)

Country Link
CN (1) CN113923351B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200374386A1 (en) * 2017-11-23 2020-11-26 Huawei Technologies Co., Ltd. Photographing Method and Terminal
CN108419016A (en) * 2018-04-17 2018-08-17 北京小米移动软件有限公司 Image pickup method, device and terminal
CN111356000A (en) * 2018-08-17 2020-06-30 北京达佳互联信息技术有限公司 Video synthesis method, device, equipment and storage medium
CN109729294A (en) * 2019-01-15 2019-05-07 深圳市云歌人工智能技术有限公司 Video image acquisition methods, device, equipment and storage medium
CN110072070A (en) * 2019-03-18 2019-07-30 华为技术有限公司 A kind of multichannel kinescope method and equipment
CN113365012A (en) * 2020-03-06 2021-09-07 华为技术有限公司 Audio processing method and device
CN112788427A (en) * 2021-01-07 2021-05-11 北京电子科技职业学院 Device and method for playing video small window

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117201955A (en) * 2022-05-30 2023-12-08 荣耀终端有限公司 Video shooting method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113923351B (en) 2022-09-27

Similar Documents

Publication Publication Date Title
CN110072070B (en) Multi-channel video recording method, equipment and medium
CN113422903B (en) Shooting mode switching method, equipment and storage medium
CN113473005B (en) Shooting transfer live-action insertion method, equipment and storage medium
CN113475057B (en) Video frame rate control method and related device
WO2022262313A1 (en) Picture-in-picture-based image processing method, device, storage medium, and program product
CN113810601B (en) Terminal image processing method and device and terminal equipment
CN113797530B (en) Image prediction method, electronic device and storage medium
CN113596321B (en) Method, device and storage medium for generating transition dynamic effect
CN114489533A (en) Screen projection method and device, electronic equipment and computer readable storage medium
CN113542613A (en) Device and method for photographing
CN114339429A (en) Audio and video playing control method, electronic equipment and storage medium
CN114257920B (en) Audio playing method and system and electronic equipment
CN114500901A (en) Double-scene video recording method and device and electronic equipment
CN113852755A (en) Photographing method, photographing apparatus, computer-readable storage medium, and program product
CN113518189B (en) Shooting method, shooting system, electronic equipment and storage medium
CN113923351B (en) Method, device and storage medium for exiting multi-channel video shooting
CN113596320B (en) Video shooting variable speed recording method, device and storage medium
CN113542574A (en) Shooting preview method under zooming, terminal, storage medium and electronic equipment
CN113965693B (en) Video shooting method, device and storage medium
CN115412678A (en) Exposure processing method and device and electronic equipment
CN114257737A (en) Camera shooting mode switching method and related equipment
CN115393676A (en) Gesture control optimization method and device, terminal and storage medium
CN112422814A (en) Shooting method and electronic equipment
CN113810595B (en) Encoding method, apparatus and storage medium for video shooting
CN114745508B (en) Shooting method, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant