CN115484394B - Method for guiding use of an air gesture, and electronic device - Google Patents

Method for guiding use of an air gesture, and electronic device

Info

Publication number
CN115484394B
CN115484394B (application CN202111679527.8A)
Authority
CN
China
Prior art keywords
gesture
electronic device
camera
user
popup window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111679527.8A
Other languages
Chinese (zh)
Other versions
CN115484394A (en)
Inventor
黄雨菲
易婕
牛思月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Publication of CN115484394A publication Critical patent/CN115484394A/en
Application granted granted Critical
Publication of CN115484394B publication Critical patent/CN115484394B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application provides a method for guiding the use of an air gesture, and an electronic device, and relates to the field of terminal technologies. With the method, an identifier of the air gesture can be displayed when it is recognized that a user needs to use the air gesture, so that the user is reminded in time to use it, improving the user's efficiency during shooting. The method includes: the electronic device displays a shooting preview interface; if the time difference between a first time and the current time is greater than a first preset duration, then in response to detecting that the electronic device is recording a video and is connected to a selfie stick, the electronic device displays a first pop-up window on the shooting preview interface, where the first pop-up window includes a gesture identifier used to remind the user to switch the shooting mode with an air gesture, and the first time is the time at which the electronic device last switched the shooting mode in response to the first camera detecting an air gesture.

Description

Method for guiding use of an air gesture, and electronic device
This application claims priority to Chinese Patent Application No. 202110676709.3, filed with the China National Intellectual Property Administration in 2021 and entitled "A video creation method for users based on story-line mode, and electronic device", and to Chinese Patent Application No. 202111436311.9, filed with the China National Intellectual Property Administration in November 2021 and entitled "A method for guiding use of an air gesture, and electronic device", both of which are incorporated herein by reference in their entireties.
Technical Field
This application relates to the field of terminal technologies, and in particular, to a method for guiding the use of an air gesture, and an electronic device.
Background
With the development of Internet technologies, applications offer more and more functions. When a new function appears in an application, a novice guide can pop up on the application's interface to show the user how to use the new function, helping the user understand and master it more quickly.
However, even after viewing the novice guide, users often forget to use the new function.
Disclosure of Invention
Embodiments of this application provide a method for guiding the use of an air gesture, and an electronic device, which can display an identifier of the air gesture when it is recognized that a user needs to use the air gesture, so as to remind the user to use it.
To achieve the foregoing objective, the embodiments of this application adopt the following technical solutions:
In a first aspect, an embodiment of this application provides a method for guiding the use of an air gesture, applied to an electronic device including a display screen, a first camera and a second camera, where the first camera and the second camera are located on different sides of the display screen, and the first camera is located on the same side of the electronic device as the display screen. The method includes: the electronic device displays a shooting preview interface, where the shooting preview interface includes at least one of a first image and a second image, the first image being an image captured by the first camera in real time and the second image being an image captured by the second camera in real time. If the time difference between a first time and the current time is greater than a first preset duration, then in response to detecting that the electronic device is recording a video and is connected to a selfie stick, the electronic device displays a first pop-up window on the shooting preview interface, where the first pop-up window includes a gesture identifier used to remind the user to switch the shooting mode with an air gesture, and the first time is the time at which the electronic device last switched the shooting mode in response to the first camera detecting an air gesture.
With the method for guiding the use of an air gesture provided in this embodiment, the electronic device can check, in the front shooting mode, the rear shooting mode, the front-rear shooting mode or the picture-in-picture shooting mode, whether the time difference between the time the user last used an air gesture (that is, the first time) and the current time is greater than the first preset duration. When that difference exceeds the first preset duration and the electronic device detects that it is recording a video and is connected to a selfie stick, the user is considered to need an air gesture; the first pop-up window is therefore displayed, and the gesture identifier on this guide pop-up reminds the user to switch the shooting mode with an air gesture. The method effectively reminds the user to use a new function (such as an air gesture) and improves the user's efficiency during shooting.
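The trigger condition described above can be summarized in a few lines of code. The following is an illustrative Python sketch only, not part of the claimed embodiments; the function name and the concrete value of `FIRST_PRESET_DURATION` are hypothetical choices for illustration:

```python
# Hypothetical value of the "first preset duration" (seconds); the patent
# does not specify a number, so 3 days is an illustrative assumption.
FIRST_PRESET_DURATION = 3 * 24 * 3600


def should_show_first_popup(last_air_gesture_time: float,
                            now: float,
                            is_recording: bool,
                            selfie_stick_connected: bool) -> bool:
    """Return True when the first pop-up window (with the gesture
    identifiers) should be shown, per the first-aspect conditions."""
    # Condition 1: the user has not switched modes with an air gesture
    # for longer than the first preset duration.
    stale = (now - last_air_gesture_time) > FIRST_PRESET_DURATION
    # Condition 2: the device is recording video and a selfie stick is
    # connected, so tapping the screen is inconvenient.
    return stale and is_recording and selfie_stick_connected
```

A real implementation would read the recording state and the selfie-stick connection from the camera service and accessory framework; the sketch only captures the decision logic.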
In a possible implementation, in response to detecting that the electronic device is recording a video and is connected to the selfie stick, displaying the first pop-up window on the shooting preview interface includes: in response to receiving a first operation of the user, the electronic device starts recording the video; and in response to detecting the connection of the selfie stick, the electronic device displays the first pop-up window on the shooting preview interface.
That is, when the time difference between the first time and the current time is greater than the first preset duration, the electronic device may first start recording the video and then display the first pop-up window once the selfie stick is connected.
In a possible implementation, in response to detecting that the electronic device is recording a video and is connected to the selfie stick, displaying the first pop-up window on the shooting preview interface includes: the electronic device connects to the selfie stick; and in response to receiving the first operation of the user, the electronic device starts recording the video and displays the first pop-up window on the shooting preview interface.
That is, when the time difference between the first time and the current time is greater than the first preset duration, the electronic device may first connect to the selfie stick; the user may then control the electronic device to start recording through the selfie stick, an air gesture, or a control on the shooting preview interface, at which point the electronic device may also display the first pop-up window on the shooting preview interface.
In a possible implementation, the shooting preview interface includes a first control, and the method further includes: in response to the user's operation on the first control, the electronic device displays a second pop-up window on the shooting preview interface; and the electronic device cyclically plays a plurality of guide videos in the second pop-up window in a preset order, each of the guide videos demonstrating how to use an air gesture.
In this way, the electronic device can not only guide the user to use air gestures during video recording, but also demonstrate their usage through the second pop-up window, guiding the user from multiple angles.
In a possible implementation, the method further includes: in response to the user sliding left on the second pop-up window, the electronic device displays, in the second pop-up window, the previous guide video adjacent to the first video being played, and stops the cyclic playing; or, in response to the user sliding right on the second pop-up window, the electronic device displays the next guide video adjacent to the first video and stops the cyclic playing.
That is, the electronic device may automatically cycle through the guide videos in the second pop-up window, and the user may manually select the guide video to watch. After the user manually selects a guide video (that is, slides left or right on the second pop-up window), the electronic device no longer cycles through the videos but displays the video the user selected.
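The carousel behavior of the second pop-up window — automatic cycling until a manual swipe, after which the user's selection is shown — can be sketched as follows. This is an illustrative Python sketch; the class and method names are hypothetical:

```python
class GuideVideoCarousel:
    """Cycles through guide videos automatically; a manual swipe selects
    an adjacent video and permanently stops the automatic cycling."""

    def __init__(self, videos):
        self.videos = list(videos)
        self.index = 0
        self.auto_cycling = True  # cyclic playing active by default

    def tick(self):
        # Called by a timer: advance only while cyclic playing is active.
        if self.auto_cycling:
            self.index = (self.index + 1) % len(self.videos)

    def swipe_left(self):
        # Show the previous adjacent guide video and stop cycling.
        self.auto_cycling = False
        self.index = (self.index - 1) % len(self.videos)

    def swipe_right(self):
        # Show the next adjacent guide video and stop cycling.
        self.auto_cycling = False
        self.index = (self.index + 1) % len(self.videos)

    @property
    def current(self):
        return self.videos[self.index]
```

In a full implementation, each entry would also carry the corresponding prompt message so that the two display areas of the pop-up stay in sync.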
In a possible implementation, the second pop-up window includes a first display area and a second display area. The first display area cyclically plays the plurality of guide videos, and the second display area cyclically displays a plurality of prompt messages corresponding one-to-one to the guide videos, each prompt message explaining the function and usage of the air gesture shown by its corresponding guide video; the guide video being played in the first display area corresponds to the prompt message being displayed in the second display area.
That is, the second pop-up window can display not only the guide videos but also the corresponding prompt messages (including text and icons), which further helps the user understand how to use air gestures.
In a possible implementation, the second pop-up window includes a confirmation option, and the method further includes: in response to the user's operation on the confirmation option, the electronic device closes the second pop-up window. That is, the second pop-up window can be closed manually by the user; if the user does not tap the confirmation option, the electronic device may keep displaying it.
In a possible implementation, the method further includes: if the electronic device detects that the user has neither tapped the first control nor switched the shooting mode in response to the first camera detecting an air gesture, the electronic device displays the second pop-up window on the shooting preview interface.
In other words, the electronic device can confirm whether the user has learned about air gestures by detecting whether the user tapped the first control, and confirm whether the user has used an air gesture by detecting whether the shooting mode was switched in response to the first camera detecting one. When neither has happened, the user is considered to have neither learned nor used air gestures, and in this case the electronic device can automatically pop up the second pop-up window to guide the user. Further, the electronic device may perform this detection when entering the multi-lens video recording mode for the second time, and automatically pop up the second pop-up window when the above condition is met.
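The auto-display condition for the second pop-up window can be sketched as a single predicate. An illustrative Python sketch, with hypothetical names; per the text above, the check is assumed to run on the second entry into the multi-lens video recording mode:

```python
def should_auto_show_second_popup(multi_lens_entry_count: int,
                                  first_control_tapped: bool,
                                  mode_switched_by_air_gesture: bool) -> bool:
    """True when the second pop-up window should appear automatically:
    the user is entering multi-lens video recording for the second time
    and has neither tapped the first control (to view the guide videos)
    nor ever switched the shooting mode with an air gesture."""
    return (multi_lens_entry_count == 2
            and not first_control_tapped
            and not mode_switched_by_air_gesture)
```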
In a possible implementation, the shooting preview interface further includes a guide prompt displayed in a preset area near the first control, instructing the user to tap the first control to view the guide videos. It can be understood that the electronic device may remind the user to learn air gestures by displaying this guide prompt. Further, the electronic device may display the guide prompt when entering the multi-lens video recording mode for the first time.
In a possible implementation, the shooting preview interface further includes a second control, and the method further includes: in response to detecting the user's operation on the second control, the electronic device displays a third pop-up window on the shooting preview interface, the third pop-up window including preview interfaces of multiple shooting modes. If the electronic device is recording a video and the time difference between the first time and the current time is greater than the first preset duration, the electronic device displays the first pop-up window on the shooting preview interface, and the first pop-up window does not overlap the third pop-up window.
In a possible implementation, the gesture identifier includes a first gesture identifier, a second gesture identifier and a third gesture identifier, where the first gesture identifier indicates a gesture of moving the hand to either side, the second gesture identifier indicates a palm-flipping gesture, and the third gesture identifier indicates a fist-making gesture.
In a second aspect, an embodiment of this application provides a method for guiding the use of an air gesture, applied to an electronic device including a display screen, a first camera and a second camera, where the first camera and the second camera are located on different sides of the display screen, and the first camera is located on the same side of the electronic device as the display screen. The method includes: the electronic device displays a shooting preview interface, where the shooting preview interface includes at least one of a first image and a second image, the first image being an image captured by the first camera in real time and the second image being an image captured by the second camera in real time; in response to receiving a first operation of a user, the electronic device starts recording a video; and if the time difference between a first time and the current time is greater than a first preset duration, then in response to detecting, within a second preset duration after recording starts, that the distance between the user captured by the first camera and the display screen is greater than or equal to a preset distance, the electronic device displays a first pop-up window on the shooting preview interface, where the first pop-up window includes a gesture identifier, and the first time is the time at which the electronic device last switched the shooting mode in response to the first camera detecting an air gesture.
That is, when the time difference between the first time and the current time is greater than the first preset duration, if the electronic device detects within the second preset duration after recording starts that the user captured by the first camera is at or beyond the preset distance from the display screen, the user is considered likely to be holding the electronic device at a distance while shooting; the first pop-up window is therefore displayed, and the gesture identifier on the guide pop-up reminds the user to switch the shooting mode with an air gesture. The method effectively reminds the user to use a new function (such as an air gesture) and improves the user's efficiency during shooting.
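The second-aspect trigger differs from the first only in its second condition: instead of a selfie-stick connection, it uses the user's distance from the screen sampled during the second preset window. An illustrative Python sketch with hypothetical names; real distance samples would come from a depth or face-detection pipeline:

```python
def should_show_popup_by_distance(time_since_last_gesture: float,
                                  first_preset_duration: float,
                                  distance_samples: list,
                                  preset_distance: float) -> bool:
    """Second-aspect trigger: the user has not used an air gesture for
    longer than the first preset duration, and within the second preset
    window after recording starts the front camera observes the user at
    or beyond the preset distance from the display screen."""
    if time_since_last_gesture <= first_preset_duration:
        return False
    # `distance_samples` holds per-frame user-to-screen distances (metres)
    # measured during the second preset window after recording starts.
    return any(d >= preset_distance for d in distance_samples)
```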
In a possible implementation, the shooting preview interface includes a first control, and the method further includes: in response to the user's operation on the first control, the electronic device displays a second pop-up window on the shooting preview interface; and the electronic device cyclically plays a plurality of guide videos in the second pop-up window in a preset order, each of the guide videos demonstrating how to use an air gesture.
In this way, the electronic device can not only guide the user to use air gestures during video recording, but also demonstrate their usage through the second pop-up window, guiding the user from multiple angles.
In a possible implementation, the method further includes: in response to the user sliding left on the second pop-up window, the electronic device displays, in the second pop-up window, the previous guide video adjacent to the first video being played, and stops the cyclic playing; or, in response to the user sliding right on the second pop-up window, the electronic device displays the next guide video adjacent to the first video and stops the cyclic playing.
That is, the electronic device may automatically cycle through the guide videos in the second pop-up window, and the user may manually select the guide video to watch. After the user manually selects a guide video (that is, slides left or right on the second pop-up window), the electronic device no longer cycles through the videos but displays the video the user selected.
In a possible implementation, the second pop-up window includes a first display area and a second display area. The first display area cyclically plays the plurality of guide videos, and the second display area cyclically displays a plurality of prompt messages corresponding one-to-one to the guide videos, each prompt message explaining the function and usage of the air gesture shown by its corresponding guide video; the guide video being played in the first display area corresponds to the prompt message being displayed in the second display area.
That is, the second pop-up window can display not only the guide videos but also the corresponding prompt messages (including text and icons), which further helps the user understand how to use air gestures.
In a possible implementation, the second pop-up window includes a confirmation option, and the method further includes: in response to the user's operation on the confirmation option, the electronic device closes the second pop-up window. That is, the second pop-up window can be closed manually by the user; if the user does not tap the confirmation option, the electronic device may keep displaying it.
In a possible implementation, the method further includes: if the electronic device detects that the user has neither tapped the first control nor switched the shooting mode in response to the first camera detecting an air gesture, the electronic device displays the second pop-up window on the shooting preview interface.
In other words, the electronic device can confirm whether the user has learned about air gestures by detecting whether the user tapped the first control, and confirm whether the user has used an air gesture by detecting whether the shooting mode was switched in response to the first camera detecting one. When neither has happened, the user is considered to have neither learned nor used air gestures, and in this case the electronic device can automatically pop up the second pop-up window to guide the user. Further, the electronic device may perform this detection when entering the multi-lens video recording mode for the second time, and automatically pop up the second pop-up window when the above condition is met.
In a possible implementation, the shooting preview interface further includes a guide prompt displayed in a preset area near the first control, instructing the user to tap the first control to view the guide videos. It can be understood that the electronic device may remind the user to learn air gestures by displaying this guide prompt. Further, the electronic device may display the guide prompt when entering the multi-lens video recording mode for the first time.
In a possible implementation, the shooting preview interface further includes a second control, and the method further includes: in response to detecting the user's operation on the second control, the electronic device displays a third pop-up window on the shooting preview interface, the third pop-up window including preview interfaces of multiple shooting modes. If the electronic device is recording a video and the time difference between the first time and the current time is greater than the first preset duration, the electronic device displays the first pop-up window on the shooting preview interface, and the first pop-up window does not overlap the third pop-up window.
In a possible implementation, the gesture identifier includes a first gesture identifier, a second gesture identifier and a third gesture identifier, where the first gesture identifier indicates a gesture of moving the hand to either side, the second gesture identifier indicates a palm-flipping gesture, and the third gesture identifier indicates a fist-making gesture.
In a third aspect, an embodiment of this application provides a method for guiding the use of an air gesture, applied to an electronic device including a display screen, a first camera and a second camera, where the first camera and the second camera are located on different sides of the display screen, and the first camera is located on the same side of the electronic device as the display screen. The method includes: the electronic device displays a shooting preview interface, where the shooting preview interface includes at least one of a first image and a second image, the first image being an image captured by the first camera in real time and the second image being an image captured by the second camera in real time; in response to receiving a first operation of a user, the electronic device starts recording a video; and if the time difference between a first time and the current time is greater than a first preset duration, then in response to detecting that a first image proportion decreases within a second preset duration after recording starts, the electronic device displays a first pop-up window on the shooting preview interface, where the first pop-up window includes a gesture identifier, the first time is the time at which the electronic device last switched the shooting mode in response to the first camera detecting an air gesture, and the first image proportion is the area proportion of the face region within the first image.
That is, when the time difference between the first time and the current time is greater than the first preset duration, if the electronic device detects that the area proportion of the face region in the first image decreases within the second preset duration after recording starts, the user is considered likely to be holding the electronic device farther away while shooting; the first pop-up window is therefore displayed, and the gesture identifier on the guide pop-up reminds the user to switch the shooting mode with an air gesture. The method effectively reminds the user to use a new function (such as an air gesture) and improves the user's efficiency during shooting.
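The third-aspect trigger replaces the distance check with the face-area proportion of the first image. An illustrative Python sketch with hypothetical names; in practice the face box would come from a face-detection model, and a noise margin would likely be tuned empirically:

```python
def face_area_proportion(face_box, image_size):
    """Area proportion of the detected face region within the first image.
    `face_box` is (width, height) of the face bounding box in pixels;
    `image_size` is (width, height) of the full first image."""
    (img_w, img_h) = image_size
    (face_w, face_h) = face_box
    return (face_w * face_h) / (img_w * img_h)


def proportion_decreased(samples, margin=0.0):
    """Third-aspect trigger: True when the face-area proportion sampled
    during the second preset window after recording starts has shrunk,
    suggesting the user moved the device away from the face. `margin`
    is a hypothetical noise tolerance."""
    return len(samples) >= 2 and samples[-1] < samples[0] - margin
```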
In a fourth aspect, an embodiment of this application provides an electronic device including a display screen, a first camera, a second camera and a processor, where the processor is coupled to a memory storing program instructions. When the program instructions are executed by the processor, the electronic device implements the method according to the first, second or third aspect and any one of their possible designs.
In a fifth aspect, a computer-readable storage medium includes computer instructions; when the computer instructions run on an electronic device, the electronic device is caused to perform the method according to the first, second or third aspect and any one of their possible designs.
In a sixth aspect, this application provides a chip system including one or more interface circuits and one or more processors, interconnected through lines.
The chip system may be applied to an electronic device including a communication module and a memory. The interface circuit is configured to receive signals from the memory of the electronic device and send the received signals to the processor, the signals including computer instructions stored in the memory. When the processor executes the computer instructions, the electronic device performs the method according to the first, second or third aspect and any one of their possible designs.
In a seventh aspect, the present application provides a computer program product which, when run on a computer, causes the computer to carry out the method according to the first, second or third aspect and any one of its possible designs.
Drawings
Fig. 1A is a schematic hardware structure of an electronic device according to an embodiment of the present application;
fig. 1B is a software architecture diagram of an electronic device according to an embodiment of the present application;
FIGS. 2A-2B are a set of interface diagrams according to an embodiment of the present application;
FIGS. 3A-3B are a set of interface diagrams according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a set of interfaces provided in an embodiment of the present application;
FIGS. 5A-5C are a set of interface diagrams according to an embodiment of the present application;
FIGS. 6A-6F are a set of interface diagrams according to an embodiment of the present application;
FIGS. 7A-7D are a set of interface diagrams according to an embodiment of the present application;
FIGS. 8A-8C are a set of interface diagrams according to an embodiment of the present application;
FIGS. 9A-9C are a set of interface diagrams according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a set of interfaces provided in an embodiment of the present application;
FIGS. 11A-11B are a set of interface diagrams according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a set of interfaces provided in an embodiment of the present application;
FIGS. 13A-13B are a set of interface diagrams according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
The terms "first" and "second" below are used merely for description and shall not be understood as indicating or implying relative importance or implicitly indicating the number of technical features. Therefore, a feature limited by "first" or "second" may explicitly or implicitly include one or more such features. In the descriptions of the embodiments, unless otherwise specified, "a plurality of" means two or more.
For clear and concise description of the following embodiments, and for ease of understanding by those skilled in the art, related concepts and technologies are briefly introduced first.
A shooting preview interface is an interface displayed by the electronic device before or during shooting, used to display images captured by a camera in real time. In addition, the shooting preview interface may display a plurality of controls, which may include a flash control for turning the flash on or off, a beauty control for turning the beautification function on or off, a shutter control for taking a shot, and the like.
Single-lens shooting is a mode in which the electronic device shoots with only one camera; in this mode, the shooting preview interface shows only the image captured by that camera. Single-lens shooting may include a front shooting mode, a rear shooting mode, and the like.
Specifically, the front shooting mode is a mode in which the electronic device shoots with the front camera; in this mode, the image captured by the front camera in real time is displayed on the shooting preview interface.
The rear shooting mode is a mode in which the electronic device shoots with a rear camera; in this mode, the image captured by the rear camera in real time is displayed on the shooting preview interface.
Multi-lens shooting is a mode in which the electronic device shoots with multiple cameras at the same time. In the multi-lens shooting mode, the display screen simultaneously shows, in the shooting preview interface, the images captured by those cameras; images from different cameras may be spliced together or displayed picture-in-picture. Depending on which cameras the electronic device uses and how the images are displayed, multi-lens shooting may include a front-rear shooting mode, a rear-rear shooting mode, a picture-in-picture shooting mode, and the like. In the embodiments of this application, multi-lens shooting may also be called multi-lens video recording.
The front-rear shooting mode refers to a mode in which the electronic device can shoot through the front camera and the rear camera at the same time. When the electronic device is in the front-rear shooting mode, the images shot by the front camera and the rear camera (for example, a first image and a second image) can be displayed in the shooting preview interface at the same time, with the first image and the second image displayed spliced together. When the electronic device is placed vertically, the first image and the second image can be spliced up and down; when the electronic device is placed horizontally, the first image and the second image can be spliced left and right. By default, the display area of the first image is equal in size to the display area of the second image.
The rear-rear shooting mode refers to a mode in which the electronic device can shoot through two rear cameras (if a plurality of rear cameras exist) at the same time. When the electronic device is in the rear-rear shooting mode, the electronic device can simultaneously display the images shot by the two rear cameras (for example, a first image and a second image) in the shooting preview interface, with the first image and the second image displayed spliced together. When the electronic device is placed vertically, the first image and the second image can be spliced up and down; when the electronic device is placed horizontally, the first image and the second image can be spliced left and right.
The picture-in-picture shooting mode refers to a mode in which the electronic device can shoot through two cameras at the same time. When the electronic device is in the picture-in-picture shooting mode, the images shot by the two cameras (e.g., a first image and a second image) can be displayed simultaneously in the shooting preview interface. The second image is displayed in the whole area of the shooting preview interface, the first image is superimposed on the second image, and the display area of the first image is smaller than that of the second image. By default, the first image may be located at the lower left of the second image. The two cameras can be freely combined, for example, two front cameras, two rear cameras, or one front camera and one rear camera.
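The splice and picture-in-picture layouts described above can be sketched as a small layout routine. This is purely illustrative: the function name, the equal-split rule for splicing, and the assumed 1/3-scale lower-left overlay are assumptions for illustration, not details specified by the patent beyond the text above.

```python
def layout_regions(mode, screen_w, screen_h, portrait=True):
    """Return (region_first, region_second) as (x, y, w, h) tuples
    for the first and second images in the shooting preview interface."""
    if mode == "splice":
        if portrait:   # device placed vertically: splice up and down, equal areas
            half = screen_h // 2
            return (0, 0, screen_w, half), (0, half, screen_w, screen_h - half)
        else:          # device placed horizontally: splice left and right
            half = screen_w // 2
            return (0, 0, half, screen_h), (half, 0, screen_w - half, screen_h)
    elif mode == "pip":
        # second image fills the whole interface; first image is a smaller
        # overlay at the lower left (1/3 scale is an assumed default)
        w, h = screen_w // 3, screen_h // 3
        return (0, screen_h - h, w, h), (0, 0, screen_w, screen_h)
    raise ValueError("unknown mode: " + mode)
```

The key invariant from the text is visible here: in splice mode the two display areas are equal by default, while in picture-in-picture mode the first image's display area is strictly smaller than the second's.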
It should be noted that the above-mentioned "single-lens shooting", "multi-lens shooting", "front shooting mode", "rear shooting mode", "front-rear shooting mode", "picture-in-picture shooting mode", and "rear-rear shooting mode" are merely names used in the embodiments of the present application; the meanings they represent have already been described in the embodiments of the present application, and the names themselves do not limit the embodiments.
The application provides a method for guiding the use of a space gesture, applied to an electronic device comprising a display screen, a first camera, and a second camera, wherein the first camera and the second camera are positioned on different sides of the display screen, and the first camera and the display screen are positioned on the same side of the electronic device. The electronic device can display a shooting preview interface that includes at least one of a first image and a second image, wherein the first image is an image acquired by the first camera in real time, and the second image is an image acquired by the second camera in real time. If the time difference between a first moment and the current moment is greater than a first preset duration and the electronic device detects that the user needs to use a space gesture, the electronic device pops up a guide popup window for the space gesture (which may be called a first popup window); the guide popup window includes a gesture identifier and is used for showing how the space gesture is used. The first moment is the time at which the electronic device last switched the shooting mode in response to the first camera detecting a space gesture (it can also be understood as the time at which the user last used a space gesture). With the method provided by the application, the user can be reminded in a timely manner to use a space gesture to switch the shooting mode of the electronic device, which can effectively improve human-computer interaction efficiency and user experience.
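The trigger condition above reduces to a simple time comparison. The sketch below is an assumption about how such a check could look; the names `first_moment` and `first_preset_duration` mirror the terms in the text, and nothing here is the patent's actual implementation.

```python
def should_show_guide_popup(current_moment, first_moment,
                            first_preset_duration, gesture_need_detected):
    """Decide whether to pop up the space-gesture guide popup.

    first_moment: time the user last used a space gesture to switch modes.
    gesture_need_detected: whether the device detects the user needs the gesture.
    """
    # condition from the text: time difference exceeds the first preset duration
    stale = (current_moment - first_moment) > first_preset_duration
    return gesture_need_detected and stale
```

After the popup is shown and the user successfully uses a space gesture again, `first_moment` would be updated to that time, so the popup is not shown repeatedly to a user who already remembers the gesture.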
The electronic device may be an electronic device comprising a single display screen, or a folding-screen device. When the electronic device is a single-screen mobile phone, the first camera and the second camera can be the front camera and the rear camera of the electronic device, respectively. When the electronic device is a folding-screen mobile phone, the display screen can comprise a first screen and a second screen, the first screen being rotatably connected with the second screen; the first camera and the second camera can be respectively positioned on the two sides of the first screen, with the first camera and the first screen positioned on the same side of the electronic device. Alternatively, the first camera is located on the first screen, the second camera is located on the second screen, and the first camera and the second camera are located on the same side of the electronic device. In the embodiments of the present application, an electronic device including a display screen is taken as an example.
There are various scenarios in which a user has a need to use a space gesture, and these will be described in detail below in conjunction with the accompanying drawings. In addition, "space gesture" is merely a name used in the embodiments of the present application; it may also be referred to as a hover gesture or the like, and specifically refers to a gesture input without touching the electronic device. The meaning it represents has already been described in the embodiments of the present application, and the name itself does not limit the embodiments.
In order to more clearly and specifically describe the photographing method provided by the embodiment of the present application, the following first describes an electronic device related to implementing the method.
The electronic device may be a cell phone, tablet, desktop, laptop, handheld, notebook, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook, as well as a cellular telephone, personal digital assistant (personal digital assistant, PDA), augmented reality (augmented reality, AR) device, virtual Reality (VR) device, artificial intelligence (artificial intelligence, AI) device, wearable device, vehicle-mounted device, smart home device, and/or smart city device, with embodiments of the application not being particularly limited as to the particular type of electronic device.
Referring to fig. 1A, fig. 1A shows a schematic hardware structure of an electronic device according to an embodiment of the present application.
As shown in fig. 1A, the electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, a universal serial bus (universal serial bus, USB) interface 230, a charge management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, keys 290, a motor 291, an indicator 292, a plurality of cameras 293, a display 294, and a subscriber identity module (subscriber identification module, SIM) card interface 295.
The sensor module 280 may include pressure sensors, gyroscope sensors, barometric pressure sensors, magnetic sensors, acceleration sensors, distance sensors, proximity sensors, fingerprint sensors, temperature sensors, touch sensors, ambient light sensors, bone conduction sensors, and the like.
It is to be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic apparatus 200. In other embodiments, the electronic device 200 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units such as, for example: the processor 210 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and command center of the electronic device 200. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache. The memory may hold instructions or data that the processor 210 has just used or uses cyclically. If the processor 210 needs to use the instructions or data again, it can call them directly from this memory. This avoids repeated accesses and reduces the waiting time of the processor 210, thereby improving system efficiency.
In some embodiments, processor 210 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others. It should be understood that the connection relationship between the modules illustrated in this embodiment is only illustrative, and does not limit the structure of the electronic device 200. In other embodiments, the electronic device 200 may also employ different interfaces in the above embodiments, or a combination of interfaces.
In the embodiment of the present application, the processor 210 may receive a plurality of continuous images, captured by the camera 293, corresponding to a space gesture input by the user, such as a "palm" gesture. The processor 210 may then comparatively analyze the plurality of continuous images, determine that the space gesture corresponding to the plurality of continuous images is a "palm", determine the operation corresponding to that space gesture, such as starting recording, and then control the camera application to perform the corresponding operation. The corresponding operation may include, for example: invoking a plurality of cameras to shoot images simultaneously, synthesizing the images shot by the plurality of cameras through the GPU by splicing, picture-in-picture (partial superposition), or the like, and calling the display screen 294 to display the synthesized image in the shooting preview interface of the electronic device.
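The comparative analysis over consecutive frames can be sketched as a majority vote over per-frame gesture labels. The per-frame classifier, the gesture-to-operation table, and all names below are illustrative assumptions, not the patent's implementation.

```python
from collections import Counter

# Assumed mapping from a recognized space gesture to a camera operation
GESTURE_TO_OPERATION = {"palm": "start_recording"}

def recognize_gesture(frames, classify_frame):
    """Label each frame, then act only if one gesture dominates the sequence.

    frames: a plurality of continuous images from the camera.
    classify_frame: per-frame gesture classifier (stubbed out here).
    """
    labels = [classify_frame(f) for f in frames]
    gesture, count = Counter(labels).most_common(1)[0]
    # require the gesture to appear in a majority of frames before acting,
    # so a single misclassified frame does not trigger an operation
    if count > len(frames) // 2:
        return GESTURE_TO_OPERATION.get(gesture)
    return None
```

Requiring agreement across several frames is what makes analyzing a *plurality* of continuous images more robust than acting on any single frame.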
The external memory interface 220 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 200. The external memory card communicates with the processor 210 through an external memory interface 220 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
Internal memory 221 may be used to store computer-executable program code, which includes instructions. The processor 210 implements the various functional applications and data processing of the electronic device 200 by executing the instructions stored in the internal memory 221. For example, in an embodiment of the present application, the internal memory 221 may include a program storage area and a data storage area. The program storage area may store the operating system, application programs required for at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the electronic device 200 (e.g., audio data, a phonebook, etc.), and so on. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (universal flash storage, UFS), and the like.
In an embodiment of the present application, the internal memory 221 may store a picture file or a recorded video file or the like that is photographed by the electronic device in different photographing modes.
The charge management module 240 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. The charging management module 240 may also supply power to the terminal device through the power management module 241 while charging the battery 242.
The power management module 241 is used for connecting the battery 242, and the charge management module 240 and the processor 210. The power management module 241 receives input from the battery 242 and/or the charge management module 240 and provides power to the processor 210, the internal memory 221, the external memory, the display 294, the camera 293, the wireless communication module 260, and the like. In some embodiments, the power management module 241 and the charge management module 240 may also be provided in the same device.
The wireless communication function of the electronic device 200 can be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like. In some embodiments, antenna 1 and mobile communication module 250 of electronic device 200 are coupled, and antenna 2 and wireless communication module 260 are coupled, such that electronic device 200 may communicate with a network and other devices via wireless communication techniques.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 200 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 250 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied on the electronic device 200. The mobile communication module 250 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 250 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation.
The mobile communication module 250 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be provided in the same device as at least some of the modules of the processor 210.
The wireless communication module 260 may provide solutions for wireless communication applied on the electronic device 200, including wireless local area network (WLAN) (e.g., a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (infrared, IR) technology, and the like.
The wireless communication module 260 may be one or more devices that integrate at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
The electronic device 200 implements display functions through a GPU, a display screen 294, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 294 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The display 294 is used to display images, videos, and the like. The display 294 includes a display panel.
The electronic device 200 may implement a photographing function through an ISP, a camera 293, a video codec, a GPU, a display 294, an application processor, and the like. The ISP is used to process the data fed back by the camera 293. The camera 293 is used to capture still images or video. In some embodiments, the electronic device 200 may include N cameras 293, N being a positive integer greater than 2.
The electronic device 200 may implement audio functions, such as music playing and recording, through the audio module 270, the speaker 270A, the receiver 270B, the microphone 270C, the earphone interface 270D, the application processor, and the like.
Keys 290 include a power on key, a volume key, etc. The keys 290 may be mechanical keys. Or may be a touch key. The motor 291 may generate a vibration alert. The motor 291 may be used for incoming call vibration alerting or for touch vibration feedback. The indicator 292 may be an indicator light, which may be used to indicate a state of charge, a change in power, a message indicating a missed call, a notification, etc.
The plurality of cameras 293 are used to capture images. In the embodiment of the application, the number of cameras 293 can be M, M is more than or equal to 2, and M is a positive integer. The number of cameras started in multi-lens shooting of the electronic equipment can be N, N is more than or equal to 2 and less than or equal to M, and N is a positive integer.
In the embodiment of the present application, the cameras 293 may be differentiated by type according to their hardware configuration and physical location. For example, the plurality of cameras included in the camera 293 may be disposed on the front and back of the electronic device: a camera disposed on the same side as the display screen 294 of the electronic device may be referred to as a front camera, and a camera disposed on the side of the rear cover of the electronic device may be referred to as a rear camera. As another example, the cameras 293 may differ in focal length and viewing angle: a camera with a short focal length and a large viewing angle may be referred to as a wide-angle camera, and a camera with a longer focal length and a smaller viewing angle may be referred to as a normal camera. The content of the images shot by different cameras differs: the front camera is used for shooting the scene facing the front of the electronic device, while the rear camera is used for shooting the scene facing the back of the electronic device; the wide-angle camera can shoot a scene of a larger area within a shorter shooting distance, and at the same shooting distance the scene occupies a smaller portion of the picture than in an image of the scene shot with a normal lens. Focal length and viewing angle are relative concepts and are not limited by specific parameters; therefore, the wide-angle camera and the normal camera are also relative concepts, and can be distinguished according to physical parameters such as focal length and viewing angle.
In particular, in the embodiment of the present application, the cameras 293 include at least one camera with a time-of-flight (time of flight, TOF) 3D sensing module or a structured light (structured light) 3D sensing module. This camera acquires 3D data of the objects in the captured image, so that the processor 210 can identify the operation instruction corresponding to the user's space gesture according to the 3D data of the objects.
The camera for acquiring the 3D data of an object may be an independent low-power camera, or may be an ordinary front or rear camera that supports a low-power mode. When the low-power camera works, or when an ordinary front or rear camera works in the low-power mode, the frame rate of the camera is lower than that of an ordinary camera working in a non-low-power mode, and the output image is in black-and-white format. An ordinary camera may output 30, 60, 90, or 240 frames of images per second; the low-power camera (or an ordinary front or rear camera working in the low-power mode) may output, for example, 2.5 frames of images per second, and when the camera captures a first image representing a space gesture, it may switch to outputting 10 frames of images per second, so as to accurately recognize the operation instruction corresponding to the space gesture through recognition over multiple continuous images. Meanwhile, compared with the ordinary camera, the power consumption of the camera working in the low-power mode is reduced.
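The frame-rate behavior described above can be sketched as a tiny state machine: idle at a low rate, raise the rate once a candidate gesture frame is seen so the gesture can be confirmed over several frames, then drop back. The 2.5 and 10 frames-per-second values come from the text; the class and method names are assumptions for illustration.

```python
IDLE_FPS = 2.5    # example low-power output rate cited in the text
ACTIVE_FPS = 10   # example rate after a candidate gesture frame is captured

class LowPowerCamera:
    def __init__(self):
        self.fps = IDLE_FPS

    def on_frame(self, looks_like_gesture):
        """Update and return the output frame rate after each captured frame."""
        if looks_like_gesture:
            # a first image representing a space gesture was captured:
            # switch up so the gesture can be confirmed over multiple frames
            self.fps = ACTIVE_FPS
        else:
            # no gesture in view: fall back to the low-power idle rate
            self.fps = IDLE_FPS
        return self.fps
```

The point of the design is that the camera spends most of its time at the idle rate, paying the higher frame-rate cost only during the short window in which a gesture must be recognized.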
The display 294 is used to display images, videos, and the like. In some embodiments, the electronic device may include 1 or N displays 294, N being a positive integer greater than 1. In an embodiment of the present application, the display 294 may be used to display images taken by any one or more cameras 293, for example, to display multiple frames of images taken by one camera in a capture preview interface, or to display multiple frames of images from one camera 293 in a saved video file, or to display a photograph from one camera 293 in a saved picture file.
The SIM card interface 295 is for interfacing with a SIM card. The SIM card may be inserted into the SIM card interface 295 or removed from the SIM card interface 295 to enable contact and separation from the electronic device 200. The electronic device 200 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 295 may support Nano SIM cards, micro SIM cards, and the like.
Fig. 1B is a software structural block diagram of an electronic device according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime (Android runtime) and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 1B, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 1B, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is for providing communication functions of the electronic device. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar, and can be used to convey notification-type messages that automatically disappear after a short stay without requiring user interaction. For example, the notification manager is used to notify that a download is complete, to provide message alerts, and so on. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, an indicator light blinks, and so on.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer at least includes a display driver, a camera driver, an audio driver, and a sensor driver.
The workflow of the electronic device in software and hardware during shooting is illustrated in connection with capturing a photo scene.
When the touch sensor receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation) and stores it at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation as a tap, and the control corresponding to the tap as the camera application icon, as an example: the camera application calls the interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer, and a still image or video is captured through the camera 293. In the embodiment of the present application, the touch operation received by the touch sensor may be replaced by an operation in which the camera 293 captures a space gesture input by the user. Specifically, when the camera 293 captures a space-gesture operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the space-gesture operation into a raw input event (including the image of the space gesture, the timestamp of the space-gesture operation, etc.) and stores it at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the operation corresponding to the input event. Taking the operation of switching the shooting mode as an example: the camera application calls the interface of the application framework layer, and in turn starts the other camera driver by calling the kernel layer, so that the camera application switches to the other camera 293 to capture still images or videos.
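The two parallel flows above (a touch event and a space-gesture event) share the same shape: the kernel layer wraps the raw capture into an input event, and the framework layer resolves it to an operation before the camera application acts. A schematic sketch of that shape, with all event kinds, fields, and bindings invented for illustration:

```python
def kernel_make_event(kind, payload, timestamp):
    """Kernel layer: wrap a raw capture into a stored input event."""
    return {"kind": kind, "payload": payload, "ts": timestamp}

def framework_dispatch(event, bindings):
    """Framework layer: resolve an input event to an application operation."""
    return bindings.get((event["kind"], event["payload"]), "ignore")

# Hypothetical bindings mirroring the two examples in the text: tapping the
# camera icon launches the camera; a "palm" space gesture switches the mode.
bindings = {
    ("touch", "camera_icon"): "launch_camera",
    ("space_gesture", "palm"): "switch_shooting_mode",
}
```

Under this sketch, swapping the touch operation for a space gesture only changes which event the kernel layer produces; the dispatch path through the framework layer stays the same, which is exactly the substitution the text describes.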
Next, the specific content of the guided use method of the blank gesture provided by the application will be described with reference to the accompanying drawings.
The electronic device can pop up the guide popup window for the space gesture when entering multi-mirror video for the first time, showing the user how the space gesture is used. The electronic device can also pop up the guide popup window for the space gesture when the time difference between the last time the user used a space gesture and the current time is greater than the first preset duration and it is detected that the user needs to use a space gesture, again showing the user how the space gesture is used. The process by which the electronic device pops up the guide popup window in these two scenarios will be described below in conjunction with the accompanying drawings.
Referring to fig. 2A-2B, the process of the electronic device entering multi-mirror video for the first time is shown.
As shown in fig. 2A, the mobile phone may display a main interface 301. The main interface 301 may include an icon 302 of a camera application. The mobile phone may receive an operation of the user clicking the icon 302, and in response to the operation, as shown in fig. 2B, the mobile phone may open the camera application and display a shooting preview interface 303 of the camera application. It can be understood that the camera application is an application for capturing images on an electronic device such as a smart phone or a tablet computer; it may be a system application or a third-party application, and the name of the application is not limited by the present application. That is, the user may click the icon 302 of the camera application to open the shooting preview interface 303 of the camera application. Without being limited thereto, the user may also invoke the camera application from within other applications to open the shooting preview interface 303, for example by clicking a shooting control in a social application. The social application can support the user in sharing captured pictures or videos with others.
It should be noted that, the shooting preview interface 303 may be a user interface of a default shooting mode of the camera application, for example, may be a user interface provided when the camera application is in a front shooting mode. It will be appreciated that the default shooting mode may be other, such as a rear shooting mode, a front-rear shooting mode, and the like. Alternatively, the camera application may have a memory function, and the shooting preview interface 303 may be a user interface of a shooting mode in which the camera application is in when the camera application exits last time.
Fig. 2B illustrates an example in which the shooting preview interface 303 is the shooting preview interface corresponding to the camera application being in the front shooting mode. As shown in fig. 2B, the shooting preview interface 303 may include a preview image 304, a shooting mode option 305, a flash control, a shutter control, and the like. The preview image 304 is an image captured by the front camera among the plurality of cameras 293. It should be noted that the electronic device may refresh the image displayed in the shooting preview interface 303 (i.e., the preview image 304) in real time, so that the user previews the image currently captured by the camera 293. The shooting mode option 305 is used to provide a plurality of shooting modes for selection by the user. The plurality of shooting modes may include: photograph 305a, video 305b, multi-mirror video 305c, real-time blurring, panoramic, and the like. The electronic device may receive an operation of the user sliding the shooting mode option 305 left or right, and in response to the operation, the electronic device may turn on the shooting mode selected by the user. Note that, without being limited to what is shown in fig. 2B, more or fewer options than those shown in fig. 2B may be displayed in the shooting mode option 305.
The photographing mode corresponding to the photographing 305a, i.e. the commonly used single-lens photographing, may include a front photographing mode, a rear photographing mode, and the like. That is, when the photograph 305a is selected, the electronic device may take a photograph through the front camera or the rear camera. For specific description of the front shooting mode and the rear shooting mode, please refer to the foregoing, and the description is omitted herein.
The shooting modes corresponding to the multi-mirror video 305c may include multi-mirror shooting and single-mirror shooting. That is, when the multi-lens video 305c is selected, the electronic apparatus can perform single-lens shooting through one camera or multi-lens shooting through a plurality of cameras. For a specific description of the multi-shot image, reference may be made to the foregoing detailed description, which is not repeated herein.
As shown in fig. 2B, the photograph 305a is in a selected state. That is, the electronic device is currently in the photographing mode. If the user wishes to turn on the multi-mirror video mode, he/she can slide the shooting mode option 305 to the left and select the multi-mirror video 305c. When detecting the operation of the user sliding the shooting mode option 305 to the left and selecting the multi-mirror video 305c, the electronic device may turn on the multi-mirror video mode and display a shooting preview interface 401 as shown in fig. 3A.
As shown in fig. 3A, after entering the multi-mirror video mode, the electronic device may display a shooting preview interface 401. The shooting preview interface 401 includes an image 401a (which may also be referred to as a second image), an image 401b (which may also be referred to as a first image), a blank mirror control 402, a teaching guidance control 403, prompt information 404, a setting control 405, and a shooting mode switching control 406. The image 401a is an image captured in real time by the rear camera, the image 401b is an image captured in real time by the front camera, and the image 401a and the image 401b are stitched up and down because the electronic device is placed vertically. Specifically, the first image is located in a first region of the shooting preview interface, the second image is located in a second region of the shooting preview interface, and the first region and the second region do not overlap. Illustratively, the first region may be the region of the image 401b in fig. 3A, and the second region may be the region of the image 401a in fig. 3A.
In the present application, when the electronic device enters the multi-mirror video mode for the first time, the front camera and the rear camera are started by default, and the display mode of the images defaults to the stitching mode. However, in other embodiments, the cameras turned on by default may also be two rear cameras, a front camera only, or a rear camera only. The display mode of the images is likewise not limited to the stitching mode and may be a picture-in-picture mode or the like; no particular limitation is made herein. Alternatively, the camera application may have a memory function: after the electronic device enters the multi-mirror video mode, the electronic device may start the cameras that were working when the camera application last exited the multi-mirror video mode, and display the images in the last-used display mode.
The blank mirror control 402 can be used by the user to quickly turn the blank mirror function on or off. After the blank mirror function is turned on, the user can control the electronic device through blank gestures. It should be noted that after the electronic device enters the multi-mirror video mode, the blank mirror function may be turned on by default. Thus, the blank mirror control 402 is in an on state (which may also be referred to as a first state) for indicating that the blank mirror function has been turned on. Of course, the blank mirror function can also be turned off. For example, in response to detecting a touch operation by the user on the blank mirror control 402, the electronic device may turn off the blank mirror function. At this point, the blank mirror control 402 is in an off state (also referred to as a second state) for indicating that the blank mirror function has been turned off.
The teaching guidance control 403 (also referred to as a first control) may be used to guide the user in learning the blank gestures of the blank mirror function, such as the blank gesture for opening recording, the gesture for switching between the double mirror and the single mirror, the gesture for opening/closing the picture-in-picture, the gesture for exchanging the front and rear lenses, the gesture for ending recording, and the like. Note that the teaching guidance control 403 is associated with the blank mirror function: when the blank mirror function is turned on (or the blank mirror control 402 is in the on state), the shooting preview interface 401 may display the teaching guidance control 403; when the blank mirror function is turned off (or the blank mirror control 402 is in the off state), the shooting preview interface 401 may hide (which can be understood as not display) the teaching guidance control 403. It should be noted that when the electronic device starts recording, the teaching guidance control 403 may also be hidden.
The prompt information 404 (also referred to as a guidance prompt) is disposed at a preset position relative to the teaching guidance control 403 (for example, on the left side of the teaching guidance control 403 and pointing to it), and is used to remind the user to click the teaching guidance control 403 to view the teaching of the blank mirror gestures. For example, the prompt information 404 may read "Tap to view the 'blank mirror' gestures".
The setting control 405 may be used to adjust parameters of the captured picture (e.g., resolution, picture scale, etc.), to turn certain shooting aids on or off (e.g., blank mirror, timed photographing, smile capture, voice-controlled photographing, etc.), and the like. Specifically, the electronic device may receive an operation of the user clicking the setting control 405, and in response to the operation, the electronic device may display a setting interface 405a as illustrated in fig. 3B. The setting interface 405a may include options such as blank mirror 405b, photo scale, voice-controlled photographing, etc. The blank mirror 405b can be used to turn the blank mirror function on or off. Note that the on-off state of the blank mirror 405b in the setting interface 405a is linked with the on-off state of the blank mirror control 402 in the shooting preview interface 401, and the two states are kept consistent. That is, if the blank mirror 405b is switched from the on (off) state to the off (on) state, the blank mirror control 402 is also switched from the on (off) state to the off (on) state, and vice versa.
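The linkage between the blank mirror 405b and the blank mirror control 402 amounts to both widgets rendering from one shared on/off state. A minimal sketch under that assumption; the class and method names are illustrative:

```python
class BlankMirrorState:
    """Single source of truth for the blank mirror function, so that the
    control 402 in the preview interface and the option 405b in the settings
    interface can never disagree."""

    def __init__(self) -> None:
        self._on = True        # on by default after entering multi-mirror video
        self._listeners = []   # e.g. redraw callbacks of the two widgets

    def subscribe(self, callback) -> None:
        self._listeners.append(callback)

    def toggle(self) -> None:
        # Called by either widget; every subscriber sees the same new state.
        self._on = not self._on
        for callback in self._listeners:
            callback(self._on)

    @property
    def on(self) -> bool:
        return self._on
```

Toggling from either interface notifies both widgets, which keeps the two on-off indicators consistent as the paragraph requires.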
The shooting mode switch control 406 may be used to provide a variety of shooting modes for selection by a user, such as a front-back shooting mode, a back-back shooting mode, a picture-in-picture shooting mode, a front shooting mode, a back shooting mode, and so on.
It should be noted that the process of the electronic device re-entering the multi-mirror video mode may be consistent with the process shown in fig. 2A-2B. The difference is that when the electronic device re-enters the multi-mirror video mode, the prompt information 404 is no longer displayed on the shooting preview interface.
If the user wishes to view the teaching guidance for the blank gestures, the teaching guidance control 403 may be clicked. As shown in fig. 4, the electronic device may receive an operation of the user clicking the teaching guidance control 403, and in response to the operation, the electronic device may display a guide popup window 407 (also referred to as a second popup window) as shown in fig. 5A. The guide popup window 407 can be used to show the using method of the blank gestures and related auxiliary explanation. Specifically, the guide popup window 407 may include an animation display region 407a (which may also be referred to as a first display region), a text prompt region 407b (which may also be referred to as a second display region), and a confirmation option 407c. The animation display region 407a is used to play a plurality of guide animations so as to show the using method and effect of the blank gestures. In one design, the animation display region 407a may present content in an animated manner, which is more vivid and easier to understand, and helps the user quickly understand and learn the blank gestures. The content displayed in the text prompt region 407b matches the content displayed in the animation display region 407a, and can be used to assist in explaining the function of the blank gesture being shown in the animation display region 407a and its using method.
As shown in fig. 5A to 5C, the animation display region 407a may display a guide animation of opening recording with a blank gesture.
Specifically, as shown in fig. 5A, in the animation display region 407a, the electronic device is in the front-rear shooting mode. The electronic device may display an image 409a captured by the rear camera and an image 409b captured by the front camera on the shooting preview interface 408. The electronic device may also display a blank mirror icon 410 upon recognizing the "lift hand" gesture. As shown in fig. 5B, while the electronic device continuously detects that the "lift hand" gesture is held, the time progress bar of the blank mirror icon 410 may be gradually filled, and after a preset time (for example, 2 seconds), the shooting preview interface 408 shown in fig. 5C may be displayed. Compared with the shooting preview interface 408 shown in fig. 5A, the shooting preview interface 408 shown in fig. 5C is the shooting preview interface after the electronic device has entered the recording state.
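The filling progress bar amounts to requiring the "lift hand" gesture to be held continuously for the preset time. A per-frame sketch of that logic; the function name and frame rate are assumptions:

```python
def hold_to_start_recording(detections, hold_seconds=2.0, fps=30):
    """detections: per-frame booleans, True when the 'lift hand' gesture is seen.
    Returns the index of the frame at which recording starts, or None."""
    needed = int(hold_seconds * fps)
    consecutive = 0
    for i, seen in enumerate(detections):
        # The progress bar resets if the hand drops before the preset time elapses.
        consecutive = consecutive + 1 if seen else 0
        if consecutive >= needed:
            return i
    return None
```

A 1-frame gap in detection restarts the 2-second count, matching the behaviour of a progress bar that must fill without interruption.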
Correspondingly, the text prompt region 407b may display "Open recording with a blank gesture: raise your hand until the blank mirror icon 410 appears (the specific icon is shown in the figure), then hold for 2 seconds", so as to assist the user in understanding how to open recording using the blank gesture.
As shown in fig. 6A to 6F, the animation display region 407a may also display a guide animation of switching between the double mirror and the single mirror with a blank gesture.
Fig. 6A to 6C show a guide animation in which the electronic device is switched from a double mirror to a single mirror.
As shown in fig. 6A, in the animation display region 407a, the electronic device is in the front-rear shooting mode. The electronic device may display an image 409a captured by the rear camera and an image 409b captured by the front camera on the shooting preview interface 408, where the image 409a is located on the left side of the shooting preview interface 408 and the image 409b is located on the right side. As also shown in fig. 6A, the electronic device may display the blank mirror icon 410 upon recognizing the "lift hand" gesture. As shown in fig. 6B, after the electronic device detects the gesture of "palm sliding from right to left", it may switch the front-rear shooting mode to the front shooting mode and display the shooting preview interface 408 shown in fig. 6C, which includes the image 409b. During the switching process, the image 409a may move from right to left following the gesture, its display area gradually decreasing until it disappears from the shooting preview interface 408, while the image 409b also moves from right to left, its display area gradually increasing until it fills the entire shooting preview interface 408.
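The described transition, where the image 409a shrinks away while the image 409b grows to fill the interface, can be modelled as linear interpolation over a transition progress value. This is a simplifying assumption; the actual animation curve is not specified in the disclosure:

```python
def transition_widths(progress: float, total_width: int = 1080):
    """progress in [0.0, 1.0]: 0 = double mirror (even left-right split),
    1 = single mirror (front image fills the interface).
    Returns (width_409a, width_409b) in pixels."""
    rear = round(total_width * 0.5 * (1.0 - progress))  # 409a shrinks to nothing
    return rear, total_width - rear                      # 409b grows to fill
```

The reverse switch in fig. 6D-6F is the same interpolation run with progress going from 1 back to 0.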
Fig. 6D to 6F show a guide animation of the electronic device switching from the single mirror to the double mirror.
As shown in fig. 6D, the electronic device, while in the front shooting mode, may display an image 409b captured by the front camera in the shooting preview interface 408. In addition, the electronic device may also display the blank mirror icon 410 after recognizing the "lift hand" gesture. As shown in fig. 6E, after detecting the gesture of "palm sliding from left to right", the electronic device may switch the shooting mode from the front shooting mode to the front-rear shooting mode, and display the shooting preview interface 408 shown in fig. 6F. In fig. 6F, the electronic device displays both the image 409a captured by the rear camera and the image 409b captured by the front camera. The image 409a and the image 409b are displayed in a left-right stitched form: following the gesture of the palm sliding from left to right, the image 409a appears on the left side of the shooting preview interface 408 with its display area gradually increasing, while the image 409b moves toward the right side with its display area gradually decreasing.
Correspondingly, in fig. 6A-6F, the text prompt region 407b may display "Switch between the double mirror and the single mirror with a blank gesture: raise your hand until the blank mirror icon 410 appears, then slide your palm horizontally; sliding left pushes away the left picture and sliding right pushes away the right picture (the same applies when the screen is held vertically)", which is used to assist the user in understanding how to switch between the single mirror and the double mirror using the blank gesture.
As shown in fig. 7A to 7C, the animation display region 407a may also display a guide animation of opening/closing the picture-in-picture with a blank gesture.
As shown in fig. 7A, the electronic device is in the rear-end shooting mode, and an image 409a shot by the rear camera is displayed in the shooting preview interface 408. As also shown in fig. 7A, the electronic device may display a blank mirror icon 410 upon recognition of a "hand up" gesture. As shown in fig. 7B, the electronic device may, after recognizing the gesture of "lift hand and make a fist," turn on the picture-in-picture mode and display a shoot preview interface 408 as shown in fig. 7C. The photographing preview interface 408 shown in fig. 7C displays an image 409a and an image 409b at the same time. Wherein image 409a is still displayed in the entire display area of the photographing preview interface 408, and image 409b is superimposed on this image 409a.
In addition, the animation display region 407A may also display the inverse of fig. 7A-7C. That is, the electronic device may display the photographing preview interface 408 shown in fig. 7B after detecting a gesture of "lift hand" in the photographing preview interface 408 shown in fig. 7C. Then, after a gesture of "lift hand and make a fist" is detected in the shooting preview interface 408 shown in fig. 7B, the shooting preview interface 408 shown in fig. 7A is displayed. In other words, the user can turn on or off the picture-in-picture by this "lift hand and make a fist" gesture.
In fig. 7A-7C, the text prompt region 407b may display "Open/close the picture-in-picture with a blank gesture: raise your hand until the blank mirror icon 410 appears, then make a fist", to assist the user in understanding how to open/close the picture-in-picture using the blank gesture.
In one possible design, before the electronic device displays the shooting preview interface 408 shown in fig. 7A, the electronic device may also display the shooting preview interface 408 shown in fig. 7D, which is the interface displayed by the electronic device before the "lift hand" gesture is recognized. This shooting preview interface 408 may display a prompt 411, where the prompt 411 is used to inform the user that, when in the rear shooting mode, the electronic device may use the front camera to recognize the blank gestures, so as to avoid the user feeling that privacy is being violated. In the embodiment of the present application, the prompt 411 may be an icon. In other embodiments, the prompt 411 may take the form of text or a combination of text and icons. It should be noted that the scene shown in fig. 7D is not limited to being combined with the guide animation of switching the double mirror and the single mirror; it may also be displayed in the animation display region 407a as a separate guide animation.
As shown in fig. 8A to 8C, the animation display region 407a may also display a guide animation of switching between the front and rear lenses with a blank gesture.
As shown in fig. 8A, the electronic device is in the rear shooting mode, and an image 409a captured by the rear camera is displayed in the shooting preview interface 408. The electronic device may display the blank mirror icon 410 after the "lift hand" gesture is recognized. As shown in fig. 8B, after recognizing the gesture of "flipping from the palm to the back of the hand", the electronic device may switch the shooting mode from the rear shooting mode to the front shooting mode, and display the shooting preview interface 408 shown in fig. 8C. This shooting preview interface 408 displays an image 409b captured by the front camera.
In fig. 8A-8C, the text prompt region 407b may display "Switch between the front and rear lenses with a blank gesture: raise your hand until the blank mirror icon 410 appears, then flip your hand from the palm to the back of the hand", to assist the user in understanding how to switch between the front and rear lenses using the blank gesture.
In the embodiment of the present application, before the electronic device displays the shooting preview interface 408 shown in fig. 8A, the electronic device may also display the shooting preview interface 408 shown in fig. 7D, and the specific content thereof is referred to the foregoing and is not described herein.
As shown in fig. 9A to 9C, the animation display region 407a may also display a guide animation of ending recording with a blank gesture.
As shown in fig. 9A, the electronic device is in the front-rear shooting mode, and an image 409a captured by the rear camera and an image 409b captured by the front camera are displayed in the shooting preview interface 408, the image 409a being stitched side by side with the image 409b. The electronic device may display the blank mirror icon 410 upon recognizing the "lift hand" gesture. As shown in fig. 9B, in the scenario shown in fig. 9A, the electronic device may display the shooting preview interface 408 shown in fig. 9C after recognizing the "OK" gesture. Compared with the shooting preview interface 408 shown in fig. 9A, the shooting preview interface 408 shown in fig. 9C is the shooting preview interface after the electronic device has ended the recording state.
In fig. 9A-9C, the text prompt region 407b may display "End recording with a blank gesture: raise your hand until the blank mirror icon 410 appears, then touch your thumb and index finger together to form a circle, with the other fingers naturally bent", to assist the user in understanding how to end recording using the blank gesture.
In the guide popup window 407, the animation display region 407a can automatically play the guide animations in a loop, in the order of: opening recording with a blank gesture, switching between the double mirror and the single mirror, opening/closing the picture-in-picture, switching between the front and rear lenses, and ending recording. It should be noted that the playing sequence does not have to follow this order; other playing sequences are possible, and no particular limitation is made herein.
It should be noted that the above-mentioned blank gestures are only examples, and a blank gesture may also be another gesture, for example, a "victory" gesture, a gesture of "waving the hand from left to right", and so on.
In the embodiment of the present application, the electronic device may further receive an operation of the user sliding on the guide popup window 407, and in response to the operation, the electronic device may switch the guide animation being played in the animation display region 407a and the prompt information displayed in the text prompt region 407b. For example, suppose the electronic device plays the corresponding guide animations in the order of: opening recording with a blank gesture, switching between the double mirror and the single mirror, opening/closing the picture-in-picture, switching between the front and rear lenses, and ending recording. Then, as shown in fig. 10, the electronic device may display the guide animation of switching between the front and rear lenses and the related text explanation in the guide popup window 407. The electronic device may receive an operation of the user sliding left on the guide popup window 407, and in response to the operation, as shown in fig. 9A, the electronic device switches the guide animation displayed in the guide popup window 407 to the guide animation of ending recording with a blank gesture.
It should be noted that after the electronic device receives an operation of the user manually switching the animation, the electronic device no longer automatically plays the plurality of guide animations. That is, once the user manually switches the animation, the electronic device switches the guide animation played in the guide popup window 407 again only after receiving another animation-switching operation from the user.
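The playback behaviour described above (automatic looping that is permanently disabled by one manual swipe) can be sketched as a small state machine. The short animation names are shorthand for the five guide animations; the class name is an assumption:

```python
class GuideCarousel:
    ORDER = ["open_recording", "switch_mirrors", "picture_in_picture",
             "swap_lenses", "end_recording"]

    def __init__(self) -> None:
        self.index = 0
        self.autoplay = True

    def on_animation_finished(self) -> None:
        if self.autoplay:  # loop automatically until the user intervenes
            self.index = (self.index + 1) % len(self.ORDER)

    def on_user_swipe(self, step: int) -> None:
        self.autoplay = False  # one manual switch disables the automatic loop
        self.index = (self.index + step) % len(self.ORDER)

    @property
    def current(self) -> str:
        return self.ORDER[self.index]
```

After `on_user_swipe` is called once, `on_animation_finished` no longer advances the carousel, matching the paragraph above.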
The confirmation option 407c is used for the user to end the teaching guidance in advance. For example, the confirmation option 407c may read "Got it". For example, as shown in fig. 11A, the electronic device may receive an operation of the user clicking the confirmation option 407c, and in response to the operation, the electronic device closes the guide popup window 407 and displays the shooting preview interface 401 shown in fig. 11B. This shooting preview interface 401 is similar to the shooting preview interface 401 shown in fig. 4, except that it no longer displays the prompt information 404. It should be noted that no matter which guide animation is playing, clicking the confirmation option 407c causes the electronic device to close the guide popup window 407.
It can be seen that, before the electronic device receives an instruction for switching the animation or an instruction for ending the guidance, the electronic device can automatically play the plurality of guide animations. After the electronic device receives an instruction for switching the animation, it no longer automatically loops through the plurality of guide animations. After receiving an instruction for ending the guidance, the electronic device can end the teaching guidance.
In the embodiment of the present application, if the electronic device does not detect the user clicking the teaching guidance control 403, and the user does not use a blank gesture, the electronic device may actively display the guide popup window 407 when entering the multi-mirror video mode for the second time, and actively guide the user to learn the blank gestures. The description of the guide popup window 407 refers to the relevant content in the foregoing and is not repeated herein. In an alternative embodiment, the electronic device may detect whether it has switched the shooting mode in response to the first camera detecting a blank gesture. If the electronic device has switched the shooting mode in response to the first camera detecting the blank gesture, the user may be considered to have used the blank gesture. If the electronic device has not switched the shooting mode in response to the first camera detecting the blank gesture, the user may be considered to have not used the blank gesture.
That is, the electronic device may remind the user to view the teaching guidance about the blank gestures when entering the multi-mirror video mode for the first time. If the user does not view the teaching guidance when the electronic device enters the multi-mirror video mode for the first time (which can be understood as the electronic device not detecting the user clicking the teaching guidance control 403), the teaching guidance about the blank gestures can be automatically played when the electronic device enters the multi-mirror video mode for the second time. By guiding the user multiple times to watch the teaching guidance about the blank gestures, the probability that the user views the teaching guidance can be increased, so that the user learns the blank gestures as far as possible, which improves human-computer interaction efficiency.
The guide popup window 407 is displayed when the electronic device enters the multi-mirror video mode for the first time, and is mainly aimed at guiding the user to learn how to use the blank gestures. However, after learning them, the user may not actually use the blank gestures frequently. Thus, the electronic device may also guide the user to use the blank gestures at an appropriate time.
Specifically, the electronic device may pop up the guide popup window of the blank gesture when the time difference between the time when the user was last detected using a blank gesture (i.e., the first time, at which the electronic device switched the shooting mode in response to the first camera detecting the blank gesture) and the current time is greater than the first preset time and the user is detected to have a need to use the blank gesture, and display the using method of the blank gesture to the user. Here, the electronic device is in the recording state of the multi-mirror video mode.
If the time difference between the last time the user used a blank gesture and the current time is greater than the first preset time, the user can be considered to have used the blank gesture before, but may not have used it again for a long time, perhaps because the user forgot to use it or forgot the gesture itself. Reminding the user only in this case also avoids harming the user experience by reminding the user too frequently. The blank gesture may be any one of the gesture of opening recording, the gesture of switching between the double mirror and the single mirror, the gesture of opening/closing the picture-in-picture, the gesture of switching between the front and rear lenses, and the gesture of ending recording.
In an alternative design, if the electronic device detects, within a second preset time after recording is started, that the distance between the user in the front-facing picture and the display screen is greater than or equal to a preset distance, the electronic device can be considered to have detected that the user has a need to use the blank gesture.
The user in the front-facing picture can be understood as the user photographed by the front-facing camera. The distance between the user and the display screen in the front-facing screen being greater than or equal to the preset distance can be understood that the shooting preview interface of the electronic device is displaying the image shot by the front-facing camera, and the distance between the user and the display screen is greater than or equal to the preset distance. The preset distance may refer to a distance at which the user cannot touch the display screen without reducing the distance between the user and the display screen. For example, the preset distance may be a distance of 60 cm from the display screen when the user records using the selfie stick-holding electronic device.
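This condition can be expressed directly as a predicate. The 60 cm figure comes from the selfie-stick example above, while the concrete length of the second preset time and the function name are assumptions:

```python
PRESET_DISTANCE_CM = 60        # selfie-stick example distance from the text
SECOND_PRESET_SECONDS = 10.0   # hypothetical length of the second preset time

def user_needs_blank_gesture(seconds_since_record_start: float,
                             front_user_distance_cm: float) -> bool:
    """True when, within the second preset time after recording starts, the
    user in the front-facing picture cannot reach the display screen."""
    return (seconds_since_record_start <= SECOND_PRESET_SECONDS
            and front_user_distance_cm >= PRESET_DISTANCE_CM)
```

A user 80 cm away shortly after recording starts triggers the hint; a user within arm's reach, or a distant user detected only much later, does not.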
It can be appreciated that, in the above scenario, if the user switches the shooting mode by touching the shooting mode switching control 406 on the display screen, the view of the front camera is easily affected, so that the quality of the video is affected. Thus, in this case, the user may be considered to have a need to use the air-break gesture, which may be guided.
For example, the electronic device may display the shooting preview interface 401 shown in fig. 12 when detecting, within the second preset time after recording is started, that the distance between the user in the front-facing picture and the display screen is greater than or equal to the preset distance. This shooting preview interface 401 is similar to the shooting preview interface 401 shown in fig. 11B, except that the image 401a and the image 401b in the shooting preview interface 401 are stitched left and right because the electronic device is placed horizontally. The shooting preview interface 401 further includes a guide popup window 412 (also referred to as a first popup window). The guide popup window 412 is used to present blank gestures so as to guide the user to use them. For example, the guide popup window 412 may display, in turn, the gesture of switching between the double mirror and the single mirror, the gesture of switching between the front and rear lenses, and the gesture of opening/closing the picture-in-picture.
In an alternative design, if the electronic device detects that a first ratio decreases within the second preset time after recording starts, where the first ratio is the area proportion of the face region in the first image, the user can be considered to have a need to use the blank gesture, and the electronic device may display the guide popup window 412 shown in fig. 12.
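A decreasing first ratio can be detected from samples of the face-area proportion taken while recording. The sampling scheme and the minimum-drop threshold below are illustrative assumptions:

```python
def first_ratio(face_area: float, image_area: float) -> float:
    """Area proportion of the face region in the first (front-camera) image."""
    return face_area / image_area

def ratio_decreased(samples, min_drop: float = 0.05) -> bool:
    """samples: first-ratio values sampled over the second preset time.
    A drop larger than min_drop suggests the user has moved away from the
    display screen and may need the blank gestures."""
    return len(samples) >= 2 and samples[0] - samples[-1] > min_drop
```

The face shrinking from 30% to 18% of the frame counts as moving away; a stable or growing face does not.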
In an alternative design, when the electronic device is connected to a selfie stick, the user may be considered to have a need to use the air gesture.
The electronic device can be connected to the selfie stick through the headphone jack, Bluetooth, Wi-Fi, and the like. For example, after the electronic device turns on its Bluetooth function, it may receive a pairing request sent by the selfie stick. The pairing request may carry the device name, media access control (MAC) address, and device type identifier of the selfie stick. The device type identifier indicates that the device type of the sender of the pairing request is a selfie stick. Therefore, after pairing with and connecting to the selfie stick, the electronic device can confirm, according to the device type identifier, that a selfie stick is connected.
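The device-type check described above can be sketched as follows. The field names, the dictionary payload shape, and the `DEVICE_TYPE_SELFIE_STICK` code are illustrative assumptions, not part of any real Bluetooth stack or of this application.

```python
# Hypothetical sketch: recognize a selfie stick from a pairing request
# that carries a device name, MAC address, and device type identifier.

DEVICE_TYPE_SELFIE_STICK = 0x21  # assumed device-type code

def parse_pairing_request(payload: dict) -> dict:
    """Extract the fields described above from a pairing request."""
    return {
        "name": payload["device_name"],
        "mac": payload["mac_address"],
        "type": payload["device_type"],
    }

def is_selfie_stick(request: dict) -> bool:
    """True if the sender identifies itself as a selfie stick."""
    return request["type"] == DEVICE_TYPE_SELFIE_STICK

req = parse_pairing_request({
    "device_name": "SelfieStick-01",
    "mac_address": "AA:BB:CC:DD:EE:FF",
    "device_type": DEVICE_TYPE_SELFIE_STICK,
})
```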
When the electronic device is connected to the selfie stick, the user can be assumed to need remote shooting. In this case, if the user switches shooting modes by touching the shooting mode switching control 406 on the display screen, the framing of the front camera will be disturbed, degrading the quality of the video. Thus, the user may be considered to have a need to use the air gesture and may be guided to use it.
When the electronic device is connected to the selfie stick, it may display the shooting preview interface 401 and the guide pop-up window 412 shown in fig. 12. For a detailed description of the shooting preview interface 401, refer to the description above; it is not repeated here.
It should be noted that, when the electronic device is already in the recording state, if it detects that a selfie stick has been connected, it may display the guide pop-up window 412 shown in fig. 12. Alternatively, when the electronic device is already connected to the selfie stick, it may start recording video in response to receiving the first operation of the user and display the guide pop-up window 412 on the shooting preview interface 401. That is, the electronic device may first connect to the selfie stick and then display the first pop-up window after video recording starts. In other words, as long as the time difference between the time when the user last used the air gesture (also referred to as the first time) and the current time is greater than the first preset time, the electronic device can display the guide pop-up window 412 once the two conditions, namely that the electronic device is recording video and that a selfie stick is connected, are both satisfied, regardless of the order in which they occur.
In an alternative design, when the electronic device detects that the user switches shooting modes on the display screen, the user may be considered to have a need to use the air gesture.
Specifically, as shown in fig. 13A, the electronic device may receive an operation in which the user clicks the shooting mode switching control 406 (also referred to as a second control), and in response, display the shooting preview interface 401 shown in fig. 13B. As shown in fig. 13B, the shooting preview interface 401 may include the guide pop-up window 412 and a shooting mode selection area 413 (also referred to as a third pop-up window). For the contents of the guide pop-up window 412, refer to the description above; it is not repeated here. The shooting mode selection area 413 may display preview interfaces of multiple shooting modes, such as a preview interface 413a of a front shooting mode, a preview interface 413b of a rear shooting mode, a preview interface 413c of a picture-in-picture shooting mode, a preview interface 413d of a rear shooting mode, and a preview interface 413e of a front-and-rear shooting mode.
In summary, when the electronic device is in the recording state of the multi-lens video, it may display the guide pop-up window 412 shown in fig. 12 if it detects that the time difference between the time when the user last used the air gesture and the current time is greater than the first preset time, and detects, within the second preset time after recording starts, that the distance between the user in the front-facing picture and the display screen is greater than or equal to the preset distance. Likewise, it may display the guide pop-up window 412 shown in fig. 12 if the time difference is greater than the first preset time and the electronic device is connected to a selfie stick. Finally, it may display the guide pop-up window 412 shown in fig. 13B if the time difference is greater than the first preset time and it detects that the user switches shooting modes on the display screen.
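The three trigger conditions summarized above can be consolidated into a single decision sketch. All names and the 24-hour and 10-second default values are assumptions for illustration, not values stated in this application.

```python
# Hypothetical sketch of the guide pop-up decision logic: the pop-up is
# shown only while recording, only if the air gesture has not been used
# for longer than the first preset time, and only if at least one of the
# three "user needs the air gesture" signals is present.

FIRST_PRESET_TIME = 24 * 3600   # seconds since last air-gesture use (assumed)
SECOND_PRESET_TIME = 10         # window after recording starts (assumed)
PRESET_DISTANCE_CM = 60         # out-of-reach distance (from the example above)

def should_show_guide_popup(now, last_gesture_time, recording,
                            seconds_since_record_start,
                            user_distance_cm=None,
                            selfie_stick_connected=False,
                            mode_switched_on_screen=False):
    """Return True if the guide pop-up window should be displayed."""
    if not recording:
        return False
    if now - last_gesture_time <= FIRST_PRESET_TIME:
        return False
    out_of_reach = (user_distance_cm is not None
                    and seconds_since_record_start <= SECOND_PRESET_TIME
                    and user_distance_cm >= PRESET_DISTANCE_CM)
    return out_of_reach or selfie_stick_connected or mode_switched_on_screen
```

Under this sketch the selfie-stick and on-screen-switch signals are not limited to the second preset time, matching the order-independent behavior described for the selfie-stick case above.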
The above description illustrates the guiding method of the air gesture only for the case where the electronic device is in the front-and-rear shooting mode. In practice, the electronic device may be in any shooting mode under multi-lens video recording, for example, the picture-in-picture shooting mode, the front shooting mode, or the rear shooting mode, which is not particularly limited herein.
Therefore, the guided use method of the air gesture provided by the present application can remind the user to learn the air gesture when the electronic device enters multi-lens video recording for the first time. In particular, the electronic device can pop up the guide pop-up window of the air gesture when the time difference between the last time the user used the air gesture and the current time is greater than the first preset time and the user is detected to have a need to use the air gesture. By showing the user how to use the air gesture, this increases the likelihood that the user will use it and improves human-computer interaction efficiency.
The present application also provides a chip system 1400, as shown in FIG. 14, comprising at least one processor 1401 and at least one interface circuit 1402. The processor 1401 and the interface circuit 1402 may be interconnected by wires. For example, interface circuit 1402 may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, interface circuit 1402 may be used to send signals to other devices (e.g., processor 1401).
For example, the interface circuit 1402 may read instructions stored in a memory in the electronic device and send the instructions to the processor 1401. The instructions, when executed by the processor 1401, may cause the electronic device to perform the steps of the various embodiments described above.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above. The specific working processes of the above-described systems, devices and units may refer to the corresponding processes in the foregoing method embodiments, which are not described herein.
The functional units in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disc, and the like.
The foregoing is merely a specific implementation of the embodiment of the present application, but the protection scope of the embodiment of the present application is not limited to this, and any changes or substitutions within the technical scope disclosed in the embodiment of the present application should be covered in the protection scope of the embodiment of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (21)

1. A method of guided use of a blank gesture, the method being applied to an electronic device comprising a display screen, a first camera, and a second camera, the first camera and the second camera being located on different sides of the display screen, the first camera and the display screen being located on the same side of the electronic device, the method comprising:
the electronic equipment displays a shooting preview interface, wherein the shooting preview interface comprises at least one of a first image and a second image, the first image is an image acquired by the first camera in real time, and the second image is an image acquired by the second camera in real time;
if the time difference between a first time and the current time is greater than a first preset time, in response to detecting that the electronic device is recording video and is connected to a selfie stick, the electronic device displays a first popup window on the shooting preview interface, wherein the first popup window comprises a gesture identification, the gesture identification is used for guiding a user to switch a shooting mode by using an air gesture, and the gesture identification comprises: a first gesture identifier, a second gesture identifier, and a third gesture identifier, wherein the first gesture identifier indicates a gesture of moving to two sides, the second gesture identifier indicates a gesture of turning over a palm, the third gesture identifier indicates a gesture of making a fist, and the first time is the time when the electronic device last switched the shooting mode in response to the first camera detecting an air gesture.
2. The method of claim 1, wherein in response to detecting that the electronic device is recording video, the electronic device connects to a selfie stick, the electronic device displays a first pop-up window in the shooting preview interface, comprising:
in response to receiving a first operation of a user, the electronic device begins recording video;
in response to detecting connection of the selfie stick, the electronic device displays the first popup window on the shooting preview interface.
3. The method of claim 1, wherein in response to detecting that the electronic device is recording video, the electronic device connects to a selfie stick, the electronic device displays a first pop-up window in the shooting preview interface, comprising:
the electronic equipment is connected with the selfie stick;
and in response to receiving a first operation of a user, the electronic equipment starts to record video, and the electronic equipment displays the first popup window on the shooting preview interface.
4. A method according to any of claims 1-3, wherein the shooting preview interface includes a first control, the method further comprising:
responding to the operation of the user on the first control, and displaying a second popup window on the shooting preview interface by the electronic equipment;
And the electronic equipment circularly plays a plurality of guide videos in the second popup window according to a preset sequence, and each guide video in the plurality of guide videos is used for displaying a using method of a space gesture.
5. The method according to claim 4, wherein the method further comprises:
in response to an operation of the user sliding the second popup window leftward, the electronic device displays, on the second popup window, the previous guiding video adjacent to the first video being played, and stops the cyclic playing;
or, in response to an operation of the user sliding the second popup window rightward, the electronic device displays, on the second popup window, the next guiding video adjacent to the first video, and stops the cyclic playing.
6. The method of claim 4, wherein the second popup window includes a first display area and a second display area, the first display area is used for circularly playing the plurality of guiding videos, the second display area is used for circularly displaying a plurality of prompting messages corresponding to the plurality of guiding videos, the plurality of guiding videos are in one-to-one correspondence with the plurality of prompting messages, each prompting message is used for explaining a function and a use method of a blank gesture displayed by the corresponding guiding video, and the guiding video being displayed by the first display area corresponds to the prompting message being displayed by the second display area.
7. The method of claim 4, wherein the second popup window includes a confirmation option, the method further comprising:
and responding to the operation of the user on the confirmation option, and closing the second popup window by the electronic equipment.
8. The method according to claim 4, wherein the method further comprises:
if the electronic device detects that the user has not clicked the first control and has not switched the shooting mode in response to the first camera detecting an air gesture, the electronic device displays the second popup window on the shooting preview interface.
9. The method of claim 4, wherein the capture preview interface further comprises a guide prompt disposed at a preset area of the first control, the guide prompt being configured to instruct a user to click on the first control to view a guide video.
10. The method of any of claims 1-3, wherein the capture preview interface further comprises a second control, the method further comprising:
in response to detecting the operation of the user on the second control, the electronic device displays a third popup window on the shooting preview interface, wherein the third popup window comprises preview interfaces of multiple shooting modes;
if the electronic device is recording video and the time difference between the first time and the current time is greater than the first preset time, the electronic device displays the first popup window on the shooting preview interface, and the first popup window does not overlap the third popup window.
11. A method of guided use of a blank gesture, the method being applied to an electronic device comprising a display screen, a first camera, and a second camera, the first camera and the second camera being located on different sides of the display screen, the first camera and the display screen being located on the same side of the electronic device, the method comprising:
the electronic equipment displays a shooting preview interface, wherein the shooting preview interface comprises at least one of a first image and a second image, the first image is an image acquired by the first camera in real time, and the second image is an image acquired by the second camera in real time;
in response to receiving a first operation of a user, the electronic device begins recording video;
if the time difference between the first time and the current time is greater than a first preset time, in response to detecting that the distance between the user shot by the first camera and the display screen is greater than or equal to a preset distance in a second preset time after video recording begins, the electronic device displays a first popup window on the shooting preview interface, the first popup window comprises a gesture identification, the gesture identification is used for guiding the user to switch shooting modes by using a spaced gesture, and the gesture identification comprises: the electronic device comprises a first gesture identifier, a second gesture identifier and a third gesture identifier, wherein the first gesture identifier indicates a gesture moving to two sides, the second gesture identifier indicates a gesture of turning over a palm, the third gesture identifier indicates a gesture of making a fist, and the first time is the time when the electronic device last responds to the first camera to detect a blank gesture and switches shooting modes.
12. The method of claim 11, wherein the capture preview interface includes a first control, the method further comprising:
responding to the operation of the user on the first control, and displaying a second popup window on the shooting preview interface by the electronic equipment;
and the electronic equipment circularly plays a plurality of guide videos in the second popup window according to a preset sequence, and each guide video in the plurality of guide videos is used for displaying a using method of a space gesture.
13. The method according to claim 12, wherein the method further comprises:
in response to an operation of the user sliding the second popup window leftward, the electronic device displays, on the second popup window, the previous guiding video adjacent to the first video being played, and stops the cyclic playing;
or, in response to an operation of the user sliding the second popup window rightward, the electronic device displays, on the second popup window, the next guiding video adjacent to the first video, and stops the cyclic playing.
14. The method of claim 12, wherein the second popup window includes a first display area and a second display area, the first display area is used for circularly playing the plurality of guiding videos, the second display area is used for circularly displaying a plurality of prompting messages corresponding to the plurality of guiding videos, the plurality of guiding videos are in one-to-one correspondence with the plurality of prompting messages, each prompting message is used for explaining a function and a use method of a blank gesture displayed by the corresponding guiding video, and the guiding video being displayed by the first display area corresponds to the prompting message being displayed by the second display area.
15. The method of claim 12, wherein the second popup window includes a confirmation option, the method further comprising:
and responding to the operation of the user on the confirmation option, and closing the second popup window by the electronic equipment.
16. The method according to any one of claims 12-15, characterized in that the method further comprises:
if the electronic device detects that the user has not clicked the first control and has not switched the shooting mode in response to the first camera detecting an air gesture, the electronic device displays the second popup window on the shooting preview interface.
17. The method of claim 16, wherein the capture preview interface further comprises a guide prompt disposed at a preset area of the first control, the guide prompt for instructing a user to click on the first control to view a guide video.
18. The method of any of claims 12-15, wherein the capture preview interface further comprises a second control, the method further comprising:
in response to detecting the operation of the user on the second control, the electronic device displays a third popup window on the shooting preview interface, wherein the third popup window comprises preview interfaces of multiple shooting modes;
if the electronic device is recording video and the time difference between the first time and the current time is greater than the first preset time, the electronic device displays the first popup window on the shooting preview interface, and the first popup window does not overlap the third popup window.
19. A method of guided use of a blank gesture, the method being applied to an electronic device comprising a display screen, a first camera, and a second camera, the first camera and the second camera being located on different sides of the display screen, the first camera and the display screen being located on the same side of the electronic device, the method comprising:
the electronic equipment displays a shooting preview interface, wherein the shooting preview interface comprises at least one of a first image and a second image, the first image is an image acquired by the first camera in real time, and the second image is an image acquired by the second camera in real time;
in response to receiving a first operation of a user, the electronic device begins recording video;
if the time difference between the first time and the current time is greater than a first preset time, in response to detecting that the first image duty ratio is reduced in a second preset time after video recording is started, the electronic device displays a first popup window on the shooting preview interface, the first popup window comprises a gesture identifier, the gesture identifier is used for guiding a user to switch shooting modes by using a spaced gesture, and the gesture identifier comprises: the electronic device comprises a first gesture identifier, a second gesture identifier and a third gesture identifier, wherein the first gesture identifier indicates a gesture moving to two sides, the second gesture identifier indicates a gesture of turning over a palm, the third gesture identifier indicates a gesture of making a fist, the first time is the time when the electronic device last responds to the first camera to detect a blank gesture and switches a shooting mode, and the first image occupation ratio is the area occupation ratio of a face area in the first image.
20. An electronic device comprising a display screen, a first camera, a second camera, and a processor, the processor and memory coupled, the memory storing program instructions that, when executed by the processor, cause the electronic device to implement the method of any one of claims 1-19.
21. A computer-readable storage medium comprising computer instructions;
the computer instructions, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1-19.
CN202111679527.8A 2021-06-16 2021-12-31 Guide use method of air separation gesture and electronic equipment Active CN115484394B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202110676709 2021-06-16
CN2021106767093 2021-06-16
CN2021114363119 2021-11-29
CN202111436311 2021-11-29

Publications (2)

Publication Number Publication Date
CN115484394A CN115484394A (en) 2022-12-16
CN115484394B true CN115484394B (en) 2023-11-14

Family

ID=84420576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111679527.8A Active CN115484394B (en) 2021-06-16 2021-12-31 Guide use method of air separation gesture and electronic equipment

Country Status (1)

Country Link
CN (1) CN115484394B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3012732A1 (en) * 2014-10-24 2016-04-27 LG Electronics Inc. Mobile terminal and controlling method thereof
CN106055098A (en) * 2016-05-24 2016-10-26 北京小米移动软件有限公司 Air gesture operation method and apparatus
CN106250021A (en) * 2016-07-29 2016-12-21 维沃移动通信有限公司 A kind of control method taken pictures and mobile terminal
CN107613207A (en) * 2017-09-29 2018-01-19 努比亚技术有限公司 A kind of camera control method, equipment and computer-readable recording medium
CN110045819A (en) * 2019-03-01 2019-07-23 华为技术有限公司 A kind of gesture processing method and equipment
US10551995B1 (en) * 2013-09-26 2020-02-04 Twitter, Inc. Overlay user interface
CN111787223A (en) * 2020-06-30 2020-10-16 维沃移动通信有限公司 Video shooting method and device and electronic equipment
CN112394811A (en) * 2019-08-19 2021-02-23 华为技术有限公司 Interaction method for air-separating gesture and electronic equipment
CN112929558A (en) * 2019-12-06 2021-06-08 荣耀终端有限公司 Image processing method and electronic device
CN112954218A (en) * 2019-03-18 2021-06-11 荣耀终端有限公司 Multi-channel video recording method and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8830302B2 (en) * 2011-08-24 2014-09-09 Lg Electronics Inc. Gesture-based user interface method and apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Advances in ergonomics research on mouse gestures (鼠标手势的工效学研究进展); Liu Zihui; Chen Shuo; Chinese Journal of Ergonomics (人类工效学), Issue 01; full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant