CN117714568A - Screen display method, device and storage medium - Google Patents


Info

Publication number
CN117714568A
CN117714568A
Authority
CN
China
Prior art keywords
screen
camera
user
electronic equipment
main screen
Prior art date
Legal status
Pending
Application number
CN202310750090.5A
Other languages
Chinese (zh)
Inventor
李春杰
王国英
牛群超
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd

Abstract

Embodiments of the present application provide a screen display method, device, and storage medium, applied to an electronic device having a foldable screen that can be folded outwards. When the foldable screen is in the folded state, it is divided into a main screen and a secondary screen, with a first camera located on the front of the main screen and a second camera located on the back of the main screen. The method includes: at a first moment, the foldable screen is in the folded state, the main screen faces the user, the image captured by the first camera or the second camera is displayed on the main screen, and the secondary screen is in a screen-off state; at a second moment, the user begins to flip the electronic device, and when the device has been flipped so that the secondary screen faces the user, the secondary screen displays the image captured by the second camera and the main screen is in a screen-off state; the second moment is later than the first moment. With this method, after flipping the phone the user can view the rear-camera selfie preview on the secondary screen, which improves the user's photographing experience.

Description

Screen display method, device and storage medium
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a screen display method, device, and storage medium.
Background
An out-folding phone is a mobile phone whose screen can be folded outwards. Illustratively, the out-folding phone 100 shown in fig. 1 includes a main screen 101 and a secondary screen 102; a camera 103 (front camera) is disposed on the main screen 101, and a camera module 104 (rear camera) is disposed on the back of the main screen 101. When the screen of the out-folding phone 100 is in the folded state, the user can light up the main screen 101 and start a camera application on it to shoot; the captured image is displayed on the main screen 101, and if the user does not unfold the screen during shooting, the secondary screen 102 remains black. As with a bar phone, if the user takes a selfie with the camera module 104 of the out-folding phone 100, the user cannot view the selfie preview while shooting.
Disclosure of Invention
Embodiments of the present application provide a screen display method, device, and storage medium that allow a user to view the rear-camera selfie preview on the secondary screen after flipping the phone.
In a first aspect, an embodiment of the present application provides a screen display method, applied to an electronic device having a foldable screen that can be folded outwards. When the foldable screen is in the folded state, it is divided into a main screen and a secondary screen, with a first camera located on the front of the main screen and a second camera located on the back of the main screen. The method includes: at a first moment, the foldable screen is in the folded state, the main screen faces the user, the image captured by the first camera or the second camera is displayed on the main screen, and the secondary screen is in a screen-off state; at a second moment, the user begins to flip the electronic device, and when the device has been flipped so that the secondary screen faces the user, the secondary screen displays the image captured by the second camera and the main screen is in a screen-off state; the second moment is later than the first moment.
For example, referring to fig. 2, the electronic device with a foldable screen is an out-folding phone; when the phone is folded, its screen is divided into a main screen and a secondary screen. The first camera is a front camera disposed on the main screen, such as the camera 103 shown in fig. 2; the second camera is a rear camera disposed on the back of the main screen, such as the camera module 104 shown in fig. 2.
In one case, the image captured by the first camera is displayed on the main screen at the first moment, and after the user flips the phone, the secondary screen displays the image captured by the second camera. That is, the user takes a selfie with the front camera at the first moment and with the rear camera after flipping the phone, switching from a front selfie to a rear selfie.
In another case, the image captured by the second camera is displayed on the main screen at the first moment, and after the user flips the phone, the secondary screen displays the image captured by the second camera. That is, the user shoots a scene with the rear camera at the first moment and takes a selfie with the rear camera after flipping the phone, switching from rear shooting to a rear selfie.
In this scheme, the user can obtain a rear selfie function simply by flipping the phone: the secondary screen displays the selfie preview captured by the rear camera, which improves the user's photographing experience.
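The two-moment behavior described above can be summarized as a small display state machine. The sketch below is illustrative only and is not part of the claims; the class and method names are hypothetical:

```python
# Illustrative sketch of the claimed display switching (names are hypothetical).
class FoldablePhone:
    def __init__(self, main_camera="front"):
        # First moment: folded, main screen lit, secondary screen off.
        self.active_screen = "main"          # screen currently lit
        self.active_camera = main_camera     # "front" (first) or "rear" (second)

    def on_flipped_to_secondary(self):
        """Second moment: the secondary screen now faces the user."""
        self.active_camera = "rear"          # preview always comes from the rear camera
        self.active_screen = "secondary"     # main screen goes dark

phone = FoldablePhone(main_camera="front")   # front selfie at the first moment
phone.on_flipped_to_secondary()
assert (phone.active_screen, phone.active_camera) == ("secondary", "rear")
```

The same transition covers both cases above: whether the first moment shows the front or the rear camera, the state after the flip is always the secondary screen plus the rear camera.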
In an optional embodiment of the first aspect, the method further includes: in response to a first operation by the user, the main screen plays a first animation, where the first operation triggers the second camera to start a selfie preview and the first animation guides the user to flip the electronic device.
In one example, the first operation is a touch operation on the interface. For example, referring to fig. 3, the first operation may be the user clicking the control 3013 on the interface 301; in response, the main screen starts playing the guide animation. For another example, referring to fig. 4, the first operation may be the user clicking the icon 4011 on the interface 401; in response, the main screen starts playing the first guide animation.
In another example, the first operation is a voice control operation; for example, after the user speaks the voice command "rear selfie", the main screen starts playing the guide animation.
In an optional embodiment of the first aspect, the method further includes: in response to the user turning on a first photographing switch, the electronic device enables a first photographing function, which starts the second camera for a selfie preview after the electronic device is flipped. For example, referring to fig. 3, the first photographing switch may be the switch 3021 on the interface 302, and the first photographing function is the "flip phone for rear selfie" function.
In one example, in response to the user turning on the first photographing switch, the electronic device enables flip detection.
In another example, in response to the user turning on the first photographing switch, the electronic device enables both flip detection and touch-panel grip detection.
In the above scheme, the user can turn the first photographing function on or off through the switch.
In an optional embodiment of the first aspect, when the image captured by the first camera is displayed on the main screen at the first moment, the method further includes: when a flip of the electronic device is detected, the electronic device closes the first camera and opens the second camera; the electronic device detects whether a human face is present in the image captured by the second camera; when a human face is detected in the image captured by the second camera, the electronic device turns on the secondary screen and turns off the main screen.
In this scheme, the user takes a front selfie on the main screen at the first moment; when a flip is detected, the electronic device switches from the front camera to the rear camera, and when a human face is detected in the image captured by the rear camera, the display switches from the main screen to the secondary screen. Switching from a front selfie to a rear selfie is thus achieved by combining flip detection with rear face detection.
In an optional embodiment of the first aspect, when the image captured by the first camera is displayed on the main screen at the first moment, the method further includes: when a flip of the electronic device is detected, the electronic device closes the first camera and opens the second camera; the electronic device determines which screen the user is holding from the touch-panel detection data; when the user is holding the main screen, the electronic device turns on the secondary screen and turns off the main screen.
In this scheme, the user takes a front selfie on the main screen at the first moment; when a flip is detected, the electronic device switches from the front camera to the rear camera. If the touch-panel data indicates that the user is holding the main screen, i.e., the secondary screen faces the user, the display switches directly from the main screen to the secondary screen without face detection. Combining flip detection with touch-panel detection switches from a front selfie to a rear selfie; because no face detection is needed, the screen switches faster, which improves the user's photographing experience.
In an optional embodiment of the first aspect, the method includes: when the screen being held by the user cannot be determined, the electronic device detects whether a human face is present in the image captured by the second camera; when a human face is detected in the image captured by the second camera, the electronic device turns on the secondary screen and turns off the main screen.
In this scheme, the user takes a front selfie on the main screen at the first moment and a flip is detected; if the touch-panel data cannot determine whether the user is holding the main screen or the secondary screen, rear face detection is used as a fallback to determine whether a human face is present in the image captured by the rear camera, and the display switches to the secondary screen when a face is detected.
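This grip-first, face-detection-fallback decision can be sketched as follows. The helper names (`grip_state`, `rear_face_detected`) are assumptions for illustration; the patent does not prescribe an implementation:

```python
# Hypothetical decision flow after a flip is detected (names are illustrative).
def screen_after_flip(grip_state, rear_face_detected):
    """grip_state: "main", "secondary", or None when the touch panel is inconclusive.
    Returns which screen should be lit after the flip."""
    if grip_state == "main":
        # User holds the main screen, so the secondary screen faces the user:
        # switch immediately, skipping face detection (the fast path).
        return "secondary"
    if grip_state is None:
        # Grip inconclusive: fall back to rear-camera face detection.
        return "secondary" if rear_face_detected else "main"
    # User still holds the secondary screen: keep the main screen lit.
    return "main"

assert screen_after_flip("main", rear_face_detected=False) == "secondary"
assert screen_after_flip(None, rear_face_detected=True) == "secondary"
assert screen_after_flip(None, rear_face_detected=False) == "main"
```

The fast path explains the speed claim above: when the touch panel already resolves the orientation, the slower face-detection step is skipped entirely.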
In an alternative embodiment of the first aspect, the method further comprises: when the face of the person in the picture acquired by the second camera is not detected, the electronic equipment starts the first camera and closes the second camera.
In this scheme, if a flip is detected but the rear camera detects no face, the front camera is reopened and the rear camera closed; the screens do not switch, and the main screen keeps displaying the image captured by the front camera, i.e., the front selfie is retained. This avoids switching screens because of an accidental flip by the user.
In an optional embodiment of the first aspect, when the image captured by the second camera is displayed on the main screen at the first moment, the method further includes: when a flip of the electronic device is detected, the electronic device detects whether a human face is present in the image captured by the second camera; when a human face is detected in the image captured by the second camera, the electronic device turns on the secondary screen and turns off the main screen.
When the flip of the electronic device is detected, the second camera is kept on.
In this scheme, the user shoots a rear scene on the main screen at the first moment; when a flip is detected, the rear camera is kept on, and when a human face is detected in the image captured by the rear camera, the display switches from the main screen to the secondary screen. Switching from rear shooting to a rear selfie is thus achieved by combining flip detection with rear face detection.
In an optional embodiment of the first aspect, when the image captured by the second camera is displayed on the main screen at the first moment, the method further includes: when a flip of the electronic device is detected, the electronic device determines which screen the user is holding from the touch-panel detection data; when the user is holding the main screen, the electronic device turns on the secondary screen and turns off the main screen.
When the flip of the electronic device is detected, the second camera is kept on.
In this scheme, the user shoots a rear scene on the main screen at the first moment; when a flip is detected, the rear camera is kept on. If the touch-panel data indicates that the user is holding the main screen, i.e., the secondary screen faces the user, the display switches directly from the main screen to the secondary screen without face detection. Combining flip detection with touch-panel detection switches from rear shooting to a rear selfie; because no face detection is needed, the screen switches faster, which improves the user's photographing experience.
In an optional embodiment of the first aspect, the method includes: when the screen being held by the user cannot be determined, the electronic device detects whether a human face is present in the image captured by the second camera; when a human face is detected in the image captured by the second camera, the electronic device turns on the secondary screen and turns off the main screen.
In this scheme, the user shoots a rear scene on the main screen at the first moment and a flip is detected; if the touch-panel data cannot determine whether the user is holding the main screen or the secondary screen, rear face detection can further determine whether a human face is present in the image captured by the rear camera, and the display switches to the secondary screen when a face is detected.
In an optional embodiment of the first aspect, the electronic device turning on the secondary screen includes turning on the secondary screen and the touch panel on the secondary screen; the electronic device turning off the main screen includes turning off the main screen and the touch panel on the main screen.
In this scheme, the touch panel on the secondary screen is turned on together with the secondary screen so that it can subsequently detect the user's touch operations on the secondary screen; the touch panel on the main screen is turned off together with the main screen to reduce device power consumption.
In an optional embodiment of the first aspect, when the image captured by the first camera is displayed on the main screen at the first moment, the method further includes: when a flip of the electronic device is detected, the first camera is kept on; the electronic device detects whether a human face is present in the image captured by the first camera, together with the number of faces and the face size; if any one of the first conditions is met, the electronic device turns on the secondary screen and the second camera, and turns off the main screen and the front camera.
The first conditions include: no human face is recognized; human faces are recognized but their number exceeds a preset number (e.g., more than 1); a human face is recognized but the proportion of the image occupied by the face is smaller than a preset proportion.
In this scheme, switching from a front selfie to a rear selfie is achieved by combining flip detection with front face detection.
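The first conditions can be written as a single predicate over the front-camera detection result. The sketch below is illustrative; the thresholds and parameter names are assumptions, not values specified by the patent:

```python
# Hypothetical check of the "first conditions" on the front-camera image.
def should_switch_to_rear(faces, max_faces=1, min_face_ratio=0.1):
    """faces: list of face bounding-box areas as fractions of the image area.
    Returns True if any first condition holds, i.e., the main screen no longer
    faces the user and the device should switch to the secondary screen."""
    if not faces:                      # condition 1: no face recognized
        return True
    if len(faces) > max_faces:         # condition 2: too many faces
        return True
    if max(faces) < min_face_ratio:    # condition 3: the largest face is too small
        return True
    return False

assert should_switch_to_rear([]) is True              # nobody in front of the main screen
assert should_switch_to_rear([0.3, 0.2]) is True      # multiple faces: likely bystanders
assert should_switch_to_rear([0.02]) is True          # face too far away to be the holder
assert should_switch_to_rear([0.25]) is False         # user still faces the main screen
```

The intent behind all three conditions is the same: the front camera no longer sees the device holder at close range, so the phone has presumably been flipped.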
In a second aspect, an embodiment of the present application provides an electronic device, including:
a foldable screen, divided into a main screen and a secondary screen when in the folded state; a touch sensor, an acceleration sensor, and a gyroscope sensor; a memory and at least one processor; the foldable screen, the touch sensor, the acceleration sensor, the gyroscope sensor, and the memory are connected to the processor; the memory is configured to store computer program code comprising instructions that, when executed by the at least one processor, cause the electronic device to perform the method of any one of the first aspects.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of the first aspects.
In a fourth aspect, an embodiment of the present application provides a computer program product comprising a computer program which, when run, causes a computer to perform the method of any one of the first aspects.
In a fifth aspect, an embodiment of the present application provides a chip comprising a processor configured to invoke a computer program in a memory to perform the method of any one of the first aspects.
It should be understood that the second to fifth aspects of the present application correspond to the technical solution of the first aspect; the beneficial effects of each aspect and its corresponding possible embodiments are similar and are not repeated here.
Drawings
Fig. 1 is a schematic diagram of the physical states of an out-folding phone according to an embodiment of the present application;
Fig. 2 is a schematic diagram of the state changes of an out-folding phone being flipped according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the state changes of an out-folding phone being flipped according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the state changes of an out-folding phone being flipped according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 6 is a software structure block diagram of an electronic device according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an out-folding phone according to an embodiment of the present application;
Fig. 8 is a flowchart of a screen display method according to an embodiment of the present application;
Fig. 9 is a flowchart of a screen display method according to an embodiment of the present application;
Fig. 10 is a schematic diagram of the state changes of an out-folding phone being flipped according to an embodiment of the present application;
Fig. 11 is a flowchart of a screen display method according to an embodiment of the present application;
Fig. 12 is a flowchart of a screen display method according to an embodiment of the present application;
Fig. 13 is a flowchart of a screen display method according to an embodiment of the present application;
Fig. 14 is a flowchart of a screen display method according to an embodiment of the present application.
Detailed Description
For clarity in describing the technical solutions of the embodiments of the present application, the words "first", "second", etc. are used to distinguish between identical or similar items having substantially the same functions and effects. For example, a first subscription response and a second subscription response merely distinguish different subscription responses and imply no order of precedence. Those skilled in the art will appreciate that "first", "second", etc. limit neither quantity nor execution order, and that items so labeled are not necessarily different.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" should not be construed as preferred or more advantageous than other embodiments or designs; rather, such words are intended to present the related concepts concretely.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between objects and covers three relationships; for example, "A and/or B" may mean A alone, A and B together, or B alone, where A and B may each be singular or plural. The character "/" generally indicates an "or" relationship between the objects it connects. "At least one of" a list means any combination of the listed items, including any single item or any plural combination; for example, at least one of a, b, and c may be a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be singular or plural.
It should be noted that the user information (including but not limited to user equipment information, personal information, and face information) and data (including but not limited to face data used for analysis, storage, and presentation) referred to in the present application are authorized by the user or fully authorized by all parties; the collection, use, and processing of such data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and a corresponding operation portal is provided for the user to grant or refuse authorization.
The screen display method provided by the embodiments of the present application can be applied to an electronic device with a foldable screen. The user can unfold or fold the screen while using such a device, i.e., the screen can be opened and closed. The screen of the electronic device may include, but is not limited to: an out-folding screen, an in-folding screen, a rollable screen, a diagonally folding screen, and the like. The screen may be folded one or more times; the following embodiments take a screen that folds once as an example.
During opening and closing, the screen may pass through three physical states: a folded state, an intermediate state, and an unfolded state. Fig. 1 takes an out-folding phone as an example, showing it in the folded, intermediate, and unfolded states in turn: the screen shown at a in fig. 1 is folded, the screen shown at b in fig. 1 is in the intermediate state, and the screen shown at c in fig. 1 is unfolded. In some embodiments, the state shown at b in fig. 1 may also be called a stand state, which is not limited by the embodiments of the present application.
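One common way to distinguish the three physical states is by the hinge angle. The sketch below is a generic illustration; the thresholds are assumptions, not values from the patent:

```python
# Hypothetical hinge-angle classification (thresholds are assumptions).
def screen_state(hinge_angle_deg):
    """hinge_angle_deg: 0 = fully folded, 180 = fully unfolded."""
    if hinge_angle_deg <= 10:
        return "folded"
    if hinge_angle_deg >= 170:
        return "unfolded"
    return "intermediate"   # e.g., the "stand" state shown at b in fig. 1

assert screen_state(0) == "folded"
assert screen_state(90) == "intermediate"
assert screen_state(180) == "unfolded"
```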
The electronic device in the embodiments of the present application may be called a user equipment (UE), a terminal, etc. For example, the electronic device may be a mobile phone, a tablet (portable android device, PAD), a personal digital assistant (PDA), a handheld device with a wireless communication function, a computing device, a vehicle-mounted device, a wearable device, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in a smart home, or the like, provided it has a foldable screen; the form of the electronic device is not particularly limited in the embodiments of the present application.
In the embodiments of the present application, the foldable screen of the electronic device can display as one complete display area in the unfolded state, and the user can fold the screen along one or more fold lines. After the user folds the screen along a fold line, the foldable screen is divided into two display areas along that line; referring to c in fig. 1, the display area on the right of the fold line may be called the main screen, and the display area on the left may be called the secondary screen.
It should be noted that the main screen and the secondary screen may physically be the same screen or two separate screens; the embodiments of the present application impose no limitation. The division into main and secondary screens is merely for convenience of description and does not imply primary or secondary use.
For ease of understanding, the following embodiments take an out-folding phone as the electronic device with a foldable screen.
In some embodiments, as shown at a in fig. 2, a user holds the folded out-folding phone with the main screen 101 facing the user. The user can light up the main screen 101, start a camera application on it, shoot through the camera module 104, and switch to the camera 103 for a selfie by clicking the control 1011. Meanwhile, the secondary screen 102 faces away from the user in a screen-off state; if the user does not unfold the screen, the secondary screen 102 stays off. The screen-off state is a state in which the screen is not lit, and may also be called the off-screen or black-screen state. In this embodiment, the user can preview a selfie only through the front camera of the out-folding phone, i.e., the camera 103, and cannot preview a selfie with the rear camera. A selfie preview means that the user can view the selfie image while taking the selfie.
To make full use of the screen of the out-folding phone and improve the user's photographing experience, in some embodiments the following applies while the screen remains folded. The user previews a selfie with the front camera, and the preview is displayed on the main screen, which faces the user. When the user flips the phone so that the secondary screen faces the user, the out-folding phone detects the flip, turns on the rear camera, turns off the front camera, turns off the main-screen display, and turns on the secondary-screen display. The user can thus take a rear-camera selfie without changing the folded state of the phone, with the selfie preview displayed on the secondary screen, which improves the photographing experience.
For example, as shown at a in fig. 2, the user holds the out-folding phone with the main screen 101 facing the user; the user previews a selfie with the camera 103, the main screen 101 displays the preview, and the secondary screen 102 is off. The user then flips the phone, with b in fig. 2 showing an intermediate state of the flip. After the flip, as shown at c in fig. 2, the secondary screen 102 faces the user: the phone turns on the camera module 104, the secondary screen 102 displays the image captured by the camera module 104, and the camera 103 and the main screen 101 are turned off. The user can flip the phone in the direction shown in fig. 2 or in the opposite direction.
To realize this photographing function, the user can manually enable the "flip phone for rear selfie" function after starting the camera application, so that the phone detects in real time the state in which the user holds the phone and the phone's position and posture (e.g., whether it has been flipped), and automatically switches the camera and the screen display after detecting a flip, for example from the front camera to the rear camera and from the main-screen display to the secondary-screen display.
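Flip detection of this kind is typically built on the gyroscope: integrating the angular rate around the phone's long axis and declaring a flip once the accumulated rotation approaches a half turn. The sketch below is a generic illustration with assumed thresholds, not the patent's algorithm:

```python
# Generic flip detector: integrate gyroscope angular rate (rad/s) over time
# and report a flip once the phone has rotated roughly half a turn (~pi rad).
import math

def detect_flip(gyro_samples, dt, threshold=0.9 * math.pi):
    """gyro_samples: angular rates around the long axis, sampled every dt seconds."""
    accumulated = 0.0
    for rate in gyro_samples:
        accumulated += rate * dt
        if abs(accumulated) >= threshold:
            return True          # rotation close to 180 degrees: a flip
    return False                 # small wobble, not a flip

# 50 samples at 20 ms and ~3.5 rad/s accumulate well past a half turn.
assert detect_flip([3.5] * 50, dt=0.02) is True
# A brief wobble that reverses never reaches the threshold.
assert detect_flip([0.5] * 10 + [-0.5] * 10, dt=0.02) is False
```

In practice such a detector would fuse accelerometer and gyroscope data and debounce the result, which is why the method above pairs flip detection with grip or face detection before actually switching screens.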
For example, as shown in fig. 3, the user is taking a selfie with the front camera 3012 of the out-folding phone; the main screen displays the interface 301, which shows the selfie image and a settings control 3011 through which the user can enable the "flip phone for rear selfie" function. Specifically, in response to the user clicking the settings control 3011 on the interface 301, the main screen displays the interface 302, on which the switch 3021 for the "flip phone for rear selfie" function is shown; in response to the user clicking the switch 3021 on the interface 302, the out-folding phone enables the "flip phone for rear selfie" function. After the user clicks the return control 3022 on the interface 302, the main screen redisplays the interface 301; at this point the phone has started real-time state detection, including detecting how the user holds the phone, detecting the phone's pose, and so on.
In response to the user's click operation on the control 3013 for switching the camera on the interface 301, the main screen of the out-folded mobile phone plays an operation guide animation so that the user learns how to operate the mobile phone for rear self-timer. After the user turns over the mobile phone according to the guide animation, an interface 303 is displayed on the secondary screen; the self-timer picture acquired by the rear camera module 3012 is displayed on the interface 303, at which point the front camera 3014 is turned off and the main screen display is turned off.
In some embodiments, when the user enables the "flip phone rear self-timer" function for the first time and clicks the control for switching the camera, the guide animation is played; the guide animation may not be played during subsequent photographing.
In some embodiments, during subsequent photographing, for example when the user is previewing the self-timer picture on the interface 303 shown in fig. 3 and wants to switch back to front self-timer, in one example the user may click the control 3031 on the interface 303 and then start turning over the mobile phone; during the turning, the secondary screen may play the guide animation, and after the turn-over is detected to be complete, the secondary screen is turned off. In another example, the user may start turning over the mobile phone directly, without clicking the control 3031 on the interface 303; during the turning, the secondary screen may play the guide animation, and after the turn-over is detected to be complete, the secondary screen is turned off.
For example, as shown in fig. 4, the user is currently taking a selfie with the front camera 4012 of the out-folded mobile phone. The main screen of the out-folded mobile phone displays an interface 401, on which a self-timer picture and an icon 4011 are displayed; at this time, the rear camera module 4013 is not turned on, and the secondary screen is in a screen-off state. When the user clicks the icon 4011, the out-folded mobile phone turns on the rear camera module 4013 and turns off the front camera 4012; at this time, the interface 401 starts playing the guide animation, and the user turns over the mobile phone based on the guide animation. When the interface 401 starts playing the guide animation, the secondary screen is turned on and displays the picture acquired by the rear camera module 4013. After the user finishes turning over, the user's face is displayed in the secondary screen's interface 402. The icon 4011 in this example is used to quickly switch the camera and the screen display.
Before introducing the screen display method provided by the embodiment of the application, a hardware structure of the electronic device is first described. For example, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application, as shown in fig. 5, the electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc. It is to be understood that the structure illustrated in the present embodiment does not constitute a specific limitation on the electronic apparatus 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, a display processing unit (display process unit, DPU), and/or a neural-network processor (neural-network processing unit, NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors. In some embodiments, the electronic device 100 may also include one or more processors 110. The controller may be the nerve center and command center of the electronic device 100. The controller can generate operation control signals according to the instruction operation code and timing signals to complete control of instruction fetching and instruction execution. A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system of the electronic device 100.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others. The USB interface 130 is an interface conforming to the USB standard, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present invention is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device 100 through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier, etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN), bluetooth, global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (IR), etc. applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include GSM, GPRS, CDMA, WCDMA, TD-SCDMA, LTE, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a Beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 may implement display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
Electronic device 100 may implement shooting functionality through an ISP, one or more cameras 193, video codecs, a GPU, one or more display screens 194, an application processor, and the like.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, data files such as music, photos, videos, etc. are stored in an external memory card.
The internal memory 121 may be used to store one or more computer programs, including instructions. The processor 110 may cause the electronic device 100 to execute various functional applications, data processing, and the like by executing the above-described instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area can store an operating system; the storage area may also store one or more applications (e.g., gallery, contacts, etc.), and so forth. The storage data area may store data created during use of the electronic device 100 (e.g., photos, contacts, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. In some embodiments, the processor 110 may cause the electronic device 100 to perform various functional applications and data processing by executing instructions stored in the internal memory 121, and/or instructions stored in a memory provided in the processor 110.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A.
A receiver 170B, also referred to as an "earpiece", is used to convert the audio electrical signal into a sound signal. When the electronic device 100 is answering a call or a voice message, the voice can be heard by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mike" or "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C to input a sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which, in addition to collecting sound signals, can also implement a noise reduction function. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to implement sound signal collection, noise reduction, sound source identification, directional recording functions, and the like.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be a USB interface 130, or may be a 3.5mm open mobile electronic device platform (open mobile terminal platform, OMTP) standard interface, or may be a american cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The sensor 180 may include a pressure sensor, a gyroscope sensor 180B, a barometric sensor, a magnetic sensor, an acceleration sensor (acceleration transducer, ACC) 180E, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor 180K, an ambient light sensor, a bone conduction sensor, and the like.
The gyro sensor (gyro) 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may also be used for image stabilization during photographing.
The acceleration sensor 180E may detect the magnitude of the acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. It can also be used to recognize the posture of the electronic device, for applications such as camera switching and screen display.
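As a hedged illustration (not part of the claimed embodiment), the stationary-gravity detection performed with the acceleration sensor can be sketched as follows; the axis convention, the 0.5 m/s² tolerance, and the function name are assumptions introduced purely for this example:

```python
import math

def estimate_posture(ax: float, ay: float, az: float):
    """When the device is stationary, the accelerometer reading approximates
    the gravity vector: its magnitude is close to 9.8 m/s^2 and its dominant
    axis indicates the device posture (sign convention assumed here)."""
    g = math.sqrt(ax * ax + ay * ay + az * az)   # magnitude of measured acceleration
    stationary = abs(g - 9.8) < 0.5              # roughly gravity only -> device at rest
    facing = "screen_up" if az > 0 else "screen_down"
    return stationary, facing
```

A reading dominated by the z axis with magnitude near 9.8 m/s² would thus be classified as a stationary device lying face up or face down.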
The touch sensor 180K may also be referred to as a touch panel. The touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 form a touch screen, also referred to as a "touch-controlled screen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a position different from that of the display 194.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195 to enable contact and separation with the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1.
The software system of the electronic device may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiments of the present application, the software structure of the electronic device is illustrated by taking an Android system with a layered architecture as an example. The layered architecture divides the software system of the electronic device into several layers, each of which has a distinct role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system may be divided into four layers: an application layer (applications), an application framework layer (application framework), the Android runtime (Android runtime) and system libraries, and a kernel layer (kernel).
The application layer may include a series of application packages. The application layer includes applications such as alarm clocks, cameras, phones, videos, music, gallery, calendars, maps, navigation, bluetooth, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layers may include, for example, an input management service (input manager service, IMS), a display policy service, a power management service (power manager service, PMS), a display management service (display manager service, DMS), an activity manager, a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc. The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications. Media libraries support a variety of commonly used audio, video format playback and recording, still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc. The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer includes, for example, display drivers, camera drivers, audio drivers, sensor drivers, etc., which are not limited in any way by the embodiments of the present application.
In some embodiments, as shown in fig. 6, the hierarchy of the electronic device includes an application layer, a hardware abstraction layer (Hardware Abstraction Layer, HAL), a kernel layer, a sensor hub layer, and a hardware layer. The following description takes the out-folded mobile phone shown in fig. 1 as an example of the electronic device.
In the embodiment of the present application, the application layer includes a camera application. A face detection algorithm is preset in the camera application and is used to identify whether a human face is present in an image acquired by the front/rear camera.
In this embodiment of the present application, the hardware abstraction layer includes Motion hal (which may also be referred to as a gesture recognition module), touch panel (Touch Panel, TP) hal, screen hal, and camera hal.
In some embodiments, a flip detection algorithm is preset in Motion hal and is used to identify whether the user is flipping the mobile phone, for example from the main screen facing the user to the secondary screen facing the user. In some embodiments, other recognition algorithms may also be preset in Motion hal, for example for recognizing whether the mobile phone has been picked up, recognizing the gesture with which the user holds the mobile phone, and so on.
In some embodiments, a TP hold detection algorithm is preset in TP hal. The TP hold detection algorithm determines the state in which the user holds the mobile phone mainly by detecting the contact between the user's palm and the screens; the states include, for example, a holding state with the main screen facing the user and a holding state with the secondary screen facing the user. For example, when the user holds the folded out-folded mobile phone with the main screen facing the user, the user's palm contacts the secondary screen of the mobile phone over a large area. By acquiring the detection data of the touch panels on the main screen and the secondary screen (i.e., TP data, such as capacitance values), the contact area between the user's palm and the main/secondary screen is determined based on the detection data. If the contact area between the user's palm and the secondary screen is greater than a preset area, it is determined that the user is holding the mobile phone with the main screen facing the user; or, if the contact area between the user's palm and the main screen is greater than the preset area, it is determined that the user is holding the mobile phone with the secondary screen facing the user.
For example, fig. 7 is a schematic layout diagram of the touch panels on the out-folded mobile phone according to an embodiment of the present application. As shown in fig. 7, a touch panel 7011 is disposed on the main screen 701 of the out-folded mobile phone, and a touch panel 7021 is disposed on the secondary screen 702. When the user holds the folded out-folded mobile phone, the palm contacts the lower-middle area of the screen on the back of the mobile phone. For example, when the screen facing the user is the secondary screen 702, referring to fig. 7, the palm contacts the region 7012 of the main screen 701; for another example, when the screen facing the user is the main screen 701, referring to fig. 7, the palm contacts the region 7022 of the secondary screen 702.
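The TP hold detection described above can be sketched roughly as follows. This is only an illustration: the function and state names, the area unit, and the 800 mm² preset threshold are assumptions, not the embodiment's actual implementation:

```python
MAIN_TOWARD_USER = "main_screen_toward_user"
SECONDARY_TOWARD_USER = "secondary_screen_toward_user"
UNKNOWN = "unknown"

def detect_hold_state(main_contact_area: float,
                      secondary_contact_area: float,
                      preset_area: float = 800.0) -> str:
    """Classify the holding state from the palm-contact area (e.g., in mm^2)
    derived from each touch panel's capacitance data. A large contact on one
    panel implies that panel is on the back, so the opposite screen faces
    the user."""
    if secondary_contact_area > preset_area:
        return MAIN_TOWARD_USER        # palm covers the secondary screen
    if main_contact_area > preset_area:
        return SECONDARY_TOWARD_USER   # palm covers the main screen
    return UNKNOWN
```

An "unknown" result (neither contact area exceeding the threshold) would correspond to the phone not being held against either screen, e.g., lying on a table.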
In some embodiments, after receiving the detection data acquired by the TP driver from the touch panel 7011 and the touch panel 7021, TP hal may determine the state in which the user holds the mobile phone based on the TP hold detection algorithm.
In some embodiments, the screen hal may inform the screen driver to turn on or off the primary screen and turn on or off the secondary screen according to instructions sent by the camera application.
In some embodiments, the camera hal may notify the camera driver to turn on or off the front camera and to turn on or off the rear camera according to instructions sent by the camera application.
In the embodiment of the application, the kernel layer comprises a TP driver, a screen driver, a camera driver and the like.
In some embodiments, the TP driver is used to drive the touch panels of the mobile phone, e.g., the touch panels 7011 and 7021 of the out-folded mobile phone shown in fig. 7. The screen driver is used to drive one or more screen displays of the mobile phone, e.g., the display of the main screen 101 or the secondary screen 102 of the out-folded mobile phone shown in fig. 1. The camera driver is used to drive one or more cameras of the mobile phone, e.g., the front camera 103 or the rear camera module 104 of the out-folded mobile phone shown in fig. 1.
In some embodiments, the sensor hub is used to implement centralized control of the sensors to reduce the load on the CPU. The sensor hub corresponds to a microcontroller unit (Microcontroller Unit, MCU) that can run a program driving a plurality of sensors; that is, the sensor hub supports mounting a plurality of sensors. It can be an independent chip placed between the CPU and the various sensors, or it can be integrated in the application processor (application processor, AP) of the CPU.
In the embodiment of the present application, the sensor hub includes a Motion driver, sensor drivers, and the like; the sensor drivers include, for example, an acceleration sensor driver, a gyro sensor driver, and the like, which run on the sensor hub.
In some embodiments, the sensor hub can control the acceleration sensor through the acceleration sensor driver, and after receiving the ACC data reported by the acceleration sensor, the sensor hub can report the ACC data to Motion hal through the Motion driver. Similarly, the sensor hub can control the gyro sensor through the gyro sensor driver, and after receiving the gyro data reported by the gyro sensor, the sensor hub can report the gyro data to Motion hal through the Motion driver.
In some embodiments, according to a request sent by Motion hal to subscribe to sensor data (e.g., acceleration sensor data and gyro sensor data), the Motion driver notifies the sensor hub to report sensor data periodically, and the Motion driver may report the sensor data collected by the sensor hub to Motion hal so that Motion hal can detect, for example, whether the mobile phone has been flipped.
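The subscribe-and-report flow between Motion hal and the Motion driver might be modeled as a simple publish/subscribe loop. This is only a toy sketch under assumed interfaces; the class and method names are illustrative, not the actual driver API:

```python
class MotionDriver:
    """Toy model of the Motion driver: subscribers register once, then each
    periodic tick forwards the latest sensor-hub samples to them."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        # e.g., Motion hal registers its handler here
        self.subscribers.append(callback)

    def report(self, acc_data, gyro_data):
        # called periodically with data collected by the sensor hub
        for cb in self.subscribers:
            cb(acc_data, gyro_data)

received = []
driver = MotionDriver()
driver.subscribe(lambda acc, gyro: received.append((acc, gyro)))
driver.report((0.0, 0.0, 9.8), (0.0, 0.0, 0.0))
```

After the single `report` call, the subscriber has received one (ACC, gyro) sample pair, mirroring one periodic reporting tick.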
In the embodiment of the application, the hardware layer includes an acceleration sensor, one or more screens, one or more touch panels, and one or more cameras.
The layers in the hierarchical structure of the electronic device shown in fig. 6, and the modules or components included in each layer, do not constitute a specific limitation on the electronic device. In other embodiments, the electronic device may include more or fewer layers than shown, and each layer may include more or fewer components, which is not limited in this application.
The following describes in detail, with specific embodiments, the technical solutions of the present application and how they solve the above technical problems. The following embodiments may be implemented independently or in combination with each other, and the same or similar concepts or processes may not be described again in some embodiments.
In some embodiments, for the scene of switching from front self-timer to rear self-timer shown in fig. 2, when a flip of the mobile phone is detected, the out-folded screen mobile phone closes the front camera, starts the rear camera, closes the main screen display, starts the auxiliary screen display, and the auxiliary screen displays the picture acquired by the rear camera. In an example, the out-folded screen mobile phone collects ACC data from the acceleration sensor and gyro data from the gyro sensor, recognizes the pose of the mobile phone from the ACC data and the gyro data, and thereby determines whether the mobile phone has flipped.
In this embodiment, detecting whether the mobile phone flips triggers the switching of the front and rear cameras and the switching of the main and auxiliary screens. However, in practical applications, switching the camera and the screen based only on flip detection can be inaccurate. For example, when a user hands the mobile phone over to another person for shooting, the device also flips, but the user does not want this flip to trigger switching of the camera and the screen.
In order to solve the above inaccurate recognition, an embodiment of the present application provides a screen display method. For the scene of switching from front self-timer to rear self-timer shown in fig. 2, as shown in fig. 8, the screen display method includes: when a flip of the mobile phone is detected, closing the front camera, starting the rear camera, starting rear face detection, and identifying whether a human face is present in the picture acquired by the rear camera; if a human face is identified, starting the auxiliary screen display and closing the main screen display; or, if no human face is identified, closing the rear camera and opening the front camera.
A preset duration may be set for the rear face detection shown in fig. 8, during which the rear face detection continues; the preset duration may be set to, for example, 1 s. The out-folded screen mobile phone starts rear face detection and identifies, within the preset duration, whether a human face is present in the picture acquired by the rear camera:
In one example, a human face is identified within the preset duration; the auxiliary screen display is started and the main screen display is closed.
In one example, no human face is identified within the preset duration; the rear camera is turned off and the front camera is turned on.
In one example, before a human face is identified within the preset duration, another flip of the mobile phone is detected, indicating that the mobile phone has been flipped back to its original state; the rear camera is turned off and the front camera is turned on. In the latter two examples, no main-auxiliary screen switching occurs.
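The three outcomes above can be collapsed into one decision function over timestamped events; the event names and the return labels are illustrative assumptions:

```python
def rear_face_outcome(events, preset=1.0):
    """Decide the fig. 8 outcome from events observed after the flip.

    events: time-ordered (t, kind) pairs, kind "face" (rear camera
    identified a face) or "flip" (the phone flipped again).
    preset: the preset duration in seconds (1 s in the example above)."""
    for t, kind in events:
        if t > preset:                 # preset duration already elapsed
            break
        if kind == "flip":             # flipped back before a face was seen
            return "rear_off_front_on"
        if kind == "face":             # face identified in time
            return "aux_on_main_off"
    return "rear_off_front_on"         # no face within the preset duration
```

Only the `"aux_on_main_off"` outcome switches the main and auxiliary screens, matching the note that the latter two examples leave the screens unchanged.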
Based on the screen display method shown in fig. 8, in some embodiments, when a flip of the mobile phone is detected, the out-folded screen mobile phone starts the main screen anti-false-touch function, so that false touches on the main screen before the main screen display is closed can be avoided. In some embodiments, after the out-folded screen mobile phone identifies a human face and closes the main screen display, the main screen anti-false-touch function may be released, i.e., exited. In some embodiments, when the out-folded screen mobile phone does not identify a human face, or detects another flip of the mobile phone before a human face is identified, the rear camera is turned off and the front camera is turned on again, i.e., the out-folded screen mobile phone switches back to front self-timer, and the out-folded screen mobile phone may release the main screen anti-false-touch function.
In the embodiment shown in fig. 8, only when a flip of the mobile phone is detected and the rear camera identifies a human face does the device actually switch to the auxiliary screen display, realizing rear preview self-timer, so that erroneous switching of the camera and the screen by the device can be reduced.
In some embodiments, for the scene of switching from front self-timer to rear self-timer shown in fig. 2, as shown in fig. 9, the screen display method includes: when a flip of the mobile phone is detected, the front camera of the out-folded screen mobile phone is kept on, front face detection is started at the same time, and whether a face is present in the picture acquired by the front camera, the number of faces, the size of the faces, and the like are identified; if any one of the first conditions is met, the auxiliary screen display is started, the rear camera is started, and the main screen and the front camera are closed. In this way, flip detection and front face detection are combined, so that the accuracy of device recognition is improved and erroneous switching of the camera and the screen by the device is reduced.
Wherein the first condition includes: no human face is identified; human faces are identified, and the number of faces is greater than a preset number (e.g., greater than 1); or a human face is identified, and the proportion of the picture occupied by the face is smaller than a preset proportion.
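The three branches of the first condition can be expressed as a single predicate; the concrete thresholds (1 face, a 0.2 area proportion) are illustrative assumptions, since the application only says "preset number" and "preset proportion":

```python
def meets_first_condition(face_ratios, preset_count=1, preset_ratio=0.2):
    """face_ratios: for each face identified in the front-camera picture,
    the proportion of the picture it occupies. Returns True if any branch
    of the first condition holds."""
    if not face_ratios:                      # no human face identified
        return True
    if len(face_ratios) > preset_count:      # more faces than the preset number
        return True
    return max(face_ratios) < preset_ratio   # largest face is too small
```

A single large face means the original user is still in front of the phone, so no branch fires and no switch occurs.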
In the embodiment shown in fig. 9, when a flip of the mobile phone is detected, the front camera is kept on for front face detection, and the rear camera and the auxiliary screen are switched on only when no face is detected, the number of faces is large, or the face size is small, realizing rear preview self-timer, so that erroneous switching of the camera and the screen by the device can be reduced.
It can be understood that, before the user flips the mobile phone, the user's face occupies a large proportion of the shooting picture acquired by the front camera. After the user flips the mobile phone, in some scenes the front camera may acquire the faces of other users; generally, the proportion of the shooting picture occupied by another user's face is small, or the faces of multiple users appear in the picture, so it can be considered that the user has flipped the mobile phone. Therefore, front face detection can assist the device in identifying the user's intention and improve the accuracy of device recognition.
In some embodiments, on the basis of flip detection and front face detection, rear face detection can be combined to further improve the accuracy of device recognition.
In some embodiments, as shown in fig. 10 a, the user holds the folding mobile phone with the main screen 101 facing the user and uses the rear camera module 104 on the back of the main screen 101 to perform rear shooting; the shooting picture is a scene around the user, the main screen 101 displays the scene picture, and the auxiliary screen 102 is in the screen-off state. The user then flips the mobile phone; fig. 10 b shows an intermediate state of the flipping mobile phone. After the mobile phone is flipped, as shown in fig. 10 c, the auxiliary screen 102 faces the user, the camera module 104 is kept on, the auxiliary screen 102 displays the picture acquired by the camera module 104 with the user's face in the picture, and the main screen 101 is turned off.
For the scene of switching from rear shooting to rear self-timer shown in fig. 10, in some embodiments, as shown in fig. 11, the screen display method includes: when a flip of the mobile phone is detected, the rear camera of the out-folded screen mobile phone is kept on, rear face detection is started at the same time, and whether a human face is present in the picture acquired by the rear camera is identified; if a human face is identified, the auxiliary screen display is started and the main screen display is closed. Or, if no human face is identified, the main screen display is maintained and the auxiliary screen display is not started, i.e., the main and auxiliary screens are not switched. In this process, the front camera is not triggered to start.
In the embodiment shown in fig. 11, only when a flip of the mobile phone is detected and the rear camera identifies a human face does the device actually switch to the auxiliary screen display, realizing rear preview self-timer, so that erroneous screen switching by the device can be reduced.
The screen display methods shown in fig. 8, fig. 9 and fig. 11 all combine flip detection and face detection to determine whether to switch the front and rear cameras and the main and auxiliary screen displays. Although these methods can improve the accuracy of device recognition, steps such as powering on the rear camera and performing face detection make the switching speed of the device low, which affects the user's photographing experience.
In order to improve the switching speed of the device, an embodiment of the present application further provides a screen display method that, on the basis of the original flip detection and rear face detection, introduces touch panel (TP) grip detection to identify whether the user grips the main screen or the auxiliary screen after flipping the mobile phone. Specifically, after a flip of the mobile phone by the user is detected, the current TP grip state of the user is synchronously obtained. If the user currently grips the main screen, it indicates that the auxiliary screen faces the user at this moment; rear-camera face detection can be skipped, and the display can be directly switched to the auxiliary screen to show the self-timer picture acquired by the rear camera, improving the switching speed of the device. If the current TP detection data cannot determine the user's grip state, the original scheme can be used, and the display is switched to the auxiliary screen after the rear camera identifies a human face.
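The fast path and the fallback described above can be sketched as one decision function; the state labels are illustrative assumptions:

```python
def post_flip_action(tp_grip, rear_face_seen):
    """tp_grip: "main" (user grips the main screen, so the auxiliary screen
    faces the user), "aux" (the reverse), or None when TP data is
    inconclusive. rear_face_seen: result of the fallback rear face detection."""
    if tp_grip == "main":
        return "switch_aux_display"          # fast path: skip face detection
    if tp_grip is None:                      # fall back to the original scheme
        return "switch_aux_display" if rear_face_seen else "keep_main_display"
    return "keep_main_display"               # main screen still faces the user
```

The point of the fast path is that `rear_face_seen` is never computed when the grip already resolves the user's orientation, which is where the speedup comes from.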
For the scene of switching from front self-timer to rear self-timer shown in fig. 2, in some embodiments, as shown in fig. 12, the screen display method includes: when the out-folded screen mobile phone detects a flip, the front camera is closed, the rear camera is started, detection data of the touch panels of the out-folded screen mobile phone (such as the touch panels 7011 and 7021 shown in fig. 7) are obtained, and the state in which the user grips the mobile phone is determined from the detection data. If it is determined from the detection data that the user grips the main screen, the display is directly switched to the auxiliary screen and the main screen display is closed. Or, if it cannot be determined from the detection data whether the user grips the main screen or the auxiliary screen, rear face detection is started; if a human face is present in the picture acquired by the rear camera, the auxiliary screen display is started and the main screen display is closed; or, if no human face is present in the picture acquired by the rear camera, the rear camera is closed and the front camera is opened.
A preset duration may be set for the rear face detection shown in fig. 12, during which the rear face detection continues; the preset duration may be set to, for example, 1 s. The out-folded screen mobile phone starts rear face detection and identifies, within the preset duration, whether a human face is present in the picture acquired by the rear camera: in one example, a human face is identified within the preset duration; the auxiliary screen display is started and the main screen display is closed. In one example, no human face is identified within the preset duration; the rear camera is turned off and the front camera is turned on. In one example, before a human face is identified within the preset duration, another flip of the mobile phone is detected, indicating that the mobile phone has been flipped back to its original state; the rear camera is turned off and the front camera is turned on. In the latter two examples, no main-auxiliary screen switching occurs.
In any one of the above screen display methods, when the out-folded screen mobile phone starts the auxiliary screen display, the touch panel on the auxiliary screen needs to be started to detect subsequent touch operations of the user on the auxiliary screen. When the out-folded screen mobile phone closes the main screen display, the touch panel on the main screen is closed, thereby reducing the power consumption of the device.
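The screen/touch-panel pairing above amounts to keeping each TP's power state tied to its screen; a minimal sketch, with illustrative key names:

```python
def apply_display_switch(show_aux):
    """Return the paired power states after a display switch: each touch
    panel follows its own screen, so the panel under a dark screen is
    powered down to save energy."""
    return {
        "aux_screen": show_aux, "aux_tp": show_aux,
        "main_screen": not show_aux, "main_tp": not show_aux,
    }
```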
Fig. 13 is a schematic flow chart of a screen display method according to an embodiment of the present application. The flow of internal execution of the electronic device will be described with reference to the hierarchical structure of the electronic device shown in fig. 6. As shown in fig. 13, the screen display method includes:
S1301. A first operation of a user is received, and the camera application sends a subscription request to the Motion hal of the kernel layer.
The subscription request is used to request a subscription to detection data related to the flip event, and the detection data related to the flip event includes ACC data and TP grip detection data.
The first operation may be an operation in which the user turns on the "flip phone post self-timer" function. For example, the first operation may be a click operation by the user on the switch 3021 on the main screen display interface 302 shown in fig. 3; after the user clicks the switch 3021, the camera application sends the subscription request to the Motion hal.
After the Motion hal receives the subscription request, S1302a and S1303a are performed.
S1302a. The Motion hal subscribes to the sensor hub for ACC data of the acceleration sensor and gyro data of the gyro sensor.
In one example, the Motion hal may carry frequency information for data reporting (e.g., 100 Hz, i.e., the ACC data and gyro data are reported 100 times per second) in the message subscribing to the ACC data and gyro data, so as to instruct the sensor hub to collect the ACC data and gyro data according to the frequency information and report them to the Motion hal.
S1303a. The Motion hal subscribes to TP grip detection with the TP hal of the kernel layer.
The above-described S1302a and S1303a may be performed sequentially or simultaneously, which is not limited in this embodiment.
After the sensor hub receives the subscription to the ACC data and gyro data, it may perform:
S1305. The sensor hub notifies the acceleration sensor and the gyro sensor to turn on.
In one example, the sensor hub may carry frequency information (e.g., 100 Hz) for reporting ACC data in the message notifying the acceleration sensor to turn on, so as to instruct the acceleration sensor to collect ACC data according to the frequency information and report it to the sensor hub.
In one example, the sensor hub may carry frequency information (e.g., 100 Hz) for reporting gyro data in the message notifying the gyro sensor to turn on, so as to instruct the gyro sensor to collect gyro data according to the frequency information and report it to the sensor hub.
In some embodiments, after the sensor hub receives the subscription to the ACC data and gyro data, it may further perform:
S1302b. The sensor hub sends a notification of successful subscription to the Motion hal, to indicate that the sensor hub has subscribed to the ACC data and gyro data.
In some embodiments, after the TP hal receives the subscription to TP grip detection, it may further perform:
S1303b. The TP hal sends a notification of successful subscription to the Motion hal, to indicate that the TP hal has started TP grip detection.
In some embodiments, after the Motion hal completes subscribing to the ACC data and gyro data and subscribing to TP grip detection, it may further perform:
S1304. The Motion hal sends a notification of successful subscription to the camera application, to indicate that the Motion hal has subscribed to the detection data related to the flip event.
After the acceleration sensor and the gyro sensor are turned on, the following may be performed:
S1306a. The acceleration sensor and the gyro sensor periodically send the ACC data and the gyro data, respectively, to the Motion hal through the sensor hub.
S1306b. The Motion hal performs flip detection according to the received ACC data and gyro data to obtain a flip detection result.
The Motion hal determines whether the user has flipped the mobile phone according to the received ACC data and gyro data and a preset flip detection algorithm; the flip detection result indicates that the mobile phone has flipped or has not flipped. Specifically, the Motion hal fuses the received 100 Hz ACC data and gyro data, calculates the pose of the mobile phone in real time, and detects whether the pose change meets the flip feature, thereby determining whether the mobile phone has flipped. Whether the pose change meets the flip feature can be detected by a pre-trained model; the input of the model can be the pose-change data, and the output of the model is the flip detection result.
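As a crude stand-in for the flip-detection step (the application fuses ACC and gyro data and may use a pre-trained model; this sketch only integrates the gyro rate, and the 160-degree threshold and axis choice are assumptions):

```python
import math

def detect_flip(gyro_y_rates, dt=0.01):
    """gyro_y_rates: angular velocity (rad/s) about the axis the phone is
    flipped around, sampled at 100 Hz (dt = 0.01 s). Integrate the rate and
    report a flip once the phone has rotated roughly a half turn."""
    angle = 0.0
    for rate in gyro_y_rates:
        angle += rate * dt
        if abs(angle) >= math.radians(160):   # tolerance below a full half-turn
            return True
    return False
```

A real implementation would also check the gravity direction from the ACC data so that shaking the phone, which produces rotation in both directions, does not accumulate into a false flip.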
After the TP hal receives the subscription to TP grip detection, it may perform:
S1307a. The TP hal obtains TP data from the TP through the TP driver of the kernel layer.
S1307b. The TP hal performs TP grip detection according to the obtained TP data to obtain a TP grip detection result.
S1307c. The TP hal sends the TP grip detection result to the Motion hal.
For example, as shown in fig. 2 a, before the user flips the mobile phone, the main screen of the out-folded screen mobile phone displays the self-timer picture acquired by the front camera; at this time, the TP on the main screen is turned on and can be used to detect the user's touch operations on the main screen, for example, the user clicking the photographing button 1012 on the main screen. If the user has turned on the "flip phone post self-timer" function and the Motion hal has subscribed to TP grip detection, then when the user flips the mobile phone, as shown in fig. 2 b, the TP hal can obtain the TP data on the main screen through the TP driver and use it to detect whether the user grips the main screen. When the mobile phone is flipped from the main screen facing the user to the auxiliary screen facing the user, as shown in fig. 2 c, the TP hal can determine the TP grip detection result according to a preset TP grip detection algorithm and the TP data on the main screen. It can be understood that the contact area between the user's palm and the main screen is larger after the flip than before it; the TP hal can determine this contact area according to the TP grip detection algorithm and the TP data on the main screen, and if the contact area between the user's palm and the main screen is greater than a preset area, it can determine that the TP grip detection result is a grip state in which the auxiliary screen faces the user.
Similarly, the TP hal can determine the contact area between the user's palm and the auxiliary screen according to the TP grip detection algorithm and the TP data on the auxiliary screen, and if the contact area between the user's palm and the auxiliary screen is greater than the preset area, it can determine that the TP grip detection result is a grip state in which the main screen faces the user.
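The contact-area rule on both panels can be summarized as follows; the 400 mm² preset area is an illustrative assumption, since the application only says "preset area":

```python
def tp_grip_result(main_palm_area, aux_palm_area, preset_area=400.0):
    """Palm-contact areas (mm^2) estimated from TP data on each panel.
    A large palm area on a panel means that panel is being gripped, so
    the opposite screen faces the user."""
    if main_palm_area > preset_area:
        return "aux_screen_faces_user"    # palm wraps the main screen
    if aux_palm_area > preset_area:
        return "main_screen_faces_user"
    return "undetermined"                 # triggers the face-detection fallback
```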
The above-described S1306a to S1306b and S1307a to S1307c may be executed simultaneously; the step numbers do not represent a sequential order of execution.
In some embodiments, motion hal may perform after obtaining the rollover detection result:
and S1306c.motion hal starts a timer and waits for a preset time period. Wherein the length of the timer is a preset duration.
In a possible implementation manner, when the timer expires and the Motion hal does not receive the TP hold detection result sent by the TP hal, the method may be performed:
s428. motion hal sends a first subscription response to the camera application, the first subscription response comprising the flip detection result.
In some embodiments, if the flip detection result indicates that the mobile phone has flipped, the following steps may be performed:
S1309. The camera application starts the rear camera and closes the front camera.
S1310. The camera application starts rear face detection to detect whether a human face is present in the picture acquired by the rear camera.
After the camera application starts the rear camera, it can detect, according to a preset face detection algorithm, whether a human face is present in the picture acquired by the rear camera. If the camera application detects a human face, S1311a is performed; or, if the camera application does not detect a human face, S1311b is performed.
S1311a. The camera application switches to the auxiliary screen to display the picture acquired by the rear camera.
S1311b. The camera application closes the rear camera and opens the front camera.
In this embodiment, if the TP hal does not detect the user's grip state, it does not send a TP grip detection result to the Motion hal. After the camera application learns that the mobile phone has flipped, it can combine face detection to determine whether to switch to the auxiliary screen display for rear self-timer, reducing inaccurate flip recognition by the device.
In a possible implementation, before the timer expires, the Motion hal receives the TP grip detection result sent by the TP hal (S1307c), and may perform:
S1308b. The Motion hal sends a second subscription response to the camera application, the second subscription response including the flip detection result and the TP grip detection result.
If the flip detection result indicates that the mobile phone has flipped and the TP grip detection result indicates a grip state in which the auxiliary screen faces the user, the following may be performed:
S1311. The camera application starts the rear camera, closes the front camera, starts the auxiliary screen display, and closes the main screen display.
According to this embodiment, the camera application obtains the flip detection result and the TP grip detection result from the Motion hal. If the two detection results meet the preset requirements, for example, the mobile phone has flipped and after the flip the auxiliary screen faces the gripping user, the camera application can switch the screen quickly without face detection, thereby improving the screen-switching speed of the device.
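The timer race between the two subscription responses can be sketched as follows; the field names are illustrative assumptions:

```python
def build_subscription_response(flip_result, tp_result, timer_expired):
    """Mirror of the Motion hal logic around S1306c: the first subscription
    response carries only the flip result when the timer expired without TP
    data; the second also carries the grip result when it arrived in time."""
    if tp_result is not None and not timer_expired:
        return {"flip": flip_result, "tp_grip": tp_result}   # second response
    return {"flip": flip_result}                             # first response
```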
Based on the above embodiments, an embodiment of the present application provides a screen display method applied to an electronic device having a foldable screen, where the foldable screen can be folded outwards; when the foldable screen is in a folded state, the foldable screen is divided into a main screen and an auxiliary screen, a first camera is located on the front surface of the main screen, and a second camera is located on the back surface of the main screen. As shown in fig. 14, the method includes:
S1401. At a first moment, the foldable screen is in the folded state, the main screen faces the user, the picture acquired by the first camera or the second camera is displayed on the main screen, and the auxiliary screen is in a screen-off state.
S1402. At a second moment, the user starts to flip the electronic device, and when the electronic device is flipped so that the auxiliary screen faces the user, the auxiliary screen displays the picture acquired by the second camera, and the main screen is in a screen-off state.
Wherein the second moment is later than the first moment.
For example, referring to fig. 2, the electronic device having a foldable screen is an out-folded screen mobile phone, and when the out-folded screen mobile phone is in the folded state, its screen is divided into a main screen and an auxiliary screen. The first camera is a front camera disposed on the main screen, such as the camera 103 shown in fig. 2; the second camera is a rear camera disposed on the back of the main screen, such as the camera module 104 shown in fig. 2.
In one case, the picture acquired by the first camera is displayed on the main screen at the first moment, and after the user flips the mobile phone, the auxiliary screen displays the picture acquired by the second camera. That is, the user performs front self-timer at the first moment, and after the mobile phone is flipped, the user performs self-timer with the rear camera, realizing the switch from front self-timer to rear self-timer.
In another case, the picture acquired by the second camera is displayed on the main screen at the first moment, and after the user flips the mobile phone, the auxiliary screen displays the picture acquired by the second camera. That is, the user shoots a scene with the rear camera at the first moment, and after the mobile phone is flipped, the user performs self-timer with the rear camera, realizing the switch from rear shooting to rear self-timer.
In this embodiment, the user can realize the rear self-timer function by flipping the mobile phone, i.e., the auxiliary screen can display the self-timer picture acquired by the rear camera, which can improve the user's photographing experience.
In an alternative embodiment, the method further includes: in response to a first operation of the user, where the first operation is used to trigger the second camera to perform preview self-timer, the main screen plays a first animation, and the first animation is used to guide the user to flip the electronic device.
In one example, the first operation is an interface touch operation. For example, referring to fig. 3, the first operation may be a click operation by the user on the control 3013 on the interface 301; in response to this operation, the main screen starts playing the guide animation. For another example, referring to fig. 4, the first operation may be a click operation by the user on the icon 4011 on the interface 401; in response to this operation, the main screen starts playing the guide animation.
In another example, the first operation is a voice control operation; for example, after the user speaks a "post self-timer" voice command, the main screen starts playing the guide animation.
In an alternative embodiment, the method further includes: in response to an operation of the user turning on a first photographing switch, the electronic device starts a first photographing function, where the first photographing function is used to start the second camera for preview self-timer after the electronic device is flipped. For example, referring to fig. 3, the first photographing switch may be the switch 3021 on the interface 302 shown in fig. 3, and the first photographing function is the "flip phone post self-timer" function.
In one example, in response to the user's operation of turning on the first photographing switch, the electronic device turns on flip detection.
In one example, in response to the user's operation of turning on the first photographing switch, the electronic device turns on flip detection and touch panel grip detection.
In this embodiment, the user can control the first photographing function to be turned on or off through the switch.
In an optional embodiment, in the case where the picture acquired by the first camera is displayed on the main screen at the first moment, the method further includes: when a flip of the electronic device is detected, the electronic device closes the first camera and opens the second camera; the electronic device detects whether a human face is present in the picture acquired by the second camera; and when a human face is detected in the picture acquired by the second camera, the electronic device opens the auxiliary screen and closes the main screen.
In this embodiment, the user performs front self-timer on the main screen at the first moment. When a flip of the electronic device is detected, the electronic device switches from the front camera to the rear camera, and when a human face is detected in the picture acquired by the rear camera, the electronic device switches from the main screen display to the auxiliary screen display. By combining flip detection and rear face detection, the switch from front self-timer to rear self-timer is realized.
In an optional embodiment, in the case where the picture acquired by the first camera is displayed on the main screen at the first moment, the method further includes: when a flip of the electronic device is detected, the electronic device closes the first camera and opens the second camera; the electronic device determines the state in which the user grips the screen according to the detection data of the touch panel; and when the user grips the main screen, the electronic device starts the auxiliary screen and closes the main screen.
In this embodiment, when the user performs front self-timer on the main screen and a flip of the electronic device is detected, the electronic device switches from the front camera to the rear camera. If it is determined based on the detection data of the touch panel that the user grips the main screen, i.e., that the auxiliary screen faces the user, face detection is not required and the main screen display is directly switched to the auxiliary screen display. By combining flip detection and touch panel detection, the switch from front self-timer to rear self-timer is realized; because no face detection is needed, the screen switches faster, which can improve the user's photographing experience.
In an alternative embodiment, the method includes: when the state in which the user grips the screen cannot be determined, the electronic device detects whether a human face is present in the picture acquired by the second camera; and when a human face is detected in the picture acquired by the second camera, the electronic device opens the auxiliary screen and closes the main screen.
In this embodiment, the user performs front self-timer on the main screen at the first moment and a flip of the electronic device is detected. If it cannot be determined based on the detection data of the touch panel whether the user grips the main screen or the auxiliary screen, rear face detection is further combined to determine whether a human face is present in the picture acquired by the rear camera, so that the display is switched to the auxiliary screen when a human face is detected.
In an alternative embodiment, the method further comprises: when the face of the person in the picture acquired by the second camera is not detected, the electronic equipment starts the first camera and closes the second camera.
In this embodiment, if a flip of the electronic device is detected but the rear camera detects no face, the front camera can be turned back on and the rear camera turned off. The screens are not switched in this case, and the main screen continues to display the picture captured by the front camera; that is, the front self-portrait is retained. In this way, screen switching caused by an accidental flip can be avoided.
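Taken together, the face-detection fallback and the accidental-flip revert described above amount to a small decision rule. The sketch below is hypothetical (the patent prescribes no names or data types) and assumes the rear frame has already been run through some face detector:

```python
def resolve_by_face(rear_frame_has_face):
    """Fallback used when the grip is ambiguous: decide from rear face detection.

    Returns a (camera, display) pair; the labels are illustrative only.
    """
    if rear_frame_has_face:
        # A face in the rear camera's frame means the auxiliary screen
        # faces the user: complete the switch to a rear self-portrait.
        return ("rear", "sub")
    # No face: treat the flip as accidental, turn the front camera back on,
    # and keep the main-screen preview (no screen switch).
    return ("front", "main")
```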
In an optional embodiment, in the case where the picture captured by the second camera is displayed on the main screen at the first moment, the method further includes: when it is detected that the electronic device is flipped, the electronic device detects whether a human face is present in the picture captured by the second camera; when a human face is detected in the picture captured by the second camera, the electronic device turns on the auxiliary screen and turns off the main screen.
When it is detected that the electronic device is flipped, the second camera remains on.
In this embodiment, the user is shooting a rear scene on the main screen at the first moment. When it is detected that the electronic device is flipped, the rear camera of the electronic device remains on, and when a human face is detected in the picture captured by the rear camera, the electronic device switches from the main-screen display to the auxiliary-screen display. By combining flip detection with rear face detection, rear scene shooting is switched to a rear self-portrait.
In an optional embodiment, in the case where the picture captured by the second camera is displayed on the main screen at the first moment, the method further includes: when it is detected that the electronic device is flipped, the electronic device determines, from detection data of the touch panel, which screen the user is holding; when the user is holding the main screen, the electronic device turns on the auxiliary screen and turns off the main screen.
When it is detected that the electronic device is flipped, the second camera remains on.
In this embodiment, the user is shooting a rear scene on the main screen at the first moment. When it is detected that the electronic device is flipped, the rear camera remains on, and if the touch-panel detection data indicate that the user is holding the main screen, that is, that the auxiliary screen faces the user, no face detection is required, and the display switches directly from the main screen to the auxiliary screen. By combining flip detection with touch-panel detection, rear scene shooting is switched to a rear self-portrait; because no face detection is needed, the screens switch faster, which improves the user's photographing experience.
In an alternative embodiment, the method includes: when it cannot be determined which screen the user is holding, the electronic device detects whether a human face is present in the picture captured by the second camera; when a human face is detected in the picture captured by the second camera, the electronic device turns on the auxiliary screen and turns off the main screen.
In this embodiment, the user is shooting a rear scene on the main screen at the first moment and a flip of the electronic device is detected. If the touch-panel detection data cannot determine whether the user is holding the main screen or the auxiliary screen, the device can additionally use rear face detection to determine whether a human face is present in the picture captured by the rear camera, and switches to the auxiliary-screen display when a face is detected.
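The rear-scene embodiments above (grip check first, rear face detection as fallback, rear camera kept on throughout) can be summarized in one decision function. This is a hypothetical sketch; the `grip` labels and return values are illustrative, not taken from the patent:

```python
def on_flip_detected_rear_shot(grip, rear_frame_has_face):
    """Choose which screen displays the rear-camera preview after a flip.

    grip: "main", "sub", or "unknown" -- which screen the user is holding,
          as inferred from touch-panel data.
    rear_frame_has_face: result of face detection on the rear camera's frame,
          consulted only when the grip is ambiguous.
    The rear camera stays on in every branch; only the display may switch.
    """
    if grip == "main":
        return "sub"    # auxiliary screen faces the user: switch immediately
    if grip == "unknown" and rear_frame_has_face:
        return "sub"    # a face confirms the auxiliary screen faces the user
    return "main"       # otherwise keep the main-screen preview
```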
In an alternative embodiment, the electronic device turning on the auxiliary screen includes: the electronic device turning on the auxiliary screen and the touch panel on the auxiliary screen; and the electronic device turning off the main screen includes: the electronic device turning off the main screen and the touch panel on the main screen.
In this embodiment, the touch panel on the auxiliary screen is turned on at the same time as the auxiliary screen, so that it can subsequently detect the user's touch operations on the auxiliary screen. The touch panel on the main screen is turned off at the same time as the main screen, reducing the power consumption of the device.
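As a hypothetical sketch of this pairing of display state and touch-panel state, assuming a simple `Panel` record per screen (the class and function names are illustrative, not from the patent):

```python
class Panel:
    """Toy model of one screen: a display and its co-located touch panel."""
    def __init__(self, name):
        self.name = name
        self.display_on = False
        self.touch_on = False


def switch_display(old, new):
    # Turning a screen on also enables its touch panel, so later touch
    # operations on that screen are detected; turning a screen off also
    # disables its touch panel to reduce power consumption.
    new.display_on = new.touch_on = True
    old.display_on = old.touch_on = False


# Example: the main screen starts active, then the flip completes.
main, sub = Panel("main"), Panel("sub")
main.display_on = main.touch_on = True
switch_display(main, sub)
```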
Note that the embodiments of the present application do not limit the specific structure of the body that executes the screen display method, as long as it can run code implementing the screen display method of the embodiments of the present application and thereby perform processing according to the screen display method provided by the embodiments of the present application. For example, the executing body of the screen display method provided in the embodiments of the present application may be a functional module in an electronic device that can call and execute a program, or a processing apparatus applied in the electronic device, for example, a chip.
In the above embodiments, a "module" may be a software program, a hardware circuit, or a combination of both that implements the above functions. The hardware circuit may include an application-specific integrated circuit (ASIC), an electronic circuit, a processor (for example, a shared, dedicated, or group processor) and memory for executing one or more software or firmware programs, combinational logic circuits, and/or other suitable components that support the described functions.
Thus, the modules of the examples described in the embodiments of the present application can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
An embodiment of the present application further provides an electronic device, including: a foldable screen, which is divided into a main screen and an auxiliary screen when the foldable screen is in a folded state; a touch sensor, an acceleration sensor, and a gyroscope sensor; a memory and at least one processor, wherein the foldable screen, the touch sensor, the acceleration sensor, the gyroscope sensor, and the memory are connected to the processor; the memory is configured to store computer program code, the computer program code comprising instructions. When the at least one processor executes the instructions, the electronic device is caused to perform the technical solution of any of the foregoing embodiments; the implementation principle and technical effects are similar and are not repeated here.
The memory may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory may be stand alone and be coupled to the processor via a communication line. The memory may also be integrated with the processor.
The processor may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the solutions of the present application.
An embodiment of the present application provides a computer-readable storage medium storing a computer program; when the computer program runs on an electronic device, the electronic device performs the technical solution of any of the foregoing embodiments. The implementation principle and technical effects are similar to those of the related embodiments and are not repeated here.
An embodiment of the present application provides a chip including a processor, where the processor is configured to call a computer program in a memory to perform the technical solution of any of the foregoing embodiments. The implementation principle and technical effects are similar to those of the related embodiments and are not repeated here.
An embodiment of the present application provides a computer program product; when the computer program product runs on an electronic device, the electronic device is caused to perform the technical solution of any of the foregoing embodiments. The implementation principle and technical effects are similar to those of the related embodiments and are not repeated here.
The foregoing detailed description is provided for purposes of illustration and description only and is not intended to limit the scope of the present application.

Claims (13)

1. A screen display method, applied to an electronic device having a foldable screen capable of being folded outward, wherein, when the foldable screen is in a folded state, the foldable screen is divided into a main screen and an auxiliary screen, a first camera is located on a front surface of the main screen, and a second camera is located on a rear surface of the main screen, the method comprising:
at a first moment, the foldable screen is in the folded state, the main screen faces a user, the main screen displays a picture captured by the first camera or the second camera, and the auxiliary screen is in a screen-off state; and
at a second moment, the user begins to flip the electronic device, and when the electronic device has been flipped so that the auxiliary screen faces the user, the auxiliary screen displays the picture captured by the second camera and the main screen is in a screen-off state; wherein the second moment is later than the first moment.
2. The method according to claim 1, wherein the method further comprises:
in response to a first operation of the user, the main screen plays a first animation, wherein the first operation is used to trigger the second camera to perform a self-portrait preview, and the first animation is used to guide the user to flip the electronic device.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
in response to an operation of the user turning on a first photographing switch, the electronic device enables a first photographing function; wherein the first photographing function is used to turn on the second camera for a self-portrait preview after the electronic device is flipped.
4. The method according to any one of claims 1 to 3, wherein, in the case where the picture captured by the first camera is displayed on the main screen at the first moment, the method further comprises:
when it is detected that the electronic device is flipped, the electronic device turns off the first camera and turns on the second camera;
the electronic device detects whether a human face is present in the picture captured by the second camera; and
when a human face is detected in the picture captured by the second camera, the electronic device turns on the auxiliary screen and turns off the main screen.
5. The method according to any one of claims 1 to 3, wherein, in the case where the picture captured by the first camera is displayed on the main screen at the first moment, the method further comprises:
when it is detected that the electronic device is flipped, the electronic device turns off the first camera and turns on the second camera;
the electronic device determines, from detection data of a touch panel, which screen the user is holding; and
when the user is holding the main screen, the electronic device turns on the auxiliary screen and turns off the main screen.
6. The method according to claim 5, wherein the method comprises:
when it cannot be determined which screen the user is holding, the electronic device detects whether a human face is present in the picture captured by the second camera; and
when a human face is detected in the picture captured by the second camera, the electronic device turns on the auxiliary screen and turns off the main screen.
7. The method according to claim 4 or 6, wherein the method further comprises:
when no human face is detected in the picture captured by the second camera, the electronic device turns on the first camera and turns off the second camera.
8. The method according to any one of claims 1 to 3, wherein, in the case where the picture captured by the second camera is displayed on the main screen at the first moment, the method further comprises:
when it is detected that the electronic device is flipped, the electronic device detects whether a human face is present in the picture captured by the second camera; and
when a human face is detected in the picture captured by the second camera, the electronic device turns on the auxiliary screen and turns off the main screen.
9. The method according to any one of claims 1 to 3, wherein, in the case where the picture captured by the second camera is displayed on the main screen at the first moment, the method further comprises:
when it is detected that the electronic device is flipped, the electronic device determines, from detection data of a touch panel, which screen the user is holding; and
when the user is holding the main screen, the electronic device turns on the auxiliary screen and turns off the main screen.
10. The method according to claim 9, wherein the method comprises:
when it cannot be determined which screen the user is holding, the electronic device detects whether a human face is present in the picture captured by the second camera; and
when a human face is detected in the picture captured by the second camera, the electronic device turns on the auxiliary screen and turns off the main screen.
11. The method according to any one of claims 4 to 10, wherein:
the electronic device turning on the auxiliary screen comprises: the electronic device turning on the auxiliary screen and a touch panel on the auxiliary screen; and
the electronic device turning off the main screen comprises: the electronic device turning off the main screen and a touch panel on the main screen.
12. An electronic device, comprising:
a foldable screen, divided into a main screen and an auxiliary screen when the foldable screen is in a folded state;
a touch sensor, an acceleration sensor, and a gyro sensor;
a memory and at least one processor;
the foldable screen, the touch sensor, the acceleration sensor, the gyroscope sensor and the memory are connected with the processor;
the memory is for storing computer program code, the computer program code comprising instructions; the instructions, when executed by the at least one processor, cause the electronic device to perform the method of any one of claims 1 to 11.
13. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 11.
CN202310750090.5A 2023-06-21 2023-06-21 Screen display method, device and storage medium Pending CN117714568A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310750090.5A CN117714568A (en) 2023-06-21 2023-06-21 Screen display method, device and storage medium

Publications (1)

Publication Number Publication Date
CN117714568A true CN117714568A (en) 2024-03-15

Family

ID=90152194

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019227281A1 (en) * 2018-05-28 2019-12-05 华为技术有限公司 Capture method and electronic device
US20200177714A1 (en) * 2018-12-04 2020-06-04 Samsung Electronics Co., Ltd. Electronic device for performing operation based on status information thereof and operating method thereof
WO2020155876A1 (en) * 2019-01-31 2020-08-06 华为技术有限公司 Screen display control method and electronic device
WO2021006388A1 (en) * 2019-07-10 2021-01-14 엘지전자 주식회사 Mobile terminal and electronic device including mobile terminal
CN114257670A (en) * 2022-02-28 2022-03-29 荣耀终端有限公司 Display method of electronic equipment with folding screen
KR20220055230A (en) * 2020-10-26 2022-05-03 삼성전자주식회사 Method for Taking pictures using a plurality of Cameras and Device thereof


Similar Documents

Publication Publication Date Title
CN114679537B (en) Shooting method and terminal
US11785329B2 (en) Camera switching method for terminal, and terminal
CN109981839B9 (en) Display method of electronic equipment with flexible screen and electronic equipment
CN110381282B (en) Video call display method applied to electronic equipment and related device
US11669242B2 (en) Screenshot method and electronic device
CN110536004B (en) Method for applying multiple sensors to electronic equipment with flexible screen and electronic equipment
CN111443884A (en) Screen projection method and device and electronic equipment
CN110602315B (en) Electronic device with foldable screen, display method and computer-readable storage medium
CN111183632A (en) Image capturing method and electronic device
CN112887583B (en) Shooting method and electronic equipment
CN112671976B (en) Control method and device of electronic equipment, electronic equipment and storage medium
WO2020029306A1 (en) Image capture method and electronic device
CN114650363A (en) Image display method and electronic equipment
CN113935898A (en) Image processing method, system, electronic device and computer readable storage medium
CN113535284A (en) Full-screen display method and device and electronic equipment
CN115589051B (en) Charging method and terminal equipment
CN110784592A (en) Biological identification method and electronic equipment
CN115967851A (en) Quick photographing method, electronic device and computer readable storage medium
CN115147451A (en) Target tracking method and device thereof
CN114500901A (en) Double-scene video recording method and device and electronic equipment
CN112584037B (en) Method for saving image and electronic equipment
CN114222020B (en) Position relation identification method and device and readable storage medium
WO2023207667A1 (en) Display method, vehicle, and electronic device
CN116152814A (en) Image recognition method and related equipment
CN115641867A (en) Voice processing method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination