CN114257671B - Image display method and electronic equipment - Google Patents

Info

Publication number
CN114257671B
CN114257671B (application CN202210185763.2A)
Authority
CN
China
Prior art keywords
screen
camera
layer
angle
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210185763.2A
Other languages
Chinese (zh)
Other versions
CN114257671A (en)
Inventor
冯帅 (Feng Shuai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202210185763.2A priority Critical patent/CN114257671B/en
Priority to CN202210806906.7A priority patent/CN116723257A/en
Publication of CN114257671A publication Critical patent/CN114257671A/en
Application granted granted Critical
Publication of CN114257671B publication Critical patent/CN114257671B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H04M1/0202 Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026 Details of the structure or mounting of specific components
    • H04M1/0266 Details of the structure or mounting of specific components for a display module assembly
    • H04M1/0268 Details of the structure or mounting of specific components for a display module assembly including a flexible display panel
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/4401 Bootstrapping
    • G06F9/4411 Configuring for operating with peripheral devices; Loading of device drivers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/545 Interprogram communication where tasks reside in different layers, e.g. user- and kernel-space
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H04M1/0202 Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026 Details of the structure or mounting of specific components
    • H04M1/0264 Details of the structure or mounting of specific components for a camera module assembly
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Environmental & Geological Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

An image display method and an electronic device relate to the field of terminal technologies and can improve the efficiency with which images captured by a target camera are sent for display while the foldable screen of the electronic device is switched. The method is applied to an electronic device having a foldable screen. The electronic device includes an inner screen and an outer screen, the inner screen includes a first screen and a second screen, and the electronic device includes a first camera disposed below the inner screen and a second camera disposed below the outer screen. The electronic device further includes an application framework layer and a first layer located between the application framework layer and a hardware unit of the electronic device. The method includes: the first layer determines an included angle between the first screen and the second screen; and, when the included angle between the first screen and the second screen reaches a preset angle range, the first layer triggers images captured by a target camera among the first camera and the second camera to be sent for display.

Description

Image display method and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to an image display method and an electronic device.
Background
With the continuous development of electronic devices and display screens and the growing demands of daily life, electronic devices with foldable display screens have emerged. At present, the camera function is a common feature of electronic devices: a user can make video calls, take photographs, or record videos with them. On a device with a foldable screen, the inner-screen and outer-screen cameras are switched dynamically as the screen is folded and unfolded. For example, when the inner screen of the device displays the image preview, the device captures and previews images with the inner-screen front-facing camera; when the outer screen displays the preview, the device uses the outer-screen front-facing camera. However, while the device switches between the inner screen and the outer screen, the switching rate of the two cameras is low, so the switching process is not smooth and the displayed pictures are not continuous.
Disclosure of Invention
Embodiments of the present application provide an image display method and an electronic device. The image display method is applied to an electronic device with a foldable screen that includes an inner screen and an outer screen, the inner screen including a first screen and a second screen. When a hardware abstraction layer or a kernel layer of the electronic device determines that the included angle between the first screen and the second screen reaches a preset angle range, that layer directly triggers the image captured by the target camera to be sent for display. This reduces cross-service/cross-component interaction, saves decision time, and improves the switching efficiency of the inner-screen or outer-screen front camera. In addition, when the hardware abstraction layer or the kernel layer determines, during the switching preparation stage, that the included angle between the first screen and the second screen meets an angle threshold, it controls the target camera to perform camera initialization operations (such as powering on and configuring the register sequence), which shortens the time the camera needs to send images for display, improves camera switching efficiency, and improves user experience.
To achieve the above objectives, the following technical solutions are adopted in the present application:
In a first aspect, an embodiment of the present application provides an image display method applied to an electronic device with a foldable screen. The electronic device includes an inner screen and an outer screen, the inner screen includes a first screen and a second screen, the electronic device includes a first camera disposed below the inner screen and a second camera disposed below the outer screen, and the electronic device further includes an application framework layer and a first layer located between the application framework layer and a hardware unit of the electronic device. The method includes: the first layer determines an included angle between the first screen and the second screen; and, when the included angle between the first screen and the second screen reaches a preset angle range, the first layer triggers images captured by a target camera among the first camera and the second camera to be sent for display.
The first layer may include a hardware abstraction layer and/or a kernel layer. Assume the first layer is the hardware abstraction layer: the hardware abstraction layer obtains the sensor information sent by the kernel layer, determines the included angle between the first screen and the second screen, and, when it determines that the included angle reaches a certain preset angle range, triggers the images captured by the target camera among the first camera and the second camera to be sent for display. Assume instead that the first layer is the kernel layer: the kernel layer determines the included angle between the first screen and the second screen from the sensor information and, when it determines that the included angle reaches a certain preset angle range, triggers the images captured by the target camera among the first camera and the second camera to be sent for display.
The first camera may be a front camera of the inner screen, and the second camera may be a front camera of the outer screen.
In the embodiments of the present application, when the first layer determines that the included angle between the first screen and the second screen is within the preset angle range, the first layer, which sits below the application framework layer, directly triggers the image captured by the target camera to be sent for display, reducing cross-service/cross-component interaction and saving decision time.
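The first-layer decision described above can be sketched as follows. This is a minimal illustration only: the function name and all threshold values are hypothetical and not specified by the patent.

```python
# Hypothetical sketch of the first-layer (HAL or kernel) switch decision.
# Threshold values are illustrative only; the patent leaves them unspecified.

FOLD_DISPLAY_ANGLE = 75.0    # below this, show the outer-screen camera feed
UNFOLD_DISPLAY_ANGLE = 105.0 # above this, show the inner-screen camera feed

def select_target_camera(hinge_angle_deg, folding):
    """Return which camera's frames the first layer should send for display.

    hinge_angle_deg: current included angle between the first and second screens.
    folding: True if the screen is moving from unfolded to folded.
    Returns "second" (outer-screen camera), "first" (inner-screen camera),
    or None if no switch should be triggered yet.
    """
    if folding and hinge_angle_deg < FOLD_DISPLAY_ANGLE:
        return "second"   # outer screen presents; use outer front camera
    if not folding and hinge_angle_deg > UNFOLD_DISPLAY_ANGLE:
        return "first"    # inner screen presents; use inner front camera
    return None           # angle not yet within the preset range
```

Because the check runs in the first layer itself, no round trip through the application framework layer is needed before the target camera's frames are sent for display.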
In a possible implementation, before the first layer triggers the display of the images captured by the target camera among the first camera and the second camera, the method further includes: the first layer obtains a screen state identifier of the electronic device; and the first layer determines, according to the screen state identifier, a target screen for presenting the image from among the inner screen and the outer screen. In this implementation, triggering the display when the included angle between the first screen and the second screen reaches the preset angle range includes: when the included angle between the first screen and the second screen reaches the preset angle range, the first layer triggers the images captured by the target camera among the first camera and the second camera to be presented on the target screen.
The screen state identifier indicates that the electronic device is currently switching from the inner screen to the outer screen, or from the outer screen to the inner screen.
In the embodiments of the present application, the electronic device determines, through the screen state identifier, the target screen for presenting the image, and presents the image captured by the target camera on the target screen when the included angle between the first screen and the second screen meets the angle requirement. This avoids, as far as possible, a mismatch between the screen on which the camera image is displayed and the target screen, and can improve user experience to a certain extent.
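The identifier-to-screen mapping described above can be sketched as follows. The concrete identifier values are hypothetical; the patent only distinguishes a "first identifier" marking outer-screen display from a "second identifier" marking inner-screen display.

```python
# Hypothetical mapping from the screen-state identifier to the target screen.
# Identifier values are illustrative only.

FIRST_ID = 1   # first identifier: outer screen presents the image
SECOND_ID = 2  # second identifier: inner screen presents the image

def target_screen(screen_state_id):
    """Resolve which screen should present the target camera's image."""
    if screen_state_id == FIRST_ID:
        return "outer"
    if screen_state_id == SECOND_ID:
        return "inner"
    raise ValueError("unknown screen state identifier")
```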
In one embodiment, after the first layer obtains the screen state identifier of the electronic device, if the first layer determines from the identifier that the electronic device is switching from the inner screen to the outer screen, the first layer triggers the image captured by the target camera to be presented on the outer screen when the included angle between the first screen and the second screen reaches the preset angle range. If the first layer determines from the identifier that the electronic device is switching from the outer screen to the inner screen, the first layer triggers the image captured by the target camera to be presented on the inner screen when the included angle reaches the preset angle range.
In another possible implementation, while the foldable screen is switching from the unfolded state to the folded state, triggering the image captured by the target camera among the first camera and the second camera to be presented on the target screen when the included angle reaches the preset angle range includes: when the included angle between the first screen and the second screen is smaller than a first preset angle and the image is presented by the outer screen, the first layer triggers the image captured by the second camera to be presented on the outer screen.
It can be understood that, while the foldable screen switches from the unfolded state to the folded state, the first layer determines from the included angle between the first screen and the second screen that the electronic device has switched from the inner screen to the outer screen before triggering the image captured by the second camera to be presented on the outer screen. This avoids the problem that, when the first layer triggers the second camera's image to be sent for display, the outer screen has not yet been lit and the image cannot be displayed in time.
In another possible implementation, while the foldable screen is switching from the folded state to the unfolded state, triggering the image captured by the target camera among the first camera and the second camera to be presented on the target screen when the included angle reaches the preset angle range includes: when the included angle between the first screen and the second screen is larger than a second preset angle and the image is presented by the inner screen, the first layer triggers the image captured by the first camera to be presented on the inner screen.
While the foldable screen switches from the folded state to the unfolded state, the first layer determines from the included angle between the first screen and the second screen that the electronic device has switched from the outer screen to the inner screen before the image captured by the first camera is presented on the inner screen. This avoids the problem that, when the first layer triggers the first camera's image to be sent for display, the inner screen has not yet been lit and the image cannot be displayed in time.
In another possible implementation, before the first layer triggers the image captured by the second camera to be presented on the outer screen, the image display method may further include: when the first layer determines that the included angle between the first screen and the second screen is smaller than a third preset angle, the first layer triggers the second camera to perform camera initialization operations, where the third preset angle is larger than the first preset angle.
It can be understood that, while the foldable screen switches from the unfolded state to the folded state, the included angle between the first screen and the second screen gradually decreases. When the first layer determines that the included angle is smaller than the third preset angle, the first layer triggers the second camera to perform camera initialization operations, for example, powering on and configuring the register sequence. When the first layer later determines that the included angle is smaller than the first preset angle, the first layer can directly trigger the image captured by the second camera to be presented on the outer screen. In this way, the first layer sends for display the image captured by an already-initialized second camera, improving the efficiency with which the second camera's image is displayed.
In another possible implementation, before the first layer triggers the image captured by the first camera to be presented on the inner screen, the method further includes: when the first layer determines that the included angle between the first screen and the second screen is larger than a fourth preset angle, the first layer triggers the first camera to perform camera initialization operations, where the fourth preset angle is smaller than the second preset angle.
It can be understood that, while the foldable screen switches from the folded state to the unfolded state, the included angle between the first screen and the second screen gradually increases. When the first layer determines that the included angle is larger than the fourth preset angle, the first layer triggers the first camera to perform camera initialization operations, for example, powering on and configuring the register sequence. When the first layer later determines that the included angle is larger than the second preset angle, the first layer can directly trigger the image captured by the first camera to be presented on the inner screen. In this way, the first layer sends for display the image captured by an already-initialized first camera, improving the efficiency with which the first camera's image is displayed.
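The two-stage thresholds described in these implementations (early camera initialization, later display) can be sketched as follows. All numeric values are hypothetical; the patent only requires the third preset angle to exceed the first, and the fourth preset angle to lie below the second.

```python
# Hypothetical two-stage thresholds: the target camera is initialized early
# (power-on, register configuration) before the display threshold is reached,
# so its frames can be sent for display immediately at switch time.

FIRST_PRESET = 75.0    # fold: display on outer screen below this
THIRD_PRESET = 100.0   # fold: start initializing the second camera below this
SECOND_PRESET = 105.0  # unfold: display on inner screen above this
FOURTH_PRESET = 80.0   # unfold: start initializing the first camera above this

def first_layer_actions(angle, folding):
    """Return the ordered actions the first layer takes at this angle."""
    actions = []
    if folding:
        if angle < THIRD_PRESET:
            actions.append("init_second_camera")    # power on, configure registers
        if angle < FIRST_PRESET:
            actions.append("display_second_camera") # camera is already warm
    else:
        if angle > FOURTH_PRESET:
            actions.append("init_first_camera")
        if angle > SECOND_PRESET:
            actions.append("display_first_camera")
    return actions
```

Because initialization begins at the wider angle, the camera has already powered on by the time the display threshold is crossed, which is the efficiency gain the patent describes.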
In another possible implementation, before the first layer triggers the image captured by the second camera to be presented on the outer screen, the image display method may further include: the application framework layer obtains the included angle between the first screen and the second screen; when the included angle is smaller than a fifth preset angle, the application framework layer triggers the outer screen to perform a screen initialization operation; and when the included angle is smaller than a sixth preset angle, the application framework layer triggers the electronic device to switch from the inner screen to the outer screen and sets the screen state identifier of the electronic device to a first identifier, where the first identifier indicates that the outer screen presents the display image, and the sixth preset angle is smaller than the fifth preset angle.
The screen initialization operation includes operations such as powering on the screen and adjusting display parameters.
In the embodiments of the present application, while the foldable screen switches from the unfolded state to the folded state, the application framework layer can trigger the outer screen to perform the screen initialization operation, according to the included angle between the first screen and the second screen, before the outer screen presents the image. When the application framework layer then determines that the included angle is smaller than the sixth preset angle, it directly triggers the outer screen to light up; since initialization has already been performed, no initialization delay occurs at that moment, improving screen switching efficiency.
In addition, the application framework layer marks the screen state according to the state of the foldable screen: for example, when it determines that the image is presented by the outer screen, it sets the screen state identifier to the first identifier, and when it determines that the image is presented by the inner screen, it sets the screen state identifier to the second identifier. Therefore, when the first layer triggers the image captured by the target camera to be presented on the inner screen or the outer screen, the first layer can obtain the screen state identifier from the application framework layer and determine, according to the identifier, the target screen for presenting the image.
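The framework-layer behavior just described (screen pre-initialization, then lighting the outer screen and marking the state) can be sketched as follows; the function name, state keys, identifier values, and angles are all hypothetical, and the patent only requires the sixth preset angle to be smaller than the fifth.

```python
# Hypothetical application-framework-layer handling during folding.
# Angle values are illustrative only.

FIFTH_PRESET = 110.0  # below this, pre-initialize the outer screen
SIXTH_PRESET = 85.0   # below this, light the outer screen and mark the state

def framework_on_fold(angle, state):
    """Update framework-layer state while the screen folds.

    state is a dict with keys 'outer_initialized' and 'screen_state_id'.
    """
    if angle < FIFTH_PRESET and not state["outer_initialized"]:
        state["outer_initialized"] = True       # power on, set display parameters
    if angle < SIXTH_PRESET:
        state["screen_state_id"] = "first"      # first identifier: outer screen shows
    return state
```

The first layer would later read `screen_state_id` from this state to pick the target screen, without recomputing the switch decision itself.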
In another possible implementation, before the first layer triggers the image captured by the first camera to be presented on the inner screen, the image display method may further include: the application framework layer obtains the included angle between the first screen and the second screen; when the included angle is larger than a seventh preset angle, the application framework layer triggers the inner screen to perform a screen initialization operation; and when the included angle is larger than an eighth preset angle, the application framework layer triggers the electronic device to switch from the outer screen to the inner screen and sets the screen state identifier of the electronic device to a second identifier, where the second identifier indicates that the inner screen presents the display image, and the eighth preset angle is larger than the seventh preset angle.
In another possible implementation, the first layer is a hardware abstraction layer, and the electronic device further includes a kernel layer located between the hardware abstraction layer and a hardware unit of the electronic device. The first layer determining the included angle between the first screen and the second screen includes: the hardware abstraction layer obtains, through the kernel layer, information collected by a sensor, where the sensor includes at least one of an angle sensor, a gyroscope sensor, and an acceleration sensor; and the hardware abstraction layer determines the included angle between the first screen and the second screen according to the information collected by the sensor.
In the embodiments of the present application, after the sensor driver in the kernel layer obtains the information collected by the sensor, it sends that information to the hardware abstraction layer, and the hardware abstraction layer determines the included angle between the first screen and the second screen. In the prior art, by contrast, the sensor information is uploaded by the kernel layer through the hardware abstraction layer to the application framework layer, which then determines the included angle, and this path suffers from cross-component information transmission. Determining the angle in the hardware abstraction layer avoids that problem.
In another possible implementation, the first layer is a kernel layer, and the first layer determining the included angle between the first screen and the second screen includes: the kernel layer obtains information collected by a sensor, where the sensor includes at least one of an angle sensor, a gyroscope sensor, and an acceleration sensor; and the kernel layer determines the included angle between the first screen and the second screen according to the information collected by the sensor.
In the embodiments of the present application, after the sensor driver in the kernel layer obtains the information collected by the sensor, the kernel layer directly determines the included angle between the first screen and the second screen from that information. The kernel layer therefore does not need to interact with upper layers and can compute the included angle directly, improving computation efficiency.
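One plausible way the included angle could be derived from acceleration-sensor readings is sketched below. This is hypothetical: the patent does not prescribe a formula. The sketch assumes one accelerometer per screen half and a near-stationary device, so each accelerometer measures gravity, and the angle between the two gravity vectors tracks the hinge state; the mapping from this measured angle to the screens' included angle depends on how the sensors are mounted.

```python
# Hypothetical hinge-angle computation from two accelerometer readings,
# one mounted on each half of the folding screen.
import math

def hinge_angle(accel_a, accel_b):
    """Angle in degrees between the gravity vectors measured on screens A and B."""
    dot = sum(a * b for a, b in zip(accel_a, accel_b))
    norm_a = math.sqrt(sum(a * a for a in accel_a))
    norm_b = math.sqrt(sum(b * b for b in accel_b))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (norm_a * norm_b)))
    return math.degrees(math.acos(cos_theta))
```

For example, identical readings on both halves (device lying flat open) yield 0 degrees between the vectors, while perpendicular readings yield 90 degrees; gyroscope data could additionally be fused in to keep the estimate stable while the device moves.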
In another possible implementation, the inner screen displays the image captured by the first camera when the electronic device is in a video call scene and the foldable screen is in the unfolded state; or the outer screen displays the image captured by the second camera when the foldable screen is in the folded state.
In a second aspect, the present application provides an electronic device with a foldable screen. The electronic device includes an inner screen and an outer screen, the inner screen includes a first screen and a second screen, the electronic device includes a first camera disposed below the inner screen and a second camera disposed below the outer screen, and the electronic device further includes an application framework layer and a first layer located between the application framework layer and a hardware unit of the electronic device. After the first layer determines the included angle between the first screen and the second screen, and when the included angle reaches the preset angle range, the first layer triggers the images captured by the target camera among the first camera and the second camera to be sent for display.
In a third aspect, the present application provides an electronic device, comprising: the foldable screen comprises at least two screens; one or more processors; a memory; wherein one or more computer programs are stored in the memory, the one or more computer programs comprising instructions which, when executed by the electronic device, cause the electronic device to perform the image display method as defined in any of the above first aspects.
In a fourth aspect, the present application provides a computer-readable storage medium having instructions stored therein, which when run on an electronic device, cause the electronic device to perform the image display method according to any one of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the image display method according to any of the first aspect.
It is to be understood that the electronic devices according to the second and third aspects, the computer storage medium according to the fourth aspect, and the computer program product according to the fifth aspect are all configured to execute the corresponding methods provided above. For their beneficial effects, reference may be made to the beneficial effects of the corresponding methods, which are not repeated here.
Drawings
Fig. 1 is a schematic view of a folding screen mobile phone provided in an embodiment of the present application when folded;
fig. 2 is a schematic view of a foldable screen mobile phone according to an embodiment of the present disclosure when unfolded;
fig. 3 is a first flowchart illustrating an image display method according to an embodiment of the present disclosure;
fig. 4 is a schematic view of an application scenario of an image display method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a principle of calculating an included angle between a screen a and a screen B according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an example of a geographic coordinate system provided by an embodiment of the present application;
fig. 8 is another schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 9 is a flowchart illustrating a second image display method according to an embodiment of the present disclosure;
fig. 10 is a third schematic flowchart of an image display method according to an embodiment of the present application;
fig. 11 is a schematic view of an application scenario of an image display method according to an embodiment of the present application;
fig. 12 is a schematic flowchart of a fourth image display method according to an embodiment of the present disclosure;
fig. 13 is a schematic view of an application scenario of an image display method according to an embodiment of the present application;
fig. 14 is a fifth flowchart illustrating an image display method according to an embodiment of the present application;
fig. 15 is a schematic flowchart illustrating a sixth image display method according to an embodiment of the present application;
fig. 16 is a seventh flowchart illustrating an image display method according to an embodiment of the present disclosure;
fig. 17 is a schematic view of an application scenario of an image display method according to an embodiment of the present application;
fig. 18 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments of the present application, "/" indicates an "or" relationship; for example, A/B may indicate A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
The electronic device in the embodiments of the present application includes a foldable screen. The foldable screen can be unfolded or folded along a folding axis and presents different display areas in different states. Taking a folding-screen mobile phone as an example, the following describes how the electronic device presents different display areas in different states.
Fig. 1 is a schematic view of a folding-screen mobile phone provided in an embodiment of the present application when folded. As shown in fig. 1 (a), when the folding-screen mobile phone is in the fully folded state and the outer-screen front camera 1 is turned on, the image captured by the outer-screen front camera 1 is previewed on the outer screen 11. As shown in fig. 1 (b), when the folding-screen mobile phone is in the folded state and the outer-screen rear camera 2 is turned on, the image captured by the outer-screen rear camera 2 is previewed on the outer screen 11.
Fig. 2 is a schematic view of a folding-screen mobile phone provided in an embodiment of the present application when unfolded. The unfolding process is shown in fig. 2 (a) to 2 (b). As shown in fig. 2 (b), when the folding-screen mobile phone is in the unfolded state and the inner-screen front camera 3 is turned on, the image captured by the inner-screen front camera 3 may be previewed on the large screen composed of the first screen 21 and the second screen 22, or on the first screen 21 or the second screen 22 alone, which is not limited herein. When the folding-screen mobile phone in fig. 2 is fully unfolded, the screen angle θ may be 180° or approximately 180°, enabling large-screen display, which provides richer information and a better use experience for the user. The screen angle θ is the included angle between the first screen 21 and the second screen 22 (or between the planes in which they lie). As shown in fig. 2 (c), viewed from the outside of the folding-screen mobile phone, the foldable screen presents an outer screen 23. When the folding-screen mobile phone is in the unfolded state and the outer-screen front camera 1 is turned on, the image captured by the outer-screen front camera 1 is previewed on the outer screen 23. The outer screen 23 may be disposed on the back of the first screen 21 or of the second screen 22, which is not limited herein.
It should be noted that the included angle θ between the first screen 21 and the second screen 22 may range over [0°, 180°]. In the embodiments of the present application, if θ is within [0°, X], it may be determined that the folding-screen mobile phone is in the folded state; if θ is within (X, 180°], it may be determined that the folding-screen mobile phone is in the unfolded state. Here, X is a preset angle threshold, which may be set in the electronic device by the user or determined by the electronic device according to the user's usage habits; the value of X is not limited in the embodiments of the present application. For example, if X is set to 90°, the folding-screen mobile phone is determined to be in the folded state when the angle θ between the first screen 21 and the second screen 22 is smaller than or equal to 90°, and is determined to be in the unfolded state when θ is greater than 90°. Of course, X may also be set to other values, such as 80°, 85°, or 95°, which is not limited herein.
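The threshold comparison described in this paragraph can be sketched as follows; the function name and the default X = 90° are illustrative assumptions only, not part of the claimed method.

```python
def classify_screen_state(theta_deg: float, x_deg: float = 90.0) -> str:
    """Classify the folding-screen state from the included angle theta
    between the first screen and the second screen.

    Per the embodiment: theta in [0, X] -> folded; theta in (X, 180] -> unfolded.
    X is a preset, configurable angle threshold (90 degrees here by assumption).
    """
    if not 0.0 <= theta_deg <= 180.0:
        raise ValueError("theta must be within [0, 180] degrees")
    return "folded" if theta_deg <= x_deg else "unfolded"
```

With X = 90°, an angle of exactly 90° still counts as folded, matching the closed interval [0°, X] in the text.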
In some embodiments, when the foldable screen of the electronic device is in the folded state and the user uses the outer-screen front camera for a video call or to shoot images, the electronic device captures images with the outer-screen front camera and previews them on the outer screen. If, in response to the user manually opening the foldable screen, the screen switches from the folded state to the unfolded state, the electronic device instead captures images with the inner-screen front camera and previews them on the inner screen. In this way, in a video call or photographing scenario, the camera of the electronic device switches along with the screen state: image capture switches from the outer-screen front camera to the inner-screen front camera, and the preview display switches from the outer screen to the inner screen.

Similarly, when the foldable screen of the electronic device is in the unfolded state and the user uses the inner-screen front camera for a video call or to shoot images, the electronic device captures images with the inner-screen front camera and previews them on the inner screen. If, in response to the user manually folding the foldable screen, the screen switches from the unfolded state to the folded state, the electronic device instead captures images with the outer-screen front camera and previews them on the outer screen. Likewise, the camera switches along with the screen state: image capture switches from the inner-screen front camera to the outer-screen front camera, and the preview display switches from the inner screen to the outer screen.
The following describes the related-art switching process between the outer-screen front camera and the inner-screen front camera while the screen is folded or unfolded.
In the related art, after the application framework layer of the electronic device acquires information collected by at least one sensor, it determines the screen state of the foldable screen from that information. The application framework layer then determines, according to the screen state, the target camera whose images are to be sent for display, and sends the target camera information to the hardware abstraction layer (HAL), which triggers the images captured by the target camera to be displayed on the target display screen. The target display screen is the display screen corresponding to the screen state of the foldable screen; for example, when the screen state is the unfolded state, the target display screen is the large display screen formed by the first screen and the second screen of the unfolded foldable screen.
For example, referring to fig. 3 and taking a scenario in which the WeChat application is used for a video call or to shoot images, the following describes the related-art switching process between the outer-screen front camera and the inner-screen front camera while the folding-screen mobile phone is folded or unfolded. As shown in fig. 3, in response to a user operation, when the front camera is used for a video call or image shooting, the sensor driver in the mobile phone obtains data collected by at least one sensor in real time or periodically. The sensor driver sends the collected data to the display decision module, which determines the included angle between the first screen and the second screen of the foldable screen from that data. The display decision module then determines the screen state of the foldable screen from this included angle and the angle threshold, and sends the screen state to the camera decision module. The camera decision module determines, according to the screen state, the target camera whose images are to be sent for display, and sends the camera identifier of the target camera to the camera hardware abstraction module, which, according to that identifier, controls the corresponding camera driver to trigger the display of the target camera's images.
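The related-art decision chain just described (sensor data → screen state → target camera identifier handed to the camera hardware abstraction module) can be sketched as follows; the camera identifiers and function names are hypothetical stand-ins for the modules shown in fig. 3.

```python
# Hypothetical camera identifiers; real IDs are platform-specific.
OUTER_FRONT_CAMERA = "camera_outer_front"
INNER_FRONT_CAMERA = "camera_inner_front"

def display_decision(theta_deg: float, x_deg: float = 90.0) -> str:
    """Display decision module: map the hinge angle to a screen state."""
    return "unfolded" if theta_deg > x_deg else "folded"

def camera_decision(screen_state: str) -> str:
    """Camera decision module: pick the target camera for send-for-display."""
    return INNER_FRONT_CAMERA if screen_state == "unfolded" else OUTER_FRONT_CAMERA

def on_sensor_data(theta_deg: float) -> str:
    """End-to-end chain of the related art: sensor angle -> screen state ->
    target camera ID handed to the camera hardware abstraction module."""
    return camera_decision(display_decision(theta_deg))
```

The point the specification makes is that this chain crosses the kernel layer, the framework layer, and the HAL on every state change, which is where the latency criticized below comes from.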
As an example, assuming the target camera is the inner-screen front camera, the camera hardware abstraction module may send the identifier of the inner-screen front camera to the inner-screen front camera driver. The driver then powers up the inner-screen front camera and performs other start-up operations, turning the camera on.
As can be seen from fig. 3, when the electronic device switches screens and the inner- and outer-screen front cameras are switched for display, the whole execution process involves the kernel layer, the hardware abstraction layer, and the application framework layer of the electronic device. Clearly, this end-to-end process, in which the camera switches along with the physical state of the screen, is long, so the send-for-display efficiency during camera switching is low. For example, after the foldable screen switches from the folded state to the unfolded state and image capture switches from the outer-screen front camera to the inner-screen front camera, the inner screen may still display a preview image captured by the outer-screen front camera, or the inner screen may go black.
For example, as shown in fig. 4 (a), assume that the user performs a video call with the electronic device in the folded state; that is, the outer-screen front camera 1 captures an image of the user, and the outer screen displays the preview image captured by the outer-screen front camera 1. In response to a user operation, the foldable screen switches from the folded state to the unfolded state, i.e., the preview display switches from the outer screen to the inner screen. As shown in fig. 4 (b), if the camera is not successfully switched from the outer-screen front camera 1 to the inner-screen front camera 3, the inner screen of the electronic device shows no content.
In view of this, in the method provided by the embodiments of the present application, when the hardware abstraction layer or the kernel layer determines in the switching stage that the included angle between the first screen and the second screen meets the angle threshold, the hardware abstraction layer or the kernel layer directly controls the target camera to switch and send its images for display. This reduces cross-service/cross-component interactions, saves decision time, and improves the switching efficiency between the inner- and outer-screen front cameras. In addition, when the hardware abstraction layer or the kernel layer determines that the included angle between the first screen and the second screen has entered a preset angle range, it controls the target camera to perform switching preparation work (for example, powering up and configuring the register sequence), which shortens the time until the camera's images are displayed, improves camera switching efficiency, and improves the user experience.
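The early-preparation optimization described above can be sketched as follows, assuming a threshold X = 90° and a 20° "prepare" band around it (both values are illustrative; the embodiment leaves the preset angle range unspecified). The power-up and register-configuration steps are represented only as bookkeeping.

```python
class EarlyCameraSwitcher:
    """Sketch of the HAL-side optimization: start preparing the target
    camera (power-up, register-sequence configuration) once the hinge
    angle enters a preset band near the switch threshold, so the actual
    switch at the threshold only has to redirect the display path."""

    def __init__(self, x_deg: float = 90.0, prepare_band_deg: float = 20.0):
        self.x = x_deg
        self.band = prepare_band_deg
        self.prepared = set()          # cameras already powered/configured
        self.active = "outer_front"    # camera currently sent for display

    def _prepare(self, camera: str) -> None:
        # Stand-in for powering up the sensor and writing its register
        # sequence ahead of time; idempotent.
        self.prepared.add(camera)

    def on_angle(self, theta_deg: float) -> str:
        target = "inner_front" if theta_deg > self.x else "outer_front"
        # Inside the prepare band around X: warm up the camera we are
        # about to switch to, before the threshold is actually crossed.
        if abs(theta_deg - self.x) <= self.band:
            other = "inner_front" if target == "outer_front" else "outer_front"
            self._prepare(other)
        if target != self.active:
            self._prepare(target)      # no-op if already warmed up
            self.active = target       # redirect send-for-display
        return self.active
```

Because the warm-up happens while the hinge is still inside the band, the camera is usually already powered by the time the threshold is crossed, which is the time saving the paragraph above claims.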
For example, the display method of the electronic device provided in the embodiment of the present application may be applied to an electronic device with a foldable screen, such as a mobile phone, a tablet computer, a Personal Computer (PC), a Personal Digital Assistant (PDA), a smart watch, a netbook, a wearable electronic device, an Augmented Reality (AR) device, a Virtual Reality (VR) device, an in-vehicle device, an intelligent vehicle, and an intelligent audio device, and the embodiment of the present application does not limit the electronic device.
As shown in fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. Wherein, the different processing units may be independent devices or may be integrated in one or more processors.
Wherein the controller may be a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instruction or data again, it can be called directly from memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose-input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, a bus or Universal Serial Bus (USB) interface, and the like.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). The I2S interface may be used for audio communication. The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices and the like.
In this embodiment of the application, the processor 110 may control the MIPI interface switch to close the image display path of the outer-screen front camera and open the image display path of the inner-screen front camera, or to close the image display path of the inner-screen front camera and open the image display path of the outer-screen front camera, thereby switching the image display path between the cameras.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In other embodiments, the power management module 141 may be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160, so that electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, among others. GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, with N being a positive integer greater than 1.
Alternatively, the display screen 194 of the electronic device 100 may be a flexible folding screen. The flexible folding screen includes a folding edge made of flexible material; part or all of the flexible folding screen is made of flexible material. The two screens formed by folding the flexible folding screen are parts of one complete, integral screen and can be understood as two display areas. Alternatively, the foldable screen of the electronic device 100 may be a multi-screen folding screen, which may include a plurality (two or more) of screens. The plurality of screens are separate, individual display screens connected in turn by folding shafts. Each screen can rotate around the folding shaft connected to it, thereby folding the multi-screen folding screen.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
In the embodiment of the present application, the camera 193 may include an outer screen front camera, an inner screen front camera, a rear camera, and the like.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in the external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like. The data storage area may store data (such as audio data, a phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include high-speed random access memory, and may further include nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance the lens module needs to compensate for according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyro sensor 180B may also be used for navigation and somatosensory gaming scenarios. In the embodiment of the present application, the foldable screen of the electronic device 100 can be folded to form a plurality of screens. A gyro sensor 180B may be included in each screen to measure the orientation (i.e., the direction vector of the orientation) of the corresponding screen. The electronic device 100 may determine the included angle between adjacent screens according to the measured change in the orientation of each screen.
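As a hypothetical illustration of the anti-shake calculation (the small-angle lens-shift model distance = f × tan(angle) and the function name are assumptions for illustration, not taken from the patent):

```python
import math

def lens_compensation_mm(focal_length_mm, shake_angle_deg):
    """Illustrative distance the lens module would move to counteract a
    measured shake angle, using the assumed model distance = f * tan(angle)."""
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))

# With an assumed 4 mm focal length and a 0.5 degree shake, the required
# shift is a small fraction of a millimeter.
shift = lens_compensation_mm(4.0, 0.5)
```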
It should be noted that, in the embodiment of the present application, the electronic device includes a foldable screen, and the foldable screen is divided into a plurality of display areas after the electronic device is folded, where each display area is referred to as a screen. A gyro sensor 180B may be included on each screen to measure the orientation (i.e., the direction vector of the orientation) of the corresponding screen. For example, as shown in fig. 2 (a), the electronic device is folded to form a first screen and a second screen, each of which is provided with a gyro sensor 180B, and the orientations of the first screen and the second screen may be measured respectively. The electronic device then determines the included angle between the first screen and the second screen according to the measured change in the orientation of each screen.
For example, as shown in fig. 6, the electronic apparatus 100 is folded to form a first screen (screen A in the figure), in which gyro sensor A is disposed, and a second screen (screen B in the figure), in which gyro sensor B is disposed. Gyro sensor A measures the orientation of screen A (i.e., the direction vector of its orientation), and gyro sensor B measures the orientation of screen B. The present embodiment describes the principle by which the electronic apparatus 100 calculates the angle θ between screen A and screen B from the orientation of screen A and the orientation of screen B.
The coordinate system of the gyro sensor is a geographic coordinate system. As shown in fig. 7, the origin O of the geographic coordinate system is located at the point where the carrier (i.e., the device containing the gyro sensor, such as the electronic device 100) is located; the x-axis points east (E) along the local latitude line, the y-axis points north (N) along the local meridian, and the z-axis points upward along the local geographic vertical line, forming a right-handed rectangular coordinate system with the x-axis and the y-axis. The plane formed by the x-axis and the y-axis is the local horizontal plane, and the plane formed by the y-axis and the z-axis is the local meridian plane. Thus, the coordinate system of the gyro sensor can be understood as: the gyro sensor is the origin O, the east direction along the local latitude line is the x-axis, the north direction along the local meridian is the y-axis, and the upward direction along the local geographic vertical line (i.e., the opposite direction of the geographic vertical) is the z-axis.
The electronic apparatus 100 can measure the direction vector of the orientation of each screen in the coordinate system of the gyro sensor provided therein, using the gyro sensor 180B provided in each screen. For example, referring to the side view of the electronic device shown in fig. 6, the direction vector of the orientation of screen A measured in the coordinate system of gyro sensor A is vector z1, and the direction vector of the orientation of screen B in the coordinate system of gyro sensor B is vector z2. The electronic apparatus 100 can calculate the angle α between vector z1 and vector z2 by using formula (1), which is as follows:
α = arccos( (z1 · z2) / (|z1| × |z2|) )    (1)
As can be seen from fig. 6, since vector z1 is perpendicular to screen A and vector z2 is perpendicular to screen B, the angle between screen A and screen B is θ = 180° - α. That is, the electronic device determines the angle θ between screen A and screen B according to the measured direction vector of the orientation of screen A in the coordinate system of gyro sensor A (i.e., vector z1) and the direction vector of the orientation of screen B in the coordinate system of gyro sensor B (i.e., vector z2).
It should be noted that although the positions of the gyro sensors disposed in the a screen and the B screen do not overlap, that is, the origins of the coordinate systems of the gyro sensors of the a screen and the B screen do not overlap, the x-axis, the y-axis, and the z-axis of the two coordinate systems are parallel, so that the coordinate systems of the gyro sensors disposed in the a screen and the B screen can be considered to be parallel. Thus, although the vector z1 and the vector z2 are not in the same coordinate system, the angle α between the vector z1 and the vector z2 can be calculated by the above equation (1) because the axes of the two coordinate systems are parallel.
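The angle calculation above can be sketched as follows; this is an illustrative sketch, and the function name and sample vectors are assumptions, not taken from the patent:

```python
import math

def angle_between_screens(z1, z2):
    """Fold angle theta between screen A and screen B, computed from the
    normal vectors z1 and z2 measured by their gyro sensors.
    First alpha = arccos((z1 . z2) / (|z1| |z2|)) per formula (1),
    then theta = 180 - alpha, since each vector is perpendicular to its screen."""
    dot = sum(a * b for a, b in zip(z1, z2))
    norm1 = math.sqrt(sum(a * a for a in z1))
    norm2 = math.sqrt(sum(b * b for b in z2))
    alpha = math.degrees(math.acos(dot / (norm1 * norm2)))
    return 180.0 - alpha

# Fully unfolded: the normals are parallel, alpha = 0, so theta = 180 degrees.
theta_flat = angle_between_screens((0, 0, 1), (0, 0, 1))
# Half open: the normals are perpendicular, alpha = 90, so theta = 90 degrees.
theta_half = angle_between_screens((0, 0, 1), (1, 0, 0))
```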
In some embodiments, the angle θ between screen A and screen B can also be measured by one or more other sensors. For example, one acceleration sensor 180E may be provided in each screen of the folding screen. The electronic device 100 (e.g., the processor 110) may use the acceleration sensors to measure the motion acceleration of each screen as it is rotated, and then calculate the rotation angle of one screen relative to the other according to the measured motion acceleration, that is, the included angle θ between screen A and screen B.
In other embodiments, the gyro sensor 180B may be a virtual gyro sensor formed by combining other sensors. The virtual gyroscope sensor can be used for calculating the included angle between adjacent screens of the folding screen, namely the included angle theta between the screen A and the screen B.
The air pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of the flip holster using the magnetic sensor 180D.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. It may also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
A distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode and detects infrared light reflected from nearby objects using the photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense ambient light brightness. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on. The temperature sensor 180J is used to detect temperature.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The bone conduction sensor 180M may acquire a vibration signal.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be attached to or detached from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards, and may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 100 employs an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the invention takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of an electronic device.
Fig. 8 is another schematic structural diagram of an electronic device according to an embodiment of the present application.
It will be appreciated that the hierarchical architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system may include an application layer (referred to as the application layer for short) and an application framework layer (referred to as the framework layer for short), a hardware abstraction layer, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 8, the application package may include system applications. A system application refers to an application that is installed in the electronic device before it leaves the factory. Exemplary system applications may include Camera, Gallery, Calendar, Music, Messages, and Phone.
The application package may also include third-party applications, which refer to applications installed by the user after downloading the installation package from an application store (or application market). For example, map applications (e.g., Baidu Maps, Amap), takeaway applications (e.g., Meituan, Ele.me), reading applications (e.g., e-book readers), social applications (e.g., WeChat), travel applications (e.g., Didi), and the like.
The application framework layer provides an Application Programming Interface (API) and a programming framework for an application of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 8, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, a display decision module, a camera decision module, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The telephone manager is used for providing a communication function of the electronic equipment. Such as management of call status (including connection, hangup, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without requiring user interaction, such as notifications of download completion or message alerts. The notification manager may also present notifications in the form of a chart or scroll-bar text in the system's top status bar, such as a notification of an application running in the background, or in the form of a dialog window on the screen. For example, text information is prompted in the status bar, a prompt tone sounds, the mobile phone vibrates, or an indicator light flickers.
The display decision module is used for determining whether to switch the screen display of the electronic equipment according to the angle of the current screen of the electronic equipment.
The camera decision module is used for determining whether to switch the inner and outer screen cameras of the electronic equipment to acquire images according to the angle of the current screen of the electronic equipment.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine performs functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The HAL is a generic interface that encapsulates the underlying hardware drivers and provides driver-calling interfaces to the application framework layer. In an embodiment of the present application, the hardware abstraction layer may include a sensor hardware abstraction module and a camera hardware abstraction module.
The sensor hardware abstraction module is used, after receiving information collected by a sensor, to determine the included angle between the first screen and the second screen of the electronic device according to that information, and to send the included angle between the first screen and the second screen to the camera hardware abstraction module and/or the display decision module.
The camera hardware abstraction module is used to control the target camera to perform camera initialization when it determines, in the switch preparation stage, that the included angle between the first screen and the second screen of the electronic device meets the corresponding angle threshold, and to trigger the already-initialized target camera to send images for display when it determines, in the switching stage, that the included angle meets the switching threshold.
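The two-stage behavior described above might be sketched as follows; the class name, threshold values, and flag names are illustrative assumptions rather than the patent's implementation:

```python
PREPARE_THRESHOLD = 60.0  # assumed "switch preparation" angle, degrees
SWITCH_THRESHOLD = 90.0   # assumed "switching" angle, degrees

class CameraHalSketch:
    """Minimal sketch of the camera hardware abstraction module's decisions."""

    def __init__(self):
        self.initialized = False  # stands in for power-on + register setup
        self.displaying = False   # target camera is sending images for display

    def on_fold_angle(self, angle):
        # Preparation stage: initialize the target camera early so the
        # later switch does not wait on power-on and configuration.
        if angle > PREPARE_THRESHOLD and not self.initialized:
            self.initialized = True
        # Switching stage: the already-initialized camera sends images directly.
        if angle > SWITCH_THRESHOLD and self.initialized:
            self.displaying = True
```

A fold from 70° to 95° would thus first initialize the camera and only then start its display output, matching the prepare-then-switch sequence described above.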
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
In some embodiments, the sensor driver may be configured to determine an included angle between a first screen and a second screen of the electronic device according to information collected by the sensor after acquiring the information collected by the sensor, and send the included angle between the first screen and the second screen to the camera driver.
The camera driver can be used to control the target camera to perform camera initialization when it determines, in the switch preparation stage, that the included angle between the first screen and the second screen of the electronic device meets the angle threshold, and to trigger the already-initialized target camera to send images for display when it determines, in the switching stage, that the included angle meets the switching threshold.
In some embodiments, the camera driver may include an outer screen front camera driver and an inner screen front camera driver, where the outer screen front camera driver may be used to drive the outer screen front camera and the inner screen front camera driver may be used to drive the inner screen front camera. In addition, the camera driver may further include a rear camera driver.
For convenience of understanding, in the following embodiments of the present application, the electronic device shown in fig. 5 and fig. 8 is taken as an example, and the electronic device is taken as a folding screen mobile phone, and the display method of the electronic device provided in the embodiments of the present application is specifically described with reference to the drawings.
In the embodiment of the application, when the electronic device determines through the hardware abstraction layer that the included angle between the first screen and the second screen of the folding screen reaches the preset angle range, the hardware abstraction layer directly triggers the image collected by the target camera to be sent for display. In addition, when the hardware abstraction layer determines that the included angle between the first screen and the second screen meets the angle threshold of the switch preparation stage, the hardware abstraction layer controls the target camera to perform initialization operations (for example, powering on, configuring the register sequence, and the like).
For example, as shown in fig. 9, a camera application in the mobile phone uses a front-facing camera to shoot an image in response to a shooting operation (for example, a click or touch) of the user, or a social application in the mobile phone (for example, WeChat) uses the front-facing camera for a video call or to shoot an image in response to a user operation. The sensor driver of the kernel layer can acquire data collected by at least one sensor in real time or periodically. The sensor driver sends the data collected by the at least one sensor to the sensor hardware abstraction module. The sensor hardware abstraction module determines the included angle between the first screen and the second screen of the folding screen according to the data collected by the at least one sensor, and sends the angle to the camera hardware abstraction module. The camera hardware abstraction module determines the target camera for sending and displaying images according to the included angle between the first screen and the second screen. The camera hardware abstraction module sends the camera identifier of the target camera to the camera driver, and the camera driver controls the corresponding camera driver to trigger the image collected by the target camera to be sent for display according to the camera identifier of the target camera.
In addition, after the camera hardware abstraction module acquires the included angle between the first screen and the second screen, when it determines that the included angle meets the angle threshold of the camera switch preparation stage, the camera hardware abstraction module can control the target camera to perform camera initialization. The camera initialization operation includes powering on, configuring the register sequence, and the like. In this way, when the camera hardware abstraction module determines that the target camera should be used to send and display images, it directly triggers display of the images collected by the already-initialized target camera, which saves camera preparation work and improves the camera's display rate.
In a possible scenario, when the mobile phone responds to a user operation and uses the outer screen front camera to shoot images, the folding screen of the mobile phone is switched from the folded state to the unfolded state. After the sensor hardware abstraction module determines the included angle between the first screen and the second screen of the folding screen according to the sensor information, it sends the current screen angle of the mobile phone to the camera hardware abstraction module. The camera hardware abstraction module can determine the current physical state of the folding screen according to the included angle between the first screen and the second screen. For example, the camera hardware abstraction module may determine whether the folding screen of the mobile phone has switched from the folded state to the unfolded state. When the camera hardware abstraction module determines that the included angle between the first screen and the second screen meets the angle threshold of the switch preparation stage, it can perform camera initialization on the inner screen front camera. For example, the camera hardware abstraction module may perform initialization operations such as powering on and parameter configuration on the inner screen camera. When the camera hardware abstraction module determines that the included angle between the first screen and the second screen meets the angle threshold of the switching stage, the inner screen front camera driver directly triggers the image collected by the inner screen front camera to be sent for display. For the specific implementation process, refer to the implementation process of fig. 10 below.
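The end-to-end flow just described, from sensor vectors to a fold angle to a target camera choice, can be sketched as follows. The threshold value and the camera identifiers are hypothetical assumptions for illustration:

```python
import math

def fold_angle(z1, z2):
    """Fold angle theta derived from the two screen normal vectors, following
    the derivation around formula (1): theta = 180 - arccos of the normals."""
    dot = sum(a * b for a, b in zip(z1, z2))
    norm1 = math.sqrt(sum(a * a for a in z1))
    norm2 = math.sqrt(sum(b * b for b in z2))
    return 180.0 - math.degrees(math.acos(dot / (norm1 * norm2)))

def target_camera(theta, switch_threshold=90.0):
    """Camera HAL decision sketch: which front camera sends images for display.
    The identifier strings and the 90-degree threshold are assumptions."""
    return "inner_screen_front" if theta > switch_threshold else "outer_screen_front"

# Folded phone: the screen normals point in opposite directions, theta = 0,
# so the outer screen front camera keeps sending images.
cam_folded = target_camera(fold_angle((0, 0, 1), (0, 0, -1)))
# Unfolded phone: the normals are parallel, theta = 180, so the image source
# switches to the inner screen front camera.
cam_open = target_camera(fold_angle((0, 0, 1), (0, 0, 1)))
```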
The above process is explained in detail with reference to fig. 10, and fig. 10 is a schematic flowchart of a third image display method according to an embodiment of the present application.
As shown in fig. 10, the display method may include the steps of:
step 801, the sensor driver acquires data acquired by the sensor.
In this embodiment of the application, the sensor driver can acquire in real time data collected by at least one sensor, such as an angle sensor, an acceleration sensor, or a gyroscope sensor.
For example, the mobile phone may measure the motion acceleration of each screen of the mobile phone when the screen is rotated using an acceleration sensor. The mobile phone may measure a change in an orientation angle of each screen of the mobile phone using a gyro sensor.
Step 802, the sensor driver sends data collected by the sensor to the sensor hardware abstraction module.
Step 803, the sensor hardware abstraction module determines an included angle between the first screen and the second screen according to data collected by the sensor.
The process of determining the angle between the first screen and the second screen in step 803 may refer to the description of fig. 6 and 7 above.
In the embodiment of the application, after the sensor hardware abstraction module receives data collected by a sensor sent by a sensor driver, an included angle between a first screen and a second screen of a mobile phone can be determined according to the data collected by the sensor. As an example, assuming that the sensor hardware abstraction module receives data collected by the acceleration sensor and the gyroscope sensor, the sensor hardware abstraction module may determine an included angle between the first screen and the second screen of the mobile phone according to the data collected by the acceleration sensor and the gyroscope sensor.
Step 804, the sensor hardware abstraction module sends the included angle between the first screen and the second screen to the display decision module.
Step 805, the sensor hardware abstraction module sends the included angle between the first screen and the second screen to the camera hardware abstraction module.
In this embodiment of the application, after the sensor hardware abstraction module determines the included angle between the first screen and the second screen, the sensor hardware abstraction module may send the included angle between the first screen and the second screen to the display decision module and the camera hardware abstraction module.
It should be noted that the sensor hardware abstraction module may send the included angle between the first screen and the second screen to the display decision module and the camera hardware abstraction module at the same time, or send it to the camera hardware abstraction module after sending it to the display decision module, or send it to the camera hardware abstraction module before sending it to the display decision module. That is, in the embodiment of the present application, the execution order of steps 804 and 805 is not limited.
Step 806, the display decision module determines that the included angle is greater than the angle threshold 1, initializes the screen, and marks the screen state identifier.
The angle threshold 1 is an angle at which the mobile phone starts to perform pre-switching operation on the screen. The angle threshold 1 may be an angle threshold preset by a user, or an angle value determined by the mobile phone according to the habit of the user using the screen. For example, the angle threshold 1 may be set to 55 °, 60 °, or the like.
In the embodiment of the application, after the display decision module receives the included angle between the first screen and the second screen, the display decision module determines whether the included angle is greater than an angle threshold 1.
In some embodiments, if the display decision module determines that the included angle between the first screen and the second screen is less than or equal to angle threshold 1, the display decision module does nothing; that is, the mobile phone continues to use the outer screen to display the image collected by the camera.
In other embodiments, if the display decision module determines that the included angle between the first screen and the second screen is greater than angle threshold 1, the display decision module prepares for screen switching. The display decision module performs initialization operations such as powering on the inner screen, so that when the mobile phone later uses the inner screen to display image information, the inner screen initialization process is avoided and image display efficiency is improved.
In an embodiment of the application, the display decision module marks the screen state identifier to mark a screen for displaying the image. Wherein the screen status identifier is used to uniquely identify the screen currently used to display the image. For example, if the display decision module determines that the screen state is the expanded state, that is, the display decision module determines that the screen for displaying the image is the inner screen, the display decision module marks the screen state identifier as 1; if the display decision module determines that the screen state is the folding state, namely the display decision module determines that the screen for displaying the image is the outer screen, the display decision module marks the screen state identifier as 0.
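A minimal sketch of the display decision logic of step 806, including the screen state identifier marking; the class name, threshold value, and attribute names are illustrative assumptions:

```python
ANGLE_THRESHOLD_1 = 60.0  # assumed pre-switch angle for the screen, degrees

class DisplayDecisionSketch:
    """Sketch of the display decision module: initialize the inner screen
    early and mark which screen currently displays the image."""

    def __init__(self):
        self.inner_screen_ready = False
        # Screen state identifier: 0 = folded (outer screen displays),
        # 1 = unfolded (inner screen displays), as described above.
        self.screen_state = 0

    def on_fold_angle(self, angle):
        if angle <= ANGLE_THRESHOLD_1:
            return  # keep displaying on the outer screen
        if not self.inner_screen_ready:
            self.inner_screen_ready = True  # stands in for powering on the inner screen
        self.screen_state = 1  # mark the inner screen as the display target
```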
Step 807, the camera hardware abstraction module determines that the included angle is greater than the angle threshold 2, and performs a pre-switching operation on the inner-screen front camera.
The angle threshold 2 is the angle at which the mobile phone starts the pre-switching operation on the camera. The angle threshold 2 may be the same as or different from the angle threshold 1, and is not limited herein. For example, the angle threshold 2 may be 55°, 60°, 65°, or the like.
In this embodiment of the application, when the camera hardware abstraction module determines that the included angle between the first screen and the second screen is greater than the angle threshold 2, the camera hardware abstraction module triggers the inner-screen front camera to prepare for switching. For example, the camera hardware abstraction module triggers the inner-screen front camera to perform operations such as powering on and register sequence configuration.
In some embodiments, if the angle threshold 1 is the same as the angle threshold 2, after the sensor hardware abstraction module and the camera hardware abstraction module respectively acquire the included angle between the first screen and the second screen, the mobile phone may synchronously execute the above step 806 and step 807, thereby shortening the time for switching the screens and the cameras, and improving the efficiency for switching the screens and the cameras.
Step 808, the camera hardware abstraction module determines that the included angle is greater than the angle threshold 3.
The angle threshold 3 is an angle for the mobile phone to determine that the external screen front-facing camera is switched to the internal screen front-facing camera to acquire images. The angle threshold 3 is greater than the angle threshold 2. The angle threshold 3 may be an angle threshold preset by the user, or an angle value determined by the mobile phone according to the habit of the user using the screen. For example, the angle threshold 3 may be set to 85 °, 95 °, or the like.
That is, in the process in which the folding screen of the mobile phone is switched from the folded state to the unfolded state, when the included angle between the first screen and the second screen is greater than the angle threshold 3, the mobile phone determines to switch from the outer-screen front camera to the inner-screen front camera to collect images.
Step 809, the camera hardware abstraction module sends a screen state identification request to the display decision module.
Step 810, the display decision module sends the screen state identifier to the camera hardware abstraction module.
In this embodiment of the application, the display decision module may send, to the camera hardware abstraction module, a screen state identifier corresponding to a screen currently used for displaying an image according to the screen state identifier request, so that the camera hardware abstraction module may determine the screen currently displaying the image according to the received screen state identifier.
Step 811, the camera hardware abstraction module determines that the screen state identifier indicates the inner screen, determines to use the inner-screen front camera to collect images, and sends the images to the inner screen for display.
In the embodiment of the application, the camera hardware abstraction module determines that an included angle between the first screen and the second screen is greater than an angle threshold value 3, and determines that the screen state identifier is an inner screen. The camera hardware abstraction module determines to adopt the inner screen front camera to collect images and determines to display the images collected by the inner screen front camera. The inner screen is a large screen formed by the first screen and the second screen.
Step 812, the inner-screen front camera driver controls the inner-screen front camera to collect images and send them for display.
In this embodiment of the application, when the camera hardware abstraction module determines to send and display the preview image collected by the inner-screen front camera, the inner-screen front camera driver controls the inner-screen front camera to collect images, and the inner screen displays the preview image collected by the inner-screen front camera.
For example, the inner-screen front camera driver can control the MIPI interface switch to close the image display path of the outer-screen front camera and open the image display path of the inner-screen front camera, thereby switching the camera image display path.
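The path switch described above can be illustrated with a toy model; the `MipiSwitch` class and camera names are assumptions for illustration only, since an actual driver toggles hardware MIPI lanes rather than Python state.

```python
class MipiSwitch:
    """Toy model of the MIPI interface switch: exactly one front
    camera's image display path is open at any time."""

    def __init__(self):
        # Folded state at startup: the outer-screen front camera's
        # display path is open.
        self.open_path = "outer_front"

    def switch_to(self, camera):
        # Closing the current path and opening the target camera's
        # path is modeled as a single reassignment, mirroring the
        # close-then-open sequence described in the text.
        self.open_path = camera
        return self.open_path

switch = MipiSwitch()
switch.switch_to("inner_front")  # e.g. unfolding past angle threshold 3
```

In the real driver the same operation redirects the camera sensor's output lanes, which is why no re-initialization is needed if the target camera was powered on in the pre-switching stage.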
For example, when the folding screen of the mobile phone is in the folded state and the WeChat application in the mobile phone performs a video call in response to a user operation, the outer-screen front camera of the mobile phone collects the user's image information and displays it on the outer screen of the mobile phone, as shown in (a) in fig. 11. In the process in which the folding screen of the mobile phone is switched from the folded state to the unfolded state, if the display decision module determines that the included angle θ between the first screen and the second screen is less than or equal to the angle threshold 1 (for example, 60°), the outer screen of the mobile phone continues to display the image information collected by the outer-screen front camera, and the inner screen of the mobile phone does not display image information, as shown in (b) in fig. 11. If the display decision module determines that the included angle θ between the first screen and the second screen is greater than the angle threshold 1, the display decision module prepares for screen switching. When the camera hardware abstraction module determines that the included angle θ between the first screen and the second screen of the mobile phone is greater than the angle threshold 2 (for example, 65°), the camera hardware abstraction module triggers the inner-screen front camera to prepare for switching. When the camera hardware abstraction module determines that the included angle θ between the first screen and the second screen is greater than the angle threshold 3 (for example, 85°), the camera hardware abstraction module triggers the inner-screen front camera to collect images, and sends and displays the preview image collected by the inner-screen front camera on the inner screen, as shown in (c) in fig. 11.
In this way, the camera hardware abstraction module determines, according to the included angle between the first screen and the second screen, whether to switch between the inner and outer screens of the mobile phone. This avoids the problem that the inner-screen front camera fails to send a display image when the mobile phone switches from displaying an image on the outer screen to displaying a preview image on the inner screen. Because the camera hardware abstraction module triggers the inner-screen camera to perform the initialization operation in advance, the efficiency of switching between the inner-screen and outer-screen cameras is improved.
In this way, while the display decision module determines whether to switch the screen according to the included angle between the first screen and the second screen, the camera hardware abstraction module synchronously determines, according to the same included angle, whether to prepare the inner-screen front camera for switching and whether to switch to the inner-screen front camera. When the camera hardware abstraction module determines to use the inner-screen front camera to send and display images, the inner screen can directly display the images collected by the inner-screen front camera. This avoids the problem in the related art that the camera switching time is long and the efficiency is low because the inner-screen front camera only starts its initialization operation once the mobile phone decides to use it to send and display images, and thus improves the camera display speed.
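The unfolding flow of steps 806 to 812 can be summarized as a small threshold-driven state machine. This is a sketch under assumed threshold values (60°, 65°, 85°, taken from the examples in the text); the function and key names are invented for illustration.

```python
T1, T2, T3 = 60, 65, 85  # screen pre-switch, camera pre-switch, camera switch

def on_angle_update(angle, state):
    """Advance the unfolding state machine for one hinge-angle sample.

    `state` is a dict recording what has already been prepared, so the
    power-on work runs once, ahead of the actual switch.
    """
    if angle > T1 and not state.get("screen_ready"):
        state["screen_ready"] = True   # step 806: pre-initialize inner screen
        state["screen_id"] = 1         # mark identifier: inner screen
    if angle > T2 and not state.get("camera_ready"):
        state["camera_ready"] = True   # step 807: power on inner front camera
    if angle > T3 and state.get("screen_id") == 1:
        state["display"] = "inner"     # steps 808-812: switch capture + display
    return state

state = {}
for sample in (40, 62, 70, 90):        # hinge opening over time
    on_angle_update(sample, state)
```

Because the preparation flags are set before the switch threshold is reached, the final transition only has to flip the display path, which is the efficiency gain the text describes.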
In another possible scenario, when the mobile phone uses the inner-screen front camera to shoot images in response to a user operation, in the process in which the folding screen of the mobile phone is switched from the unfolded state to the folded state, the sensor hardware abstraction module determines the included angle between the first screen and the second screen according to the sensor information and then sends it to the camera hardware abstraction module. When the camera hardware abstraction module determines that the included angle between the first screen and the second screen meets the angle threshold of the switching preparation stage, it may perform a camera initialization operation on the outer-screen front camera. For example, the camera hardware abstraction module may perform initialization operations such as powering on and parameter configuration on the outer-screen camera. When the camera hardware abstraction module determines that the included angle between the first screen and the second screen meets the angle threshold of the switching stage, the outer-screen front camera directly triggers the image collected by the outer-screen front camera to be sent and displayed. For a specific implementation, see the implementation process of fig. 12 below.
The above process is explained in detail with reference to fig. 12, and fig. 12 is a fourth flowchart of an image display method according to an embodiment of the present application.
As shown in fig. 12, the display method may include the steps of:
step 1001, a sensor driver acquires data acquired by a sensor.
In step 1002, the sensor driver sends data collected by the sensor to the sensor hardware abstraction module.
Step 1003, the sensor hardware abstraction module determines an included angle between the first screen and the second screen according to data collected by the sensor.
In step 1004, the sensor hardware abstraction module sends the angle between the first screen and the second screen to the display decision module.
In step 1005, the sensor hardware abstraction module sends the included angle between the first screen and the second screen to the camera hardware abstraction module.
In this embodiment of the application, the implementation processes of step 1001 to step 1005 may refer to the implementation processes of step 801 to step 805, which are not described herein again.
Step 1006, the display decision module determines that the included angle is smaller than an angle threshold 4, triggers the screen to perform screen initialization operation, and marks a screen state identifier.
The angle threshold 4 is an angle at which the mobile phone starts to trigger the screen to perform screen initialization operation. The angle threshold 4 may be an angle threshold preset by the user, or an angle value determined by the mobile phone according to the habit of the user using the screen. For example, the angle threshold 4 may be set to 85 °, 90 °, or the like.
In the embodiment of the application, after the display decision module receives the included angle between the first screen and the second screen, the display decision module determines whether the included angle between the first screen and the second screen is smaller than an angle threshold 4.
In some embodiments, if the display decision module determines that the included angle between the first screen and the second screen is greater than or equal to the angle threshold 4, the display decision module does nothing. That is, the mobile phone continues to use the inner screen to display the image collected by the camera.
In other embodiments, if the display decision module determines that the included angle between the first screen and the second screen is smaller than the angle threshold 4, the display decision module performs an initialization operation on the screen. The display decision module performs initialization operations such as powering on the outer screen and configuring its register sequence, so that when the mobile phone determines to use the outer screen to display image information, the time needed to display the image on the outer screen is shortened, which improves image display efficiency.
In an embodiment of the application, the display decision module marks the screen state identifier to mark a screen for displaying the image. Wherein the screen status identifier is used to uniquely identify the screen currently used to display the image. For example, the display decision module marks the screen state identifier as an outer screen.
Step 1007, the camera hardware abstraction module determines that the included angle is smaller than the angle threshold 5, and triggers the external screen front camera to perform initialization operation.
The angle threshold 5 is an angle at which the mobile phone starts to perform initialization operation on the front-facing camera of the outer screen. The angle threshold 5 may be the same as or different from the angle threshold 4, and is not limited herein. For example, the angle threshold 5 may be 85 °, 90 °, 95 °, etc.
In the embodiment of the application, the camera hardware abstraction module determines that an included angle between the first screen and the second screen is smaller than an angle threshold value 5, and the camera hardware abstraction module triggers the front-facing camera of the outer screen to perform camera initialization operation. For example, the camera hardware abstraction module triggers the front-facing camera of the external screen to execute operations such as power-on and register sequence configuration.
In some embodiments, if the angle threshold 4 is the same as the angle threshold 5, after the sensor hardware abstraction module and the camera hardware abstraction module respectively obtain the included angle between the first screen and the second screen, the mobile phone may synchronously execute the step 1006 and the step 1007, so that the time for switching the screens and the cameras is shortened, and the efficiency for switching the screens and the cameras is improved.
Step 1008, the camera hardware abstraction module determines that the included angle is smaller than the angle threshold 6.
The angle threshold 6 is the maximum angle for the mobile phone to determine whether the internal screen front-facing camera is switched to the external screen front-facing camera to acquire images. The angle threshold 6 is less than the angle threshold 5. The angle threshold 6 may be an angle threshold preset by the user, or an angle value determined by the mobile phone according to the habit of the user using the screen. For example, the angle threshold 6 may be set to 55 °, 60 °, 63 °, or the like.
That is, in the process in which the folding screen of the mobile phone is switched from the unfolded state to the folded state, when the included angle between the first screen and the second screen is smaller than the angle threshold 6, the mobile phone determines to switch from the inner-screen front camera to the outer-screen front camera to collect images.
In step 1009, the camera hardware abstraction module sends a screen state identifier request to the display decision module.
Step 1010, the display decision module sends a screen state identifier to the camera hardware abstraction module.
Step 1011, the camera hardware abstraction module determines that the screen state identifier indicates the outer screen, determines to use the outer-screen front camera to collect images, and sends the images to the outer screen for display.
In the embodiment of the application, the camera hardware abstraction module determines that an included angle between the first screen and the second screen is smaller than an angle threshold 6, and determines that the screen state identifier is an external screen. The camera hardware abstraction module determines to adopt the front camera of the outer screen to collect images and determines to display the images collected by the front camera of the outer screen on the outer screen.
Step 1012, the outer-screen front camera driver controls the outer-screen front camera to collect images and send them for display.
In this embodiment of the application, when the camera hardware abstraction module determines to display the preview image collected by the outer-screen front camera, the outer-screen front camera driver controls the outer-screen front camera to collect images, and the preview image collected by the outer-screen front camera is displayed on the outer screen.
For example, the outer-screen front camera driver can control the MIPI interface switch to close the image display path of the inner-screen front camera and open the image display path of the outer-screen front camera, thereby switching the camera image display path.
Illustratively, when the folding screen of the mobile phone is in the unfolded state and the WeChat application in the mobile phone performs a video call in response to a user operation, the inner-screen front camera of the mobile phone collects the user's image information and displays it on the inner screen of the mobile phone, as shown in (a) in fig. 13. In the process in which the folding screen of the mobile phone is switched from the unfolded state to the folded state, if the display decision module determines that the included angle θ between the first screen and the second screen is smaller than the angle threshold 4 (for example, 95°), the display decision module prepares for screen switching. When the camera hardware abstraction module determines that the included angle between the first screen and the second screen is smaller than the angle threshold 5, the camera hardware abstraction module prepares the outer-screen front camera for switching. At this time, the inner screen of the mobile phone continues to display the image information collected by the inner-screen camera, as shown in (b) in fig. 13. When the camera hardware abstraction module determines that the included angle between the first screen and the second screen of the folding screen is smaller than the angle threshold 6 (for example, 55°), the camera hardware abstraction module determines to use the outer-screen front camera to collect images, and determines to send and display the preview image collected by the outer-screen front camera on the outer screen, as shown in (c) in fig. 13.
In this way, while the display decision module determines whether to switch the screen according to the current screen angle, the camera hardware abstraction module synchronously determines, according to the same angle, whether to prepare the outer-screen front camera for switching and whether to switch to the outer-screen front camera. When the camera hardware abstraction module determines to use the outer-screen front camera to send and display images, the outer screen can directly display the preview image collected by the outer-screen front camera. This avoids the problem in the related art that the camera switching time is long and the efficiency is low because the outer-screen front camera only starts its initialization operation once the mobile phone decides to use it to send and display images, and thus improves the camera display speed.
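One practical consequence of the separate unfold and fold switch thresholds in the two flows above (for example, switching to the inner screen above 85° but back to the outer screen only below 55°-60°) is that the band between them behaves like hysteresis: the display does not flap when the hinge hovers near a single threshold. The sketch below illustrates this observation; the threshold values and function name are assumptions, not claimed by the patent.

```python
SWITCH_TO_INNER = 85  # e.g. angle threshold 3: unfolding switch point
SWITCH_TO_OUTER = 60  # e.g. angle threshold 6: folding switch point

def active_display(angle, current):
    """Return which screen should display, with hysteresis in between."""
    if angle > SWITCH_TO_INNER:
        return "inner"
    if angle < SWITCH_TO_OUTER:
        return "outer"
    return current  # between the thresholds: keep the current screen
```

A hinge oscillating between, say, 70° and 80° therefore never triggers a switch in either direction.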
In the embodiment of the application, when the electronic device determines that the included angle between the first screen and the second screen of the folding screen meets the angle threshold value of the switching stage through the kernel layer, the kernel layer directly triggers the image collected by the target camera to be sent and displayed. In addition, when the kernel layer determines that the included angle between the first screen and the second screen meets the angle threshold value in the switching preparation stage, the kernel layer controls the target camera to perform camera initialization operation (for example, power-on, register sequence configuration and the like).
For example, as shown in fig. 14, a camera application in the mobile phone uses a front camera to shoot images in response to a shooting operation (for example, a click or touch operation) of the user, or a social application in the mobile phone (for example, the WeChat application) uses a front camera to perform a video call or shoot images in response to a user operation. The sensor driver of the kernel layer can acquire the data collected by at least one sensor in real time or periodically. The sensor driver determines the included angle between the first screen and the second screen according to the data collected by the at least one sensor and passes the angle on within the kernel layer. After the kernel layer determines, according to the included angle between the first screen and the second screen, the target camera for sending and displaying images, the kernel layer triggers the target camera to send and display the images. In this way, the electronic device can determine the target camera in the kernel layer without interacting with the hardware abstraction layer or the application framework layer, which reduces cross-service and cross-component interaction, saves decision time, and improves the efficiency of switching between the inner-screen and outer-screen front cameras.
In addition, after the kernel layer acquires the included angle between the first screen and the second screen, when the kernel layer determines that the included angle meets the angle threshold of the camera switching preparation stage, the kernel layer can control the target camera to perform the camera initialization operation. In this way, when the kernel layer determines to use the target camera to send and display images, it directly controls the target camera to do so; because the camera preparation work is already complete, the camera display speed is improved.
In a possible scenario, when the mobile phone uses the outer-screen front camera to collect images in response to a user operation, in the process in which the folding screen of the mobile phone is switched from the folded state to the unfolded state, the sensor driver can collect information from the sensors arranged in the mobile phone in real time or periodically. After the sensor driver determines the included angle between the first screen and the second screen of the mobile phone according to the sensor information, the sensor driver can send the included angle to the camera driver of the kernel layer in real time or periodically. The camera driver can determine, according to the included angle between the first screen and the second screen, whether to switch from the outer-screen front camera to the inner-screen front camera. For a specific implementation, see the implementation process of fig. 15 below.
The above process is explained in detail with reference to fig. 15, and fig. 15 is a flowchart illustrating a sixth image display method according to an embodiment of the present application.
As shown in fig. 15, the display method may include the steps of:
Step 1301, the sensor driver acquires the data collected by the sensor, and determines the included angle between the first screen and the second screen according to the data.
In this embodiment of the application, the sensor driver may acquire, in real time or periodically, the data collected by at least one sensor, such as an angle sensor, an acceleration sensor, or a gyroscope sensor. The sensor driver can determine the included angle between the first screen and the second screen according to the acquired data. For example, the sensor driver may acquire the data collected by the acceleration sensor and the gyroscope sensor, and then determine the included angle between the first screen and the second screen of the folding screen of the mobile phone according to that data. For a specific implementation of determining the included angle from the acceleration sensor and gyroscope sensor data, refer to the description of fig. 6 and fig. 7, which is not repeated here.
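The patent's angle computation is detailed in its figs. 6 and 7, which are outside this excerpt. Purely as an illustrative assumption, one common approach with an accelerometer on each half of the folding screen is to take the angle between the two measured gravity vectors:

```python
import math

def hinge_angle(g1, g2):
    """Angle in degrees between the gravity vectors measured by the
    accelerometers on the two screen halves.

    When both halves lie in the same plane (fully unfolded) the vectors
    align and the separation is 0; how this maps to the included screen
    angle (e.g. 180 minus the separation) depends on sensor mounting,
    which is an assumption here.
    """
    dot = sum(a * b for a, b in zip(g1, g2))
    n1 = math.sqrt(sum(a * a for a in g1))
    n2 = math.sqrt(sum(b * b for b in g2))
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding error
    return math.degrees(math.acos(cos_t))
```

In practice the gyroscope data would be fused in (e.g. to reject linear acceleration), which is presumably what the patent's figs. 6 and 7 describe.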
Step 1302, the sensor driver sends an angle between the first screen and the second screen to the camera driver.
Step 1303, the camera driver determines that the included angle is greater than the angle threshold 7, and triggers the inner-screen front camera to perform an initialization operation.
In this embodiment of the application, after the sensor driver determines the included angle between the first screen and the second screen of the folding screen in real time or periodically, the sensor driver can send the included angle to the camera driver in real time or periodically. After the camera driver receives the included angle sent by the sensor driver, the camera driver determines whether the included angle is greater than the angle threshold 7. If the camera driver determines that the received included angle is greater than the angle threshold 7, the camera driver controls the inner-screen front camera driver to trigger the inner-screen front camera to perform the camera initialization operation. For example, the camera driver controls the inner-screen front camera to perform initialization operations such as powering on and register sequence configuration.
Step 1304, the camera driver determines that the included angle is greater than the angle threshold 8, and controls the inner-screen front camera to send and display images.
The angle threshold 8 is the angle at which the mobile phone determines to switch from the outer-screen front camera to the inner-screen front camera to collect images. The angle threshold 8 is greater than the angle threshold 7. The angle threshold 8 may be an angle threshold preset by the user, or an angle value determined by the mobile phone according to the user's screen usage habits. For example, the angle threshold 8 may be set to 85°, 95°, or the like. The angle threshold 8 may be the same as or different from the angle threshold 3, and is not limited herein.
In this embodiment of the application, the camera driver receives, in real time or periodically, the included angle between the first screen and the second screen sent by the sensor driver. When the camera driver determines that the included angle is greater than the angle threshold 8, the camera driver directly controls the inner-screen front camera driver to trigger the inner-screen front camera to send and display images. For example, the inner-screen front camera driver can control the MIPI interface switch to close the image display path of the outer-screen front camera and open the image display path of the inner-screen front camera, thereby switching the camera image display path.
That is, in the process in which the folding screen of the mobile phone is switched from the folded state to the unfolded state, when the included angle between the first screen and the second screen is greater than the angle threshold 8, the mobile phone switches from the outer-screen front camera to the inner-screen front camera to collect images.
Continuing with fig. 11 as an example, when the folding screen of the mobile phone is in the folded state and the WeChat application in the mobile phone performs a video call in response to a user operation, the outer-screen front camera of the mobile phone collects the user's image information and displays it on the outer screen of the mobile phone, as shown in (a) in fig. 11. In the process in which the folding screen of the mobile phone is switched from the folded state to the unfolded state, if the camera driver determines that the included angle between the first screen and the second screen is greater than the angle threshold 7 (for example, 65°), the camera driver triggers the inner-screen front camera to prepare for switching. The outer screen of the mobile phone continues to display the image information collected by the outer-screen front camera, and the inner screen of the mobile phone does not display image information, as shown in (b) in fig. 11. When the camera driver determines that the included angle θ between the first screen and the second screen of the mobile phone is greater than the angle threshold 8 (for example, 85°), the camera driver controls the inner-screen front camera driver to trigger the inner-screen front camera to collect images, and sends and displays the preview image collected by the inner-screen front camera on the inner screen, as shown in (c) in fig. 11.
In this way, the camera driver determines, according to the current screen angle of the mobile phone, whether to switch between the inner-screen and outer-screen cameras. This avoids the problem that the inner-screen front camera fails to send a display image because the camera switches too slowly after the mobile phone changes from displaying an image on the outer screen to displaying an image on the inner screen. Because the camera driver triggers the inner-screen camera to perform the initialization operation in advance, the efficiency of switching between the inner-screen and outer-screen cameras is improved.
In this way, in the process in which the folding screen of the mobile phone is switched from the folded state to the unfolded state, when the camera driver determines that the included angle between the first screen and the second screen reaches the angle threshold 7, the camera driver triggers the inner-screen front camera to prepare for switching, thereby initializing the inner-screen front camera in advance. When the camera driver determines that the included angle between the first screen and the second screen reaches the angle threshold 8, the camera driver directly controls the inner-screen front camera driver to send and display images, achieving a fast switch of the display and improving camera switching efficiency. In addition, after the sensor driver determines the included angle between the first screen and the second screen, the sensor driver sends the included angle directly to the camera driver, and the camera driver decides whether to switch the camera used for display. The sensor driver does not need to upload the information collected by the sensor to the hardware abstraction layer, which reduces cross-component and cross-process interaction and improves camera switching efficiency.
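The kernel-layer embodiment above, in which the sensor driver notifies the camera driver directly with no hardware-abstraction-layer round trip, can be sketched as follows. The callback mechanism and class names are illustrative assumptions; the threshold values (65° and 85°) come from the examples in the text.

```python
T7, T8 = 65, 85  # inner-camera init threshold and switch threshold

class CameraDriver:
    """Toy model of the kernel-layer camera driver."""

    def __init__(self):
        self.inner_ready = False
        self.active = "outer_front"

    def on_angle(self, angle):
        # Called directly by the sensor driver on each new angle sample.
        if angle > T7 and not self.inner_ready:
            self.inner_ready = True       # step 1303: init inner camera early
        if angle > T8 and self.inner_ready:
            self.active = "inner_front"   # step 1304: switch the display path

class SensorDriver:
    """Toy model of the sensor driver publishing angles in-kernel."""

    def __init__(self, camera_driver):
        self.camera_driver = camera_driver

    def publish(self, angle):
        # Direct driver-to-driver delivery: no HAL or framework hop.
        self.camera_driver.on_angle(angle)

cam = CameraDriver()
sensor = SensorDriver(cam)
for angle in (40, 70, 90):  # hinge opening over time
    sensor.publish(angle)
```

The direct call in `publish` is the point of the embodiment: the decision never leaves the kernel layer, which is where the claimed latency saving comes from.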
In another possible scenario, when the mobile phone uses the inner-screen front camera to acquire images and, in response to a user operation, the folding screen of the mobile phone switches from the unfolded state to the folded state, the sensor driver may acquire the information of the sensors arranged in the mobile phone in real time or periodically. After the sensor driver determines the included angle between the first screen and the second screen of the folding screen according to the sensor information, it may send the angle to the camera driver of the kernel layer in real time or periodically. The camera driver may then determine, according to the included angle between the first screen and the second screen, whether to switch from the inner-screen front camera to the outer-screen front camera. For an exemplary implementation, see the flow of fig. 16 below.
The above process is described in detail with reference to fig. 16, which is a flowchart of an image display method according to an embodiment of the present application.
As shown in fig. 16, the display method may include the steps of:
Step 1401, the sensor driver acquires the data collected by the sensor and determines the included angle between the first screen and the second screen according to that data.
Step 1402, the sensor driver sends the angle between the first screen and the second screen to the camera driver.
In the embodiment of the present application, the implementation processes of step 1401 to step 1402 may refer to the implementation processes of step 1301 to step 1302, which are not described herein again.
Step 1403, the camera driver determines that the included angle is smaller than angle threshold 9, and triggers the outer-screen front camera to perform the initialization operation.
In the embodiment of the present application, after the sensor driver determines, in real time or periodically, the included angle between the first screen and the second screen of the folding screen, it may send the angle to the camera driver in real time or periodically. After the camera driver receives the included angle sent by the sensor driver, it judges whether the angle is smaller than angle threshold 9. If the camera driver determines that the received included angle between the first screen and the second screen is smaller than angle threshold 9, the camera driver controls the outer-screen front camera driver to trigger the outer-screen front camera to perform the initialization operation. For example, the camera driver controls the outer-screen front camera to perform initialization operations such as powering on and configuring the register sequence.
Step 1404, the camera driver determines that the included angle is smaller than angle threshold 10, and controls the outer-screen front camera to send the display image.
Angle threshold 10 is the angle at which the mobile phone determines to switch from the inner-screen front camera to the outer-screen front camera to acquire images. Angle threshold 10 is smaller than angle threshold 9. Angle threshold 10 may be preset by the user, or may be an angle value determined by the mobile phone according to the user's screen-usage habits. For example, angle threshold 10 may be set to 85°, 95°, or the like.
In the embodiment of the present application, after the camera driver receives, in real time or periodically, the included angle between the first screen and the second screen sent by the sensor driver, when the camera driver determines that the included angle is smaller than angle threshold 10, it directly controls the outer-screen front camera driver to trigger the outer-screen front camera to send the display image. For example, the outer-screen front camera driver can control the MIPI interface switch to close the output path of the inner-screen front camera and open the output path of the outer-screen front camera, thereby switching the camera output paths.
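The output-path switch mentioned above can be modeled as a mutually exclusive toggle: closing one camera's path and opening the other's in a single operation. The dictionary-based `MipiSwitch` below is an illustrative stand-in for the actual register-level MIPI interface, not a real API.

```python
class MipiSwitch:
    """Illustrative model of the MIPI interface switch: at most one
    camera output path feeds the display pipeline at a time."""

    def __init__(self):
        # Start in the inner-camera state described by the passage.
        self.paths = {"inner_front": True, "outer_front": False}

    def switch_to(self, target):
        # Close every other camera's output path, then open the target's.
        for name in self.paths:
            self.paths[name] = (name == target)
```

For example, `MipiSwitch().switch_to("outer_front")` closes the inner-screen front camera's output path and opens the outer-screen front camera's, mirroring the path switch performed at angle threshold 10.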
As described above, the folding screen of the mobile phone includes an inner screen formed by the first screen and the second screen, and an outer screen; the inner-screen front camera is arranged on the inner screen, and the outer-screen front camera is arranged on the outer screen.
Continuing with fig. 13 as an example, when the folding screen of the mobile phone is in the unfolded state and the WeChat application on the mobile phone conducts a video call in response to a user operation, the inner-screen front camera collects image information of the user, and the collected image information is displayed on the inner screen, see (a) in fig. 13. In the process that the folding screen switches from the unfolded state to the folded state, if the camera driver determines that the included angle θ between the first screen and the second screen is smaller than angle threshold 9, the camera driver prepares the outer-screen front camera for switching. At this time, the inner screen continues to display the image information collected by the inner-screen camera, see (b) in fig. 13. When the camera driver determines that the current screen angle is smaller than angle threshold 10 (for example, 55°), it determines to acquire images with the outer-screen front camera and to display those images on the outer screen, see (c) in fig. 13.
Therefore, in the process that the folding screen of the mobile phone switches from the unfolded state to the folded state, when the camera driver determines that the current screen angle reaches angle threshold 9, it triggers the outer-screen front camera to prepare for switching, thereby initializing the outer-screen front camera in advance. When the camera driver determines that the included angle between the first screen and the second screen reaches angle threshold 10, it directly controls the outer-screen front camera to send the display image, so that the display is switched to rapidly and camera-switching efficiency is improved. In addition, after the sensor driver determines the included angle between the first screen and the second screen, it sends the angle directly to the camera driver, which decides whether to switch the camera that sends the display image; nothing needs to be uploaded to the camera hardware abstraction module, which reduces cross-component and cross-process interaction and improves camera-switching efficiency.
In fig. 15 and fig. 16, when the camera driver controls the switching between the inner-screen and outer-screen front cameras, the screen of the mobile phone may not have switched yet, so situations such as a wrong display image or a failed display may occur after the camera switches. Therefore, when the camera driver determines to switch the display path between the inner-screen and outer-screen front cameras, it may first acquire the screen state identifier. When the camera driver determines to send the display image with the inner-screen front camera and the screen state identifier indicates the inner screen, the camera driver sends the display image with the inner-screen front camera. Similarly, when the camera driver determines to send the display image with the outer-screen front camera and the screen state identifier indicates the outer screen, it sends the display image with the outer-screen front camera. This avoids situations such as the display being sent to the wrong screen, or no display being sent at all, while the physical state of the folding screen is changing.
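The guard described above can be sketched as a simple check that the camera's target screen matches the current screen state identifier. The names `INNER`, `OUTER`, and `send_display` are hypothetical names chosen for this illustration.

```python
INNER, OUTER = "inner", "outer"

def send_display(camera_screen, screen_state_id):
    """Send the camera's frames only if the matching screen is active.

    camera_screen:   the screen the chosen camera's image should appear on
    screen_state_id: the screen state identifier acquired by the camera driver
    """
    if camera_screen == screen_state_id:
        return f"displaying on {screen_state_id}"
    # Screen has not physically switched yet: hold the frames rather than
    # send them to the wrong screen or to no screen at all.
    return "hold"
```

The "hold" branch corresponds to the error cases the passage is guarding against: the display is deferred until the screen state identifier catches up with the camera decision.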
In the embodiment of the present application, after the sensor driver determines the included angle between the first screen and the second screen, it may send the angle to the display decision module through the hardware abstraction module. After receiving the included angle, the display decision module marks the screen state according to the angle.
In some embodiments, when the mobile phone uses the outer-screen front camera to capture images in response to a user operation, and the mobile phone triggers the inner-screen front camera to perform the pre-switching operation while the folding screen switches from the folded state to the unfolded state, the mobile phone may display the image collected by the outer-screen front camera on the second screen of the inner screen. When the mobile phone then triggers the inner-screen front camera to send the display image, the inner screen formed by the first screen and the second screen displays the image collected by the inner-screen front camera. Displaying the outer-screen front camera's image on the second screen in this way avoids problems such as a black or white screen after the camera switches, which would otherwise arise because the inner screen cannot display an image in time when the screens switch slowly, and improves the user experience.
For example, as shown in fig. 17, when the folding screen of the mobile phone is in the folded state and the WeChat application on the mobile phone conducts a video call in response to a user operation, the outer-screen front camera collects image information of the user, which is displayed on the outer screen, see (a) in fig. 17. In the process that the folding screen switches from the folded state to the unfolded state, when the mobile phone triggers the inner-screen front camera to prepare for switching, the display decision module controls the second screen to display the image information collected by the outer-screen front camera, see (b) in fig. 17. When the mobile phone triggers the inner-screen front camera to acquire images and send them for display, the images collected by the inner-screen front camera are displayed on the large screen formed by the first screen and the second screen, see (c) in fig. 17.
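The bridge-display behavior of fig. 17 can be sketched as a small target-selection function: while the inner front camera is still completing its pre-switch initialization, the outer camera's frames go to the second screen; once the inner camera sends the display, the full inner screen is used. All names here are illustrative assumptions.

```python
def display_target(active_camera, transition_to_inner):
    """Pick where frames go during the fold-to-unfold transition.

    transition_to_inner is True while the inner front camera is still
    completing its pre-switch initialization.
    """
    if active_camera == "outer_front" and transition_to_inner:
        return "second_screen"                # bridge display while unfolding
    if active_camera == "inner_front":
        return "first_screen+second_screen"   # full inner (large) screen
    return "outer_screen"                     # normal folded-state display
```

The bridge case is what prevents the black or white screen described above: the user always sees a live image on some part of the inner screen even before the inner camera is ready.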
In summary, in the embodiment of the present application, when the camera hardware abstraction module in the hardware abstraction layer determines, according to the included angle between the first screen and the second screen, that the angle threshold of the switching-preparation stage is met, it triggers the camera to perform the camera initialization operation; when it determines that the included angle meets the threshold of the switching stage, it triggers the camera that has completed initialization to take over sending the video image. This solves the problems in the related art of long camera-switching time and low efficiency, which arise because the mobile phone only begins initializing the camera after the camera decision module in the application framework layer has determined which camera sends the display image, and thus improves the rate at which the camera sends video. In addition, because the camera has already completed the pre-switching operation, when the camera hardware abstraction module sends the display image based on that camera, it directly triggers the camera to send the display image, so camera switching is smoother.
Alternatively, when the camera driver determines that the mobile phone meets the angle threshold of the switching stage, it triggers the target camera that has completed the initialization operation to take over sending the display image. In this way, the sensor driver does not need to upload the information collected by the sensor to the hardware abstraction layer or the application framework layer; the camera driver in the kernel layer decides whether to prepare the camera for switching and whether to switch the display image, which reduces cross-component and cross-process interaction and improves camera-switching efficiency.
In the related art, when the camera of the electronic device switches along with the physical state of the screen, the whole execution flow is long, so the display is sent inefficiently during camera switching. In contrast, the embodiment of the present application adds switching-preparation work for the target camera in the camera hardware abstraction module or the camera driver, shortening the camera-switching time. After the camera hardware abstraction module or the camera driver determines that the current screen angle of the electronic device meets the angle threshold of the switching stage, it directly controls the target camera to switch and send the display image, which simplifies the decision flow, reduces cross-component and cross-process interaction, improves camera-switching efficiency, and thus improves the user experience.
As shown in fig. 18, an embodiment of the present application discloses an electronic device, which may be the above-mentioned mobile phone. The electronic device may specifically include: a touch screen 1801, where the touch screen 1801 includes a touch sensor 1806 and a display screen 1807; one or more processors 1802; a memory 1803; one or more application programs (not shown); and one or more computer programs 1804, and these components may be connected via one or more communication buses 1805. The one or more computer programs 1804 are stored in the memory 1803 and configured to be executed by the one or more processors 1802, and include instructions that may be used to perform the relevant steps in the embodiments described above.
It is to be understood that the electronic devices and the like described above include corresponding hardware structures and/or software modules for performing each function in order to realize the functions described above. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
In the embodiment of the present application, the electronic device and the like may be divided into functional modules according to the method example; for example, each functional module may be divided according to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in the form of hardware, or in the form of a software functional module. It should be noted that the division of the modules in the embodiment of the present application is schematic and is only a logical function division; there may be other division manners in actual implementation.
In the case where each functional module is divided corresponding to each function, the electronic device according to one possible composition of the above embodiment may include: a display unit, a transmission unit, a processing unit, and the like. It should be noted that, for all relevant contents of each step involved in the method embodiment, reference may be made to the functional description of the corresponding functional module, which is not repeated herein.
Embodiments of the present application also provide an electronic device including one or more processors and one or more memories. The one or more memories are coupled to the one or more processors for storing computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform the associated method steps described above to implement the image display method in the embodiments described above.
Embodiments of the present application further provide a computer-readable storage medium, in which computer instructions are stored, and when the computer instructions are executed on an electronic device, the electronic device executes the above related method steps to implement the image display method in the above embodiments.
Embodiments of the present application further provide a computer program product, which includes computer instructions, when the computer instructions are run on an electronic device, cause the electronic device to execute the above related method steps to implement the image display method in the above embodiments.
In addition, embodiments of the present application also provide an apparatus, which may be specifically a chip, a component or a module, and may include a processor and a memory connected to each other; the memory is used for storing computer execution instructions, and when the device runs, the processor can execute the computer execution instructions stored in the memory, so that the device can execute the image display method executed by the electronic equipment in the above method embodiments.
In addition, the electronic device, the computer-readable storage medium, the computer program product, or the apparatus provided in this embodiment are all configured to execute the corresponding method provided above, so that the beneficial effects achieved by the electronic device, the computer-readable storage medium, the computer program product, or the apparatus can refer to the beneficial effects in the corresponding method provided above, and are not described herein again.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, or the part thereof contributing to the prior art, or all or part of those solutions, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as flash memory, a removable hard drive, read-only memory, random-access memory, a magnetic disk, or an optical disk.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. An image display method is applied to an electronic device with a folding screen, the electronic device comprises an inner screen and an outer screen, the inner screen comprises a first screen and a second screen, the electronic device comprises a first camera and a second camera, the first camera is a front camera of the inner screen, the second camera is a front camera of the outer screen, and the method comprises the following steps:
under the condition that the included angle between the first screen and the second screen reaches the angle threshold value of the switching preparation stage, triggering a target camera in the first camera and the second camera to carry out camera initialization operation;
under the condition that the included angle between the first screen and the second screen reaches a preset angle range, triggering the image collected by the target camera to be sent and displayed;
in the process that the folding screen is switched from the unfolded state to the folded state, under the condition that an included angle between the first screen and the second screen is smaller than a third preset angle, triggering the second camera to carry out camera initialization operation; when the included angle between the first screen and the second screen is smaller than a first preset angle and the outer screen presents images, triggering the images collected by the second camera to be presented on the outer screen, wherein the third preset angle is larger than the first preset angle;
in the process that the folding screen is switched from the folded state to the unfolded state, under the condition that an included angle between the first screen and the second screen is larger than a fourth preset angle, triggering the first camera to carry out camera initialization operation; when the included angle between the first screen and the second screen is larger than a second preset angle and the inner screen presents images, triggering the images collected by the first camera to be presented on the inner screen, wherein the fourth preset angle is smaller than the second preset angle.
2. The method of claim 1, wherein the electronic device further comprises an application framework layer and a first layer, wherein the first layer is located between the application framework layer and a hardware unit of the electronic device, and wherein triggering a target camera of the first camera and the second camera to perform a camera initialization operation comprises:
the first layer triggers a target camera in the first camera and the second camera to carry out camera initialization operation;
the triggering of the image collected by the target camera is sent to the display, and the triggering of the image collected by the target camera comprises the following steps:
and the first layer triggers the image collected by the target camera to be displayed.
3. The method of claim 2, wherein the triggering the image captured by the second camera to be presented on the outer screen comprises:
and the first layer triggers the image acquired by the second camera to be presented on the outer screen.
4. The method of claim 2, wherein the triggering the presentation of the image captured by the first camera on the inner screen comprises:
the first layer triggers the image collected by the first camera to be presented on the inner screen.
5. The method of claim 2, wherein triggering the second camera to perform a camera initialization operation comprises:
and the first layer triggers the second camera to carry out camera initialization operation.
6. The method of claim 2, wherein triggering the first camera to perform a camera initialization operation comprises:
and the first layer triggers the first camera to carry out camera initialization operation.
7. The method of claim 2, wherein before the first layer triggers presentation of the image captured by the target camera, the method further comprises:
the first layer acquires a screen state identifier of the electronic equipment;
and the first layer determines a target screen for presenting the image from the inner screen and the outer screen according to the screen state identifier.
8. The method of claim 3, wherein before the first layer triggers the image captured by the second camera to be presented on the outer screen, the method further comprises:
the application program framework layer obtains an included angle between the first screen and the second screen;
under the condition that the included angle between the first screen and the second screen is smaller than a fifth preset angle, the application program framework layer triggers the outer screen to carry out screen initialization operation;
under the condition that an included angle between the first screen and the second screen is smaller than a sixth preset angle, the application program framework layer triggers the electronic equipment to switch the inner screen into the outer screen, and determines that the screen state identifier of the electronic equipment is a first identifier, the first identifier is used for indicating that the outer screen presents a display image, wherein the sixth preset angle is smaller than the fifth preset angle.
9. The method of claim 4, wherein before the first layer triggers the rendering of the image acquired by the first camera on the inner screen, the method further comprises:
the application program framework layer acquires an included angle between the first screen and the second screen;
under the condition that the included angle between the first screen and the second screen is larger than a seventh preset angle, the application program framework layer triggers the inner screen to carry out screen initialization operation;
under the condition that an included angle between the first screen and the second screen is larger than an eighth preset angle, the application program framework layer triggers the electronic equipment to switch the outer screen into the inner screen and determines that the screen state identifier of the electronic equipment is a second identifier, the second identifier is used for indicating that the inner screen presents a display image, wherein the eighth preset angle is larger than the seventh preset angle.
10. The method according to claim 2, wherein before triggering a target camera of the first camera and the second camera to perform a camera initialization operation when an included angle between the first screen and the second screen reaches an angle threshold of a handover preparation phase, the method further comprises:
the first layer determines an included angle between the first screen and the second screen.
11. The method of claim 10, wherein the first layer is a hardware abstraction layer, wherein the electronic device further comprises a kernel layer, wherein the kernel layer is located between the hardware abstraction layer and a hardware unit of the electronic device, and wherein the first layer determines an angle between the first screen and the second screen, comprising:
the hardware abstraction layer acquires information acquired by a sensor through the kernel layer, wherein the sensor comprises at least one of an angle sensor, a gyroscope sensor and an acceleration sensor;
and the hardware abstraction layer determines an included angle between the first screen and the second screen according to the information acquired by the sensor.
12. The method of claim 10, wherein the first layer is a kernel layer, the first layer determining an angle between the first screen and the second screen, comprising:
the inner core layer acquires information acquired by a sensor, wherein the sensor comprises at least one of an angle sensor, a gyroscope sensor and an acceleration sensor;
and the inner core layer determines an included angle between the first screen and the second screen according to the information acquired by the sensor.
13. The method of claim 1, wherein the electronic device is in a video call scenario,
under the condition that the folding screen is in an unfolded state, the inner screen displays an image collected by the first camera; or,
and under the condition that the folding screen is in a folding state, the outer screen displays the image collected by the second camera.
14. The method of claim 1, wherein triggering a target camera of the first camera and the second camera to perform a camera initialization operation comprises:
and triggering the target camera to carry out power-on and/or parameter configuration operation.
15. An electronic device, comprising:
a folding screen comprising at least two screens;
one or more processors;
a memory;
wherein the memory has stored therein one or more computer programs, the one or more computer programs comprising instructions, which when executed by the electronic device, cause the electronic device to perform the image display method of any of claims 1-14.
16. A computer-readable storage medium having instructions stored therein, which when run on an electronic device, cause the electronic device to perform the image display method of any one of claims 1-14.
CN202210185763.2A 2022-02-28 2022-02-28 Image display method and electronic equipment Active CN114257671B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210185763.2A CN114257671B (en) 2022-02-28 2022-02-28 Image display method and electronic equipment
CN202210806906.7A CN116723257A (en) 2022-02-28 2022-02-28 Image display method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210185763.2A CN114257671B (en) 2022-02-28 2022-02-28 Image display method and electronic equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210806906.7A Division CN116723257A (en) 2022-02-28 2022-02-28 Image display method and electronic equipment

Publications (2)

Publication Number Publication Date
CN114257671A CN114257671A (en) 2022-03-29
CN114257671B true CN114257671B (en) 2022-07-19

Family

ID=80800070

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210185763.2A Active CN114257671B (en) 2022-02-28 2022-02-28 Image display method and electronic equipment
CN202210806906.7A Pending CN116723257A (en) 2022-02-28 2022-02-28 Image display method and electronic equipment

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210806906.7A Pending CN116723257A (en) 2022-02-28 2022-02-28 Image display method and electronic equipment

Country Status (1)

Country Link
CN (2) CN114257671B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115037834B (en) * 2022-08-09 2023-02-07 荣耀终端有限公司 Method for triggering leather sheath mode and electronic equipment
CN117812181A (en) * 2022-09-30 2024-04-02 荣耀终端有限公司 Display screen switching method and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201937641U (en) * 2010-12-20 2011-08-17 上海华勤通讯技术有限公司 Multi-screen mobile phone
CN104699391A (en) * 2015-04-07 2015-06-10 联想(北京)有限公司 Electronic equipment and control method for cameras thereof
CN109688253A (en) * 2019-02-28 2019-04-26 维沃移动通信有限公司 A kind of image pickup method and terminal
CN110995990A (en) * 2019-11-27 2020-04-10 维沃移动通信有限公司 Camera control method and electronic equipment
CN111263005A (en) * 2020-01-21 2020-06-09 华为技术有限公司 Display method and related device of folding screen
WO2020253804A1 (en) * 2019-06-21 2020-12-24 华为技术有限公司 Unlocking method and electronic device
CN113840070A (en) * 2021-09-18 2021-12-24 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2992507B1 (en) * 2013-05-02 2018-07-25 Qualcomm Incorporated Methods for facilitating computer vision application initialization
CN107368150A (en) * 2017-06-30 2017-11-21 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN107748687B (en) * 2017-10-10 2019-12-31 晶晨半导体(上海)股份有限公司 Method for controlling startup display picture of intelligent equipment and intelligent equipment


Also Published As

Publication number Publication date
CN114257671A (en) 2022-03-29
CN116723257A (en) 2023-09-08

Similar Documents

Publication Publication Date Title
EP4084450B1 (en) Display method for foldable screen, and related apparatus
CN110536004B (en) Method for applying multiple sensors to electronic equipment with flexible screen and electronic equipment
WO2020211532A1 (en) Display control method and related apparatus
CN112130742B (en) Full screen display method and device of mobile terminal
CN112217923B (en) Display method of flexible screen and terminal
WO2020168965A1 (en) Method for controlling electronic device having folding screen, and electronic device
WO2021052279A1 (en) Foldable screen display method and electronic device
WO2021036771A1 (en) Electronic device having foldable screen, and display method
WO2021063311A1 (en) Display control method for electronic device having foldable screen, and electronic device
CN114257670B (en) Display method of electronic equipment with folding screen
CN112860359A (en) Display method and related device of folding screen
CN114125130B (en) Method for controlling communication service state, terminal device and readable storage medium
US20220174143A1 (en) Message notification method and electronic device
CN114556294A (en) Theme switching method and theme switching device
CN114257671B (en) Image display method and electronic equipment
WO2023103951A1 (en) Display method for foldable screen and related apparatus
WO2021208723A1 (en) Full-screen display method and apparatus, and electronic device
CN113448382A (en) Multi-screen display electronic device and multi-screen display method of electronic device
CN112445276A (en) Folding screen display application method and electronic equipment
CN116048436B (en) Application interface display method, electronic device and storage medium
CN115421619A (en) Window display method and electronic equipment
CN117389496A (en) Folding screen display method, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant