WO2021063074A1 - 一种分屏显示方法与电子设备 - Google Patents

一种分屏显示方法与电子设备 Download PDF

Info

Publication number
WO2021063074A1
WO2021063074A1 PCT/CN2020/102488 CN2020102488W WO2021063074A1 WO 2021063074 A1 WO2021063074 A1 WO 2021063074A1 CN 2020102488 W CN2020102488 W CN 2020102488W WO 2021063074 A1 WO2021063074 A1 WO 2021063074A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
task
screen
split
electronic device
Prior art date
Application number
PCT/CN2020/102488
Other languages
English (en)
French (fr)
Inventor
罗义
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to JP2022519756A priority Critical patent/JP2022549945A/ja
Priority to US17/764,426 priority patent/US20220357845A1/en
Priority to KR1020227011757A priority patent/KR20220058953A/ko
Priority to CN202080034954.8A priority patent/CN113811844A/zh
Priority to EP20871830.4A priority patent/EP4024183B1/en
Publication of WO2021063074A1 publication Critical patent/WO2021063074A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/14Display of multiple viewports
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0486Drag-and-drop
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/001Arbitration of resources in a display system, e.g. control of access to frame buffer by video controller and/or main processor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04803Split screen, i.e. subdividing the display area or the window area into separate subareas
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04808Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/147Digital output to display device ; Cooperation and interconnection of the display device with other functional units using display panels
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2330/00Aspects of power supply; Aspects of display protection and defect management
    • G09G2330/04Display protection
    • G09G2330/045Protection against panel overheating
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00Aspects of interface with display user
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/20Details of the management of multiple sources of image data

Definitions

  • This application relates to the field of terminal technology, and in particular to a split-screen display method and electronic equipment.
  • the electronic device provides a split-screen mode to meet the needs of users to operate multiple applications at the same time. For example, the electronic device can display the windows of two applications at the same time.
  • the electronic device needs to be manually triggered by the user to enter the split screen mode. For example: when the electronic device displays the window of application A in full screen, the user can press and hold the historical task button, the electronic device enters the split-screen mode, and then the user manually selects the application that needs to be displayed in the split-screen window, such as clicking the icon of application B, and finally Make application A and application B be displayed in different windows.
  • the electronic device in the prior art needs to be manually triggered by the user to enter the split screen mode, and the operation process is relatively cumbersome and the intelligence is not high.
  • the embodiment of the present invention provides a split-screen display method and an electronic device, which are used to achieve the technical effect of triggering the electronic device to start the split-screen display mode through a task and improve the intelligence of the electronic device.
  • the implementation of the present invention provides a split-screen display method.
  • the method includes: when a first application is running to perform a first task, displaying a display interface corresponding to the first task on the display screen; A first operation for starting a second task is received on a display interface corresponding to a task; in response to the first operation, a split-screen display mode is started. That is to say, when the electronic device is running the first application to perform the first task, if it receives the first operation for starting the second task on the display interface corresponding to the first task, the electronic device can be triggered to enter through the second task.
  • the split-screen mode can be triggered without the user performing additional manual operations, which achieves the technical effect of triggering the electronic device to perform split-screen display based on the task, and improves the intelligence of the electronic device.
  • the electronic device may display the display interface corresponding to the first task in full screen on the display screen.
  • the effect of split-screen display through tasks can be realized in the full-screen display mode, and the full-screen mode can be switched to the split-screen mode without the user's additional manual operation, which improves the intelligence of interaction.
  • the second task may be a task in a second application, and the second application is different from the first application. That is to say, the technical solution of the embodiment of the present invention can be applied to the scene of cross-application split screen, which improves the user experience.
  • the second task may be a task in the first application. That is to say, the technical solution of the embodiment of the present invention can also be applied to the scene of split screen in the application, which can further improve the intelligence of split screen display and improve user experience.
  • the electronic device when the electronic device starts the split-screen display mode in response to the first operation, it may specifically generate at least two display windows on the display screen, and the at least two display windows include the first A display window and a second display window; a display interface corresponding to the first task is displayed in the first display window, and a display interface corresponding to the second task is displayed in the second display window. That is, when the second task is triggered, the electronic device can directly enter the split screen mode, and display the display interface of the second task in the split screen window, which improves the interaction efficiency of the electronic device.
  • the electronic device when it starts the split-screen display mode in response to the first operation, it may also generate a view of the display interface corresponding to the second task, and display the view corresponding to the first task.
  • the view is displayed on the display interface of the view; in response to the second operation for the view, the position of the split-screen display window is determined according to the second operation; the split-screen display window is displayed at the position of the split-screen display window, and the The display interface corresponding to the second task is displayed in the split-screen display window.
  • the display position of the split screen window can also be determined based on the received second operation, that is, the user can instruct the split screen by performing the second operation
  • the display position of the screen window can further improve the intelligence of electronic device interaction and improve user experience.
  • the second operation may include multiple sub-operation steps to avoid false triggering of the split-screen display and improve the reliability of the split-screen display.
  • the second operation may include a first sub-operation
  • the first sub-operation may be an operation of dragging the view or a copy of the view to a preset orientation (for example, up, down, left and right, etc.).
  • the electronic device determines that the preset orientation is the position of the split-screen display window, and after receiving the first sub-operation, displays the split-screen display window at the position of the split-screen display window.
  • the second operation may also include a second sub-operation before the first sub-operation, and the second sub-operation may be an operation of single-finger long press or two-finger long press on the view.
  • the user needs to press and hold the view with one finger or long press with two fingers to drag the view, which can prevent the user from accidentally triggering the drag view and improve the reliability of the split-screen display.
  • the split screen mode can also be canceled and the display interface of the first task displayed in full screen is restored.
  • the second operation may further include a third sub-operation after the first sub-operation, and the third sub-operation may be an operation of clicking the view.
  • the electronic device displays the display interface corresponding to the second task in the split-screen display window. That is to say, after the user drags the view to the specified solution position, he also needs to perform the operation of clicking the view to confirm that the display interface of the second task is displayed in the split screen window, which can better prevent the user from triggering the drag by mistake View to improve the reliability of split-screen display.
  • an embodiment of the present invention provides an electronic device including a display screen; one or more processors; a memory; multiple applications; and one or more computer programs; wherein the one or more computer programs are stored In the memory, the one or more computer programs include instructions.
  • the electronic device is caused to execute the following steps: For a task, display the display interface corresponding to the first task on the display screen; receive a first operation for starting a second task on the display interface corresponding to the first task; respond to the first task Operate to start the split-screen display mode.
  • the electronic device when the instruction is invoked and executed by the one or more processors, the electronic device is caused to perform the following steps: display the display corresponding to the first task on the display screen in full screen interface.
  • the second task is a task in the first application; or the second task is a task in a second application, and the second application is different from the first application .
  • the electronic device when the instruction is invoked and executed by the one or more processors, the electronic device is caused to perform the following steps: generating at least two display windows on the display screen, and the at least The two display windows include a first display window and a second display window; the display interface corresponding to the first task is displayed in the first display window, and the display interface corresponding to the second task is displayed in the second display window.
  • UI user interface
  • the electronic device when the instruction is invoked and executed by the one or more processors, the electronic device is caused to perform the following steps: generate a view of the display interface corresponding to the second task, and display it in the The view is displayed on the display interface corresponding to the first task; in response to the second operation on the view, the position of the split screen display window is determined according to the second operation; the split screen is displayed at the position of the split screen display window A window is displayed, and a display interface corresponding to the second task is displayed in the split-screen display window.
  • the second operation includes a first sub-operation
  • the first sub-operation is an operation of dragging the view or a copy of the view to a preset orientation
  • the electronic device is caused to perform the following steps: determine that the preset orientation is the position of the split-screen display window; when the instructions are called for execution by the one or more processors , The electronic device is further caused to perform the following steps: after receiving the first sub-operation, display a split-screen display window at the position of the split-screen display window.
  • the second operation further includes a second sub-operation before the first sub-operation, and the second sub-operation is a single-finger long press or a two-finger long press on the view Operation.
  • the second operation further includes a third sub-operation after the first sub-operation, and the third sub-operation is an operation of clicking the view; when the instruction is When the one or more processors call for execution, the electronic device is also caused to perform the following steps: after receiving the third sub-operation, display the display interface corresponding to the second task in the split-screen display window .
  • an embodiment of the present invention provides an electronic device that includes modules/units that execute the first aspect or any one of the possible design methods of the first aspect; these modules/units can be implemented by hardware , It can also be realized by hardware executing corresponding software.
  • an embodiment of the present invention provides a computer storage medium, including computer instructions, which when the computer instructions are executed on an electronic device, cause the electronic device to execute the first aspect or the first aspect of the embodiment of the present invention. Any of the possible designs described in the split-screen display method.
  • the embodiment of the present invention provides a program product, when the program product is run on a computer, the computer executes any possible design such as the first aspect or the first aspect of the embodiment of the present invention Said split-screen display method.
  • an embodiment of the present invention provides a chip, which is coupled with a memory in an electronic device, and is used to call a computer program stored in the memory and execute the first aspect of the embodiment of the present invention and any one of the first aspect thereof Designed technical solution; in the embodiment of the present invention, "coupled” means that two components are directly or indirectly combined with each other.
  • an embodiment of the present invention provides a graphical user interface on an electronic device, the electronic device having a display screen, one or more memories, and one or more processors, and the one or more processors are used for One or more computer programs stored in the one or more memories are executed, and the graphical user interface includes the electronic device executing any possible design such as the first aspect or the first aspect of the embodiment of the present invention The graphical user interface displayed in the split-screen display method.
  • FIG. 1 is a schematic diagram of triggering an electronic device to enter a split screen mode in the prior art
  • FIG. 2 is a schematic diagram of the hardware structure of an electronic device in the implementation of the present invention.
  • Figure 3 is a schematic diagram of the software structure of an electronic device in the implementation of the present invention.
  • FIG. 5 is a schematic diagram of a split-screen display solution in an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of another split-screen display solution in an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of another split-screen display solution in an embodiment of the present invention.
  • FIG. 8A is a schematic diagram of another split-screen display solution in an embodiment of the present invention.
  • FIG. 8B is a schematic diagram of another split-screen display solution in an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of another split-screen display solution in an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of another split-screen display solution in the implementation of the present invention.
  • FIG. 11 is a schematic diagram of another split-screen display solution in an embodiment of the present invention.
  • Figure 12 is a schematic diagram of another split-screen display solution in an embodiment of the present invention.
  • Figure 13 is a schematic diagram of another split-screen display solution in an embodiment of the present invention.
  • FIG. 14 is a schematic diagram of another split-screen display solution in an embodiment of the present invention.
  • 15 is a schematic diagram of another split-screen display solution in an embodiment of the present invention.
  • FIG. 16 is a schematic diagram of the hardware structure of another electronic device in an embodiment of the present invention.
  • the application (application, app for short) involved in the embodiments of the present invention is a software program that can implement one or more specific functions.
  • multiple applications can be installed in a terminal device, for example, instant messaging applications, video applications, audio applications, image capturing applications, and so on.
  • instant messaging applications for example, may include short message applications, WeChat (WeChat), WhatsApp Messenger, Line, photo sharing (Instagram), Kakao Talk, DingTalk, etc.
  • Image shooting applications for example, may include camera applications (system cameras or third-party camera applications).
  • Video applications such as Youtube, Twitter, Douyin, iQiyi, Tencent Video, etc.
  • Audio applications may include Google Music, Kugou Music, Xiami, QQ Music, and so on.
  • the applications mentioned in the following embodiments may be applications that have been installed when the terminal device leaves the factory, or may be applications downloaded from the network or obtained by other terminal devices when the user uses the terminal device.
  • the split screen involved in the embodiment of the present invention refers to a technology in which a software system divides a physical screen into several display areas, which can display multiple application pages at the same time.
  • the display window involved in the embodiment of the present invention also referred to herein as "application window”, “window”, etc., refers to a display area in which a display interface of an application is displayed.
  • One application can correspond to one application window.
  • An application window can be reduced or enlarged, so that the display interface in the application window can be reduced or enlarged.
  • the display screen of the electronic device may only display one window.
  • the window may be displayed on the display screen in full screen or non-full screen, which is not limited in the embodiment of the present invention.
  • full-screen display the window occupies all the effective display area of the display screen (or the window occupies the maximum display area allowed by the system).
  • non-full-screen display the window only occupies part of the effective display area of the display (or the display area occupied by the window is less than the maximum display area allowed by the system).
  • the window occupies The display area is smaller than the maximum display area that the system allows the window to occupy.
  • the display screen of an electronic device can also display multiple windows at the same time.
  • the display area of the display screen is divided into several display areas, and one display area is a display window.
  • the user interface of different applications can be displayed in different display windows.
  • the split-screen window or split-screen display window involved in the embodiment of the present invention refers to a display interface that newly appears after the split-screen display mode is triggered.
  • the electronic device initially displays application A in full screen, that is, the display has only one full-screen window displaying application A.
  • the electronic device display screen is divided into two display areas, which display application A and application A respectively.
  • Application B then the display area corresponding to Application B is called a split screen window.
  • At least one involved in the embodiment of the present invention includes one or more; wherein, multiple refers to greater than or equal to two.
  • references described in this specification to "one embodiment” or “some embodiments”, etc. mean that one or more embodiments of the present invention include a particular feature, structure, or characteristic described in conjunction with the embodiment. Therefore, the sentences “in one embodiment”, “in some embodiments”, “in some other embodiments”, “in some other embodiments”, etc. appearing in different places in this specification are not necessarily All refer to the same embodiment, but mean “one or more but not all embodiments” unless it is specifically emphasized otherwise.
  • the terms “including”, “including”, “having” and their variations all mean “including but not limited to”, unless otherwise specifically emphasized.
  • FIG. 1 is an example of the electronic device entering the split screen mode in the prior art.
  • the user browses "Taobao”
  • he intends to share a certain product with his WeChat friends.
  • Figure 1 (A) the user has copied the Taobao password of the product and intends to send it to his WeChat friends, but he does not want to quit the current one.
  • You can manually start the split-screen mode and open the WeChat interface in the split-screen window to perform sharing operations.
  • the user can trigger the electronic device to enter the split screen mode by long pressing the historical task key, and the display state after entering the split screen mode is as shown in Figure 1 (C).
  • the screen is divided into two display windows from one display window. Taobao is displayed in the left window, and the desktop is displayed in the right window (ie split-screen window). Then the user selects "WeChat” on the desktop to display the WeChat interface in the right window.
  • the user can use “Taobao” to browse products and use "WeChat” to chat with friends, as shown in Figure 1 (D), but this process requires the user to perform multiple operations manually to trigger the split screen mode.
  • the Taobao and WeChat interfaces are displayed separately in the window, which is very cumbersome.
  • There is another way to start the split screen display is to start the split screen by gesture. For example, use the knuckles to slide in the center of the screen to split the screen in two.
  • the display effect after the gesture is activated is similar to (C) in Figure 1.
  • the user selects the application (such as WeChat) that needs to be split-screen display in the split-screen window (desktop). It can be seen that the process of starting the split-screen display through gestures also requires the user to perform more cumbersome operations manually, and it also requires the user to pay more. The cost of learning.
  • the electronic device needs to be manually triggered by the user to enter the split screen mode, and the operation process is relatively cumbersome.
  • the split-screen mode in the prior art can only split the screen for cross-applications (ie, two different applications), such as the aforementioned "Taobao” and "WeChat", which have certain functional limitations. Therefore, the split-screen display solution in the prior art has the problem of low intelligence.
  • embodiments of the present invention provide a split-screen display method and electronic device.
  • When the electronic device is running the first application to perform the first task, it displays the display interface of the first task (the first display interface) on the display screen.
  • When the electronic device receives the operation to start the second task, it directly starts the split-screen display mode: it divides the screen into multiple display windows, displays the first display interface in the first display window, and runs the second task while displaying the display interface of the second task (the second display interface) in the second display window.
  • The second task may be another task in the first application, or a task in another application, such as the second application; there is no limitation here.
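  • The flow described above can be sketched as a simplified, executable model. The class and method names below are illustrative only (the actual implementation is Android framework code, not this sketch):

```python
# Simplified model of the split-screen flow: a first task runs full
# screen; when an operation starts a second task, the screen is split
# into two windows without any manual split-screen gesture.
class Screen:
    def __init__(self):
        self.windows = {"full": None}  # single full-screen window

    def show_fullscreen(self, task):
        self.windows = {"full": task}

    def start_second_task(self, second_task):
        """On the operation that starts the second task, enter
        split-screen mode directly: keep the first task in the first
        window and run the second task in the second window."""
        first_task = self.windows.get("full")
        self.windows = {"first": first_task, "second": second_task}
        return self.windows

screen = Screen()
screen.show_fullscreen("Taobao: product browsing")
layout = screen.start_second_task("WeChat: chat")
assert layout == {"first": "Taobao: product browsing",
                  "second": "WeChat: chat"}
```

  • The point of the model is that the split is triggered by starting the second task itself, not by a separate user gesture.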
  • The embodiments of the present invention can be applied to electronic devices with display screens, for example, portable electronic devices such as mobile phones, folding-screen phones, and tablet computers; non-portable electronic devices such as desktop computers and televisions; wearable electronic devices such as bracelets, watches, and wearable helmets; vehicle-mounted devices; or smart home devices (for example, televisions), which are not limited in the embodiments of the present invention.
  • The following description takes the case where the electronic device is a mobile phone as an example.
  • FIG. 2 shows a schematic structural diagram of the mobile phone.
  • As shown in FIG. 2, the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and so on.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and so on.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the mobile phone 100. The controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache memory.
  • The memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instructions or data again, it can call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and improves the efficiency of the system.
  • The execution of the split-screen display method in the embodiment of the present invention may be controlled by the processor 110, or completed by calling other components, for example, by calling the processing program of the embodiment of the present invention stored in the internal memory 121, or by calling, through the external memory interface 120, the processing program of the embodiment of the present invention stored in a third-party device, to control the display screen 194 to perform the split-screen display operation.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the mobile phone 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save pictures, videos and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the mobile phone 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, and software codes of at least one application (for example, an iQiyi application, a WeChat application, etc.).
  • the data storage area can store data (such as images, videos, etc.) generated during the use of the mobile phone 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the internal memory 121 may be used to store the computer executable program code of the split-screen display method proposed by the embodiment of the present invention, where the executable program code includes instructions.
  • the processor 110 can run the computer executable program code of the split-screen display method stored in the internal memory 121, so that the mobile phone 100 can complete the split-screen display method proposed in the embodiment of the present invention.
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the mobile phone 100, and can also be used to transfer data between the mobile phone 100 and peripheral devices.
  • the charging management module 140 is used to receive charging input from the charger.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the wireless communication function of the mobile phone 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the mobile phone 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 may provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the mobile phone 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • The mobile communication module 150 can receive electromagnetic waves through the antenna 1, perform processing such as filtering and amplifying on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • The wireless communication module 160 can provide wireless communication solutions applied to the mobile phone 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • The wireless communication module 160 may also receive the signal to be sent from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the antenna 1 of the mobile phone 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the mobile phone 100 can communicate with the network and other devices through wireless communication technology.
  • The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, and so on.
  • The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite-based augmentation systems (SBAS).
  • The display screen 194 can be used to display information input by the user, information provided to the user, and various menus of the mobile phone 100; it can also accept user input such as touch operations, and display the display interface of an application, and so on.
  • the display screen 194 includes a display panel.
  • The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the display screen 194 may include a display panel and a touch panel.
  • the display panel can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • A touch panel, also known as a touch screen or touch-sensitive screen, can collect the user's contact or non-contact operations on or near it (for example, operations performed by the user on or near the touch panel with a finger, a stylus, or any other suitable object or accessory; the operations may also include somatosensory operations, and include single-point control operations, multi-point control operations, and other types of operations), and drive the corresponding connection device according to a preset program.
  • the touch panel may include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the user's touch position and posture, detects the signal brought by the input operation, and transmits the signal to the touch controller.
  • The touch controller receives the touch information from the touch detection device, converts it into information that the processor can process, and sends it to the processor 110; it can also receive and execute commands sent by the processor 110.
  • The touch panel can be implemented using multiple technologies such as resistive, capacitive, infrared, and surface acoustic wave technologies, and may also be implemented with any technology developed in the future.
  • The touch panel can cover the display panel, and the user can perform an operation on or near the touch panel according to the content displayed on the display panel (the displayed content includes, but is not limited to, a soft keyboard, a virtual mouse, virtual buttons, icons, etc.). After the touch panel detects the operation on or near it, it transmits the operation to the processor 110 to determine the user input, and the processor 110 then provides corresponding visual output on the display panel according to the user input.
  • In the embodiment of the present invention, when the touch detection device in the touch panel detects a touch operation input by the user, it sends a signal corresponding to the detected touch operation to the touch controller in real time, and the touch controller converts the signal into contact coordinates and sends them to the processor 110.
  • The processor 110 determines, according to the received contact coordinates, that the touch operation is specifically an operation to start the second task, and then, in response to the touch operation input by the user, starts the split-screen mode: it divides the display area of the display screen 194 into multiple display windows (for example, a first display window and a second display window), starts the second task, displays the second task in the second display window, and switches the first task previously displayed in full screen to the first display window.
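  • The touch pipeline described above can be sketched as a minimal model. The control name and its bounding box below are hypothetical; the real pipeline runs through the touch controller, kernel, and framework layers:

```python
# Illustrative model: contact coordinates from the touch controller are
# hit-tested against known controls; a hit on the control that starts
# the second task splits the display into two windows.
CONTROLS = {"start_second_task": (100, 300, 400, 360)}  # x1, y1, x2, y2 (assumed)

def hit_test(x, y):
    """Return the name of the control containing (x, y), if any."""
    for name, (x1, y1, x2, y2) in CONTROLS.items():
        if x1 <= x <= x2 and y1 <= y <= y2:
            return name
    return None

def on_touch(x, y, display):
    """React to contact coordinates: split the display when the
    operation is identified as starting the second task."""
    if hit_test(x, y) == "start_second_task":
        display["windows"] = ["first", "second"]
    return display

assert on_touch(250, 330, {"windows": ["full"]})["windows"] == ["first", "second"]
assert on_touch(10, 10, {"windows": ["full"]})["windows"] == ["full"]
```

  • A touch outside the marked control leaves the full-screen window unchanged; only the recognized "start second task" operation triggers the split.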
  • The mobile phone 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • Multiple display screens 194 can be foldably or flexibly connected. When the display screens 194 are folded, the electronic device is easy to carry; when they are unfolded, the user can conveniently view content on a large screen, improving the user experience.
  • the split-screen display method in the embodiment of the present invention can be applied to one display screen alone, or can be applied to connect multiple display screens to form a large screen as a whole when they are unfolded.
  • the camera 193 is used to capture still images or videos.
  • the camera 193 may include a front camera and a rear camera.
  • the mobile phone 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • The gyro sensor 180B may be used to determine the movement posture of the mobile phone 100. In some embodiments, the angular velocity of the mobile phone 100 around three axes (i.e., the x, y, and z axes) can be determined through the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the air pressure sensor 180C is used to measure air pressure.
  • the mobile phone 100 uses the air pressure value measured by the air pressure sensor 180C to calculate the altitude to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the mobile phone 100 can use the magnetic sensor 180D to detect the opening and closing of the flip holster.
  • the mobile phone 100 when the mobile phone 100 is a flip phone, the mobile phone 100 can detect the opening and closing of the flip according to the magnetic sensor 180D. Furthermore, according to the detected opening and closing state of the holster or the opening and closing state of the flip cover, features such as automatic unlocking of the flip cover are set.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the mobile phone 100 in various directions (generally three axes). When the mobile phone 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and apply to applications such as horizontal and vertical screen switching, pedometers and so on.
  • the mobile phone 100 can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the mobile phone 100 may use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the mobile phone 100 emits infrared light to the outside through the light emitting diode.
  • the mobile phone 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the mobile phone 100.
  • When insufficient reflected light is detected, the mobile phone 100 can determine that there is no object near the mobile phone 100.
  • the mobile phone 100 can use the proximity light sensor 180G to detect that the user holds the mobile phone 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the mobile phone 100 can adaptively adjust the brightness of the display 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the mobile phone 100 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the mobile phone 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 180J is used to detect temperature.
  • the mobile phone 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the mobile phone 100 performs a reduction in the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • the mobile phone 100 when the temperature is lower than another threshold, the mobile phone 100 heats the battery 142 to avoid abnormal shutdown of the mobile phone 100 due to low temperature.
  • the mobile phone 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
  • The touch sensor 180K is also called a "touch panel".
  • The touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 together form what is also called a "touch screen".
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the mobile phone 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the button 190 includes a power-on button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the mobile phone 100 can receive key input, and generate key signal input related to user settings and function control of the mobile phone 100.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback. For example, touch operations that act on different applications (such as photographing, audio playback, etc.) can correspond to different vibration feedback effects.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be connected to and separated from the mobile phone 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195.
  • The components shown in Figure 2 do not constitute a specific limitation on the mobile phone.
  • The mobile phone may also include more or fewer components than those shown in the figure, or combine some components, or split some components, or have a different arrangement of components.
  • the mobile phone 100 shown in FIG. 2 is taken as an example for introduction.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the hardware structure of the mobile phone 100 is described above, and the software structure of the mobile phone 100 is described below.
  • the software system of the mobile phone 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • The embodiment of the present invention takes an Android system with a layered architecture as an example to illustrate the software structure of the mobile phone 100.
  • The layered architecture divides the software into several layers, and each layer has a clear role and division of labor. The layers communicate with each other through software interfaces.
  • The Android system is divided into four layers, which are, from top to bottom, the application (Applications) layer (referred to as the "application layer" in this article), the application framework (Application Framework) layer (referred to as the "framework layer" in this article), the Android runtime and system library layer (referred to as the "system runtime library layer" in this article), and the kernel layer.
  • At least one application program runs in the application layer.
  • These applications can be window programs, system setting programs, contact programs, SMS programs, clock programs, camera applications, etc. included in the operating system, or applications developed by third-party developers, such as instant messaging programs, photo beautification programs, and game programs.
  • the application package in the application layer is not limited to the above examples, and may actually include other application packages, which is not limited in the embodiment of the present invention.
  • the view tree (ViewTree) in the application layer is the view structure in the application interface.
  • one display interface in an application can correspond to one ViewTree.
  • In the embodiment of the present invention, the developer can mark, in the ViewTree corresponding to a display interface of an application, the View controls (such as Buttons, ImageViews, etc.) that respond to drag-and-drop to form a split screen; for example, the draggable WeChat interface view in Figure 13 is a marked View control.
  • the system user interface (system user interface, SystemUI) is a system-level UI component that has system-level global permissions.
  • SystemUI includes a drag starter (DragStarter). DragStarter is used to process the response logic of the user's drag gesture and to decide in which direction (up, down, left, or right) to start a new split-screen window.
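  • DragStarter's direction decision can be sketched as picking the dominant axis of the drag vector. This is an assumption about the decision rule, since the embodiment only states that a direction (up, down, left, right) is chosen:

```python
# Hedged sketch of a drag-direction decision: compare the horizontal
# and vertical displacement of the drag and return the side on which
# the new split-screen window would open. Screen coordinates are
# assumed to grow rightward (x) and downward (y).
def drag_direction(x0, y0, x1, y1):
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):                 # mostly horizontal drag
        return "RIGHT" if dx > 0 else "LEFT"
    return "BOTTOM" if dy > 0 else "UP"    # mostly vertical drag

assert drag_direction(0, 0, 200, 30) == "RIGHT"
assert drag_direction(0, 0, -150, 40) == "LEFT"
assert drag_direction(0, 0, 20, 300) == "BOTTOM"
assert drag_direction(0, 0, 10, -250) == "UP"
```

  • A production implementation would also apply a minimum-distance threshold so that small jitters are not treated as drags; that detail is omitted here.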
  • the framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer is equivalent to a processing center, which decides to let applications in the application layer take actions.
  • In the embodiment of the present invention, an open API can be added to the framework layer for upper-layer applications to call. For example, the function setLaunchSplitScreenLocation(int location) can be added to the original ActivityOption (a parameter) of the split-screen window unified control center.
  • The supported locations are left (LEFT), right (RIGHT), up (UP), and down (BOTTOM).
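  • How an upper-layer application might use the proposed option can be sketched as follows. The constant values and class shape are assumptions; the patent only names the function setLaunchSplitScreenLocation(int location) and the four locations:

```python
# Sketch of the proposed ActivityOption extension: an application sets
# the side on which the split-screen window should launch. The integer
# values of the location constants are illustrative.
LEFT, RIGHT, UP, BOTTOM = range(4)

class ActivityOptions:
    def __init__(self):
        self.split_screen_location = None

    def setLaunchSplitScreenLocation(self, location):
        if location not in (LEFT, RIGHT, UP, BOTTOM):
            raise ValueError("unsupported split-screen location")
        self.split_screen_location = location
        return self

opts = ActivityOptions().setLaunchSplitScreenLocation(RIGHT)
assert opts.split_screen_location == RIGHT
```

  • Exposing the location as an ActivityOption keeps the split-screen decision in the application's launch parameters rather than requiring a user gesture.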
  • The application framework layer in the embodiment of the present invention includes an activity manager service (AMS) module, a window manager service (WMS) module, a split-screen window unified control center (MultiWinSupervisor), an input manager service (InputManagerService, IMS), and so on.
  • Input management is used for input monitoring and processing of touch events.
  • The split-screen window unified control center is used to coordinate the scheduling of split-screen windows, respond upward to call instructions from the application layer, and issue call instructions downward to system services (such as AMS, WMS, etc.). For example, when it detects that a marked View control is dragged, it triggers the execution of related split-screen instructions to AMS, WMS, etc., so as to passively trigger the split screen. For another example, when an upper-layer application actively calls the ActivityOption interface, it triggers the execution of related split-screen instructions to AMS, WMS, etc., so as to actively trigger the split screen.
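  • The two trigger paths of the unified control center can be modeled minimally. The method names and recorded command tuples are illustrative, not the actual MultiWinSupervisor interface:

```python
# Simplified model of MultiWinSupervisor's two trigger paths: passive
# (a marked View control is dragged) and active (an upper-layer
# application calls the ActivityOption interface). Both paths end in
# split-screen instructions issued to AMS/WMS.
class MultiWinSupervisor:
    def __init__(self):
        self.issued = []  # commands sent down to AMS/WMS

    def on_marked_view_dragged(self, view):
        self.issued.append(("AMS/WMS", "split-screen", view, "passive"))

    def on_activity_option_call(self, app):
        self.issued.append(("AMS/WMS", "split-screen", app, "active"))

sup = MultiWinSupervisor()
sup.on_marked_view_dragged("marked_wechat_view")
sup.on_activity_option_call("Taobao")
assert [cmd[3] for cmd in sup.issued] == ["passive", "active"]
```

  • Whichever path fires, the same downstream services (AMS, WMS) carry out the actual window management.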
  • the activity management module is used to: manage the life cycle of each application and the usual navigation back functions, such as controlling the exit, opening, and back of the application.
  • the window management module is used to manage and draw all window interfaces, such as controlling the display size, position, and level of the window.
  • the framework layer may also include functional services, such as content provision, telephone management, resource management, notification management, etc., which are not limited in the embodiment of the present invention.
  • the system runtime layer provides support for the upper layer, that is, the framework layer.
  • the Android operating system will run the C/C++ library contained in the system runtime layer to realize the functions of the framework layer.
  • the kernel layer is the layer between hardware and software. As shown in Figure 3, the kernel layer contains at least the touch screen driver. Of course, during specific implementation, the kernel layer may also include other drivers, such as camera drivers, audio drivers, etc., which are not limited in the embodiment of the present invention.
  • the split-screen display method provided by the embodiment of the present invention includes:
  • The mobile phone displays the display interface of the first task on the display screen and receives a first operation to start the second task, where the display interface corresponding to the second task is different from the display interface corresponding to the first task.
  • In response to the first operation, the mobile phone starts the split-screen display mode, divides the screen into multiple display windows, displays the display interface corresponding to the first task in the first display window, and starts the second task and displays the display interface corresponding to the second task in the second display window.
  • The display screen may display the display interface corresponding to the first task in full screen (also referred to as the first display interface herein; for example, see (A) in FIG. 5), where the first task is to run and display the product browsing interface in the Taobao application.
  • The first operation may be an input operation performed by the user, and the type of the input operation may be touch input, somatosensory input, hovering input, etc., which is not limited here.
  • The second task may be an associated task triggered during the execution of a task by the first application, such as a WeChat sharing task triggered in Taobao, or a task triggered separately by the user, for example, the user inputs the voice command "open WeChat" while browsing Taobao; the embodiment of the present invention does not restrict it here.
  • the second task is mainly an associated task triggered during the execution of the task by the first application as an example for detailed introduction.
  • When the user performs the first operation, the touch panel in the display screen detects the signal brought by the touch operation input by the user, and the sensor converts the detected signal into information that the processor can process and passes it to the processor.
  • The kernel layer running in the processor generates position data corresponding to the operation based on this information (which may specifically include the contact coordinates and the timestamps corresponding to the contact coordinates), and uploads the collected finger position data to the framework layer.
  • The IMS in the framework layer determines the first operation performed by the user (for example, the user clicks the "Go to WeChat and paste Amoy password" control shown in Figure 5(A)), whose intention is to start the second task (that is, to start WeChat). The IMS reports the event of starting the second task to the application in the application layer (that is, Taobao), and the application responds to the user's operation and starts the task of opening WeChat.
  • The number of divided windows can be two. The display screen is divided into two display windows arranged left and right: the first display window on the left is used to display the display interface corresponding to the first task (the Taobao product browsing page), and the second display window on the right is used to display the display interface corresponding to the second task (the WeChat page).
  • The positions of the first display window and the second display window can also be exchanged: the second display window on the left is used to display the display interface corresponding to the second task, and the first display window on the right is used to display the display interface corresponding to the first task.
  • the Taobao interface and WeChat interface in the drawings are only used to illustrate the first task and the second task. In specific implementation, the first task and the second task may also be tasks in other applications.
  • first display window and the second display window may not only be the left and right arrangement as shown in FIG. 5, but also may be the top and bottom arrangement.
  • the display screen is divided into upper and lower windows, with the first display window on the top and the second display window on the bottom.
  • the positions of the first display window and the second display window in FIG. 6 can also be interchanged.
  • the shape and size of the first display window and the second display window may be the same, as shown in FIG. 5 and FIG. 6, for example.
  • the shape and size of the first display window and the second display window can also be different.
  • the second display window is smaller than the area of the first display window, so that users can view and use the first display window more conveniently.
  • the first display interface may be the same, as shown in FIG. 5 and FIG. 6, for example.
  • the shape and size of the first display window and the second display window can also be different.
  • the second display window is smaller than the area of the first display window, so that users can view and use the first display window more conveniently.
  • the user may set the positional relationship and size relationship of the first display window and the second display window after the mobile phone enters the split-screen mode in the system in advance.
  • the number of divided windows can also be greater than two.
  • the display screen is divided into three display windows, where the third display window is used to display the system desktop, so that the user can also monitor and operate the desktop at the same time.
  • the third display window can also be an interface that displays other applications.
  • the third display window displays the interface of the video APP "iqiyi", so that users can use "iqiyi" and "iqiyi” at the same time. "Taobao" and "WeChat”.
  • first display window, the second display window, and the third display window in the embodiment of the present invention can also be implemented in other ways, for example, it can also be arranged up and down, or in other combination arrangements, which are not limited here.
  • shape or size relationship of the first display window, the second display window, and the third display window may be the same or different, and there is no limitation here.
  • the second task and the first task may be tasks in different applications, that is, cross-application split-screen display scenes.
  • the application corresponding to the first task is Taobao
  • the application corresponding to the second task is WeChat.
  • the first task and the second task may be tasks in the same application, that is, a scene where the application is displayed on a split screen.
  • a split-screen startup function can be added to the first application.
  • a split-screen display menu option can be added to the control used to trigger the second task, which will then trigger the function and trigger of the second task.
  • the functions of the split-screen mode are bound, so that the user can click the split-screen display menu option to trigger the simultaneous split-screen display mode and trigger the second task.
  • the display screen may also display the display interface corresponding to the first task in a non-full screen.
  • the display screen is already in split-screen mode, and two or two are already displayed. If there are more than one display window, then after the mobile phone receives the first operation, in a possible design, the mobile phone can further divide the display area corresponding to the first task before receiving the first operation, and then set the The display area is further divided into screens to display the interface of the first task and the interface of the second task. For example, referring to (A) in Figure 11, the mobile phone display screen is divided into a fourth display window and a fifth display window. The fourth display window displays the desktop, and the fifth display window displays the Taobao interface.
  • the mobile phone After the user clicks "Go to WeChat to paste to a friend" in the fifth display window (the first operation), the mobile phone further divides the area where the fifth display window is located into the first display window and the second display window, and then the second display The WeChat interface is displayed in the window.
  • the mobile phone may also adjust the fourth display window, such as adaptively adjusting the size or position of the fourth display window.
  • the mobile phone will receive the first operation after .
  • the display interface of the second task can be displayed in other display windows except the display window corresponding to the first task, or the entire display screen can be divided into screens again.
  • the mobile phone display screen is divided into a fourth display window and a fifth display window.
  • the fourth display window displays the desktop
  • the fifth display window displays the Taobao interface
  • the user displays the fifth display window.
  • the mobile phone After clicking "Go to WeChat and Paste to a Friend" in the window (the first operation), the mobile phone replaces the desktop displayed in the fourth display window with the display interface corresponding to the second task, that is, the WeChat interface (or replaces the fourth display window) Is the second display window, and the second display window is the window displaying the WeChat interface).
  • the mobile phone after replacing the desktop displayed in the fourth display window with the display interface corresponding to the second task, that is, the WeChat interface, the mobile phone can also display the fourth display window and the fifth display window.
  • the display window is adjusted, for example, the size or position of the fourth display window and the fifth display window are adjusted adaptively.
  • the embodiment of the present invention also provides a solution in which the user controls the display position of the split-screen window. Specifically, after the mobile phone receives the first operation and starts the split-screen display mode, it first pops up the view associated with the second task on the first display interface (for example, the thumbnail of the second display interface). At this time, the user can Perform a drag operation on the view, move the view from the screen to the up/down/left/right of the screen, and determine the position of the split screen window (second display window) according to the direction of the view movement.
  • the user controls the display position of the split-screen window. Specifically, after the mobile phone receives the first operation and starts the split-screen display mode, it first pops up the view associated with the second task on the first display interface (for example, the thumbnail of the second display interface). At this time, the user can Perform a drag operation on the view, move the view from the screen to the up/down/left/right of the screen, and determine the position of the split screen window (second display window) according to the direction of the
  • the touch panel in the display screen detects the signal brought by the touch operation input by the user, and the sensor converts the detected signal into information that can be processed by the processor and transmits it to the processor.
  • the core running in the processor The layer generates the position data corresponding to the operation based on this information (specifically, it can include the contact coordinates, the time stamp corresponding to the contact coordinates, etc.), the kernel layer uploads the collected finger position data to the framework layer, and the IMS in the framework layer determines the operation For a preset gesture operation (such as dragging), the gesture is reported to the DragStarter in the SystemUI in the application layer.
  • DragStarter processes the response logic of the user's drag gesture and decides which direction (up, down, left, right) to start New split-screen window, and issue instructions to the unified control center of split-screen window to control the split-screen display, and then the split-screen display can be passively triggered based on user operations.
  • the application developer of the first application can specify the view (View) to be bound to the split screen event through the layout (Layout) configuration file or API call.
  • the mobile phone determines that the second task needs to be started after the first operation is received.
  • the user can drag it to the specified position (support: left-LEFT, right-RIGHT, up-UP, and down-BOTTOM), and the phone is formed in the specified position Split screen window (second display window), and display in the split screen display window.
  • the mobile phone forms a split-screen window in a designated orientation, it may be after receiving the user's operation to click on the view, and then display the display interface of the second task in the split-screen display window.
  • the mobile phone determines where to start the split-screen judgment algorithm:
  • the mobile phone responds to the first operation and starts the split screen mode.
  • a thumbnail pops up on the Taobao interface, as shown in Figure 13(A).
  • the user can drag and drop the thumbnail, such as dragging to the right as shown in (B) in Figure 13, or dragging down as shown in (C) in Figure 13, or as shown in Figure 13 Drag upward as shown in (D), or drag to the left as shown in (E) in Figure 13, and so on.
  • the mobile phone determines the position of the split-screen window according to the moving direction of the view.
  • the display screen when the moving direction is downward, the display screen is divided into two windows arranged up and down, and the second task (WeChat) is displayed.
  • the second display window of) is on the bottom, and the first display window for displaying the first task (Taobao) is on the top.
  • the positions of the split-screen display windows of other drag methods are also similar, and will not be repeated here.
  • a copy of the view (shadow) that can be dragged by the user is generated.
  • the user can The operation of dragging the view copy to the specified position is performed to realize the effect of displaying the second task on a split screen. If the user clicks on the view after the display screen displays the view, the mobile phone will open the new page normally (that is, exit the interface of the first task, and display the interface of the second task in a full-screen overlay). In this way, after the user performs the first operation, the user can choose whether to perform split-screen display according to requirements, which can improve user experience.
  • the initial display position of the view may also be at the upper/lower/left/right edge position of the first display interface, for example, as shown in FIG. 14. In this way, the occlusion of the main display content in the first display interface by the view can be further reduced.
  • the display windows of the entire display screen can be re-divided according to the user's drag operation, and the first display windows are displayed in the two newly divided display windows.
  • the display interface of the first task or the display interface that received the user input operation for the last time, namely Taobao
  • the display interface corresponding to the second task are as shown in FIG. 15.
  • the effect of flexibly updating the split-screen display window according to user needs can be achieved, and user experience can be further improved.
  • the area of the view does not exceed a set threshold area, for example, does not exceed one-third, one-fifth, etc. of the area of the first display interface. In this way, the occlusion of the content of the first display interface by the view can be reduced, and the user experience can be improved.
  • the view may be displayed in a semi-transparent manner, which can further reduce the occlusion of the main display content in the first display interface by the view and improve the visual effect.
  • the method provided by the embodiment of the present invention is introduced from the perspective of the electronic device (mobile phone 100) as the execution subject.
  • the terminal device may include a hardware structure and/or a software module, and realize the above-mentioned functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether a certain function of the above-mentioned functions is executed by a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraint conditions of the technical solution.
  • an embodiment of the present invention also provides an electronic device 1600 for executing the steps of the split-screen display method in the foregoing embodiment of the present invention.
  • the electronic device 1600 includes: a display screen 1601; one or more processors 1602; a memory 1603; a plurality of applications; and one or more computer programs; wherein the one or more computer programs are stored in the In the memory, the one or more computer programs include instructions, and when the instructions are invoked and executed by the one or more processors, the electronic device realizes the above-mentioned split-screen display method in the embodiment of the present invention.
  • the processor 1602 may be a general-purpose processor, a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (ASIC), a ready-made programmable gate array (field programmable gate array, FPGA), or other Programmable logic devices, discrete gates or transistor logic devices, discrete hardware components.
  • DSP digital signal processor
  • ASIC application specific integrated circuit
  • FPGA field programmable gate array
  • Programmable logic devices discrete gates or transistor logic devices, discrete hardware components.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in combination with the embodiments of the present invention may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in random access memory (RAM), flash memory, read-only memory (read-only memory, ROM), programmable read-only memory, or electrically erasable programmable memory, registers, etc. mature in the field Storage medium.
  • RAM random access memory
  • flash memory read-only memory
  • read-only memory read-only memory
  • ROM programmable read-only memory
  • electrically erasable programmable memory registers, etc. mature in the field Storage medium.
  • the storage medium is located in the memory, and the processor reads the instructions in the memory and completes the steps of the above method in combination with its hardware.
  • the processor 1601 may be 110
  • the display screen 1601 may be the display screen 194
  • the memory 1603 may be the internal memory 121.
  • an embodiment of the present invention also provides an electronic device, which includes modules/units that execute the above-mentioned split-screen display method of the embodiments of the present invention; these modules/units can be implemented by hardware or The corresponding software is implemented by hardware.
  • an embodiment of the present invention also provides a computer storage medium, including computer instructions, which when the computer instructions run on an electronic device, cause the electronic device to execute the above-mentioned split-screen display in the embodiment of the present invention. method.
  • the embodiment of the present invention also provides a program product, which when the program product runs on a computer, causes the computer to execute the above-mentioned split-screen display method in the embodiment of the present invention.
  • an embodiment of the present invention also provides a chip, which is coupled with a memory in an electronic device, and is used to call a computer program stored in the memory and execute the above-mentioned split-screen display method in the embodiment of the present invention;
  • “coupled” means that two components are directly or indirectly combined with each other.
  • the embodiments of the present invention also provide a graphical user interface on an electronic device, the electronic device having a display screen, one or more memories, and one or more processors, the one or more Each processor is configured to execute one or more computer programs stored in the one or more memories, and the graphical user interface includes the electronic device to execute the above-mentioned split-screen display method in the embodiment of the present invention.
  • the term “when” can be interpreted as meaning “if" or “after” or “in response to determining" or “in response to detecting".
  • the phrase “when determining" or “if detected (statement or event)” can be interpreted as meaning “if determined" or “in response to determining" or “when detected (Condition or event stated)” or “in response to detection of (condition or event stated)”.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions may be transmitted from a website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium, (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state hard disk).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

本发明实施例提供一种分屏显示方法与电子设备,所述方法包括:电子设备在运行第一应用执行第一任务时,在显示屏上显示所述第一任务对应的显示界面;在所述第一任务对应的显示界面上接收用于启动第二任务的第一操作;响应于所述第一操作,启动分屏显示模式。与现有技术相比,本发明实施例无需用户执行额外的手动操作就可以触发分屏模式,实现了基于任务触发电子设备进入分屏显示的技术效果,提高了电子设备的智能性。

Description

一种分屏显示方法与电子设备
相关申请的交叉引用
本申请要求在2019年09月30日提交中国专利局、申请号为201910938898.X、申请名称为“一种分屏显示方法与电子设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及终端技术领域,尤其涉及一种分屏显示方法与电子设备。
背景技术
为了提升视觉体验,各种电子设备的屏幕越来越大。电子设备提供分屏模式,以满足用户同时操作多个应用的需求。例如,电子设备可以同时显示两个应用的窗口。
现有技术中电子设备进入分屏模式需要用户手动触发。例如:电子设备全屏显示应用A的窗口时,用户可长按历史任务键,电子设备进入分屏模式,然后用户手动选择需要在分屏窗口中进行显示的应用,例如点击应用B的图标,最终使得应用A和应用B分别在不同窗口中显示。
可见,现有技术中电子设备进入分屏模式需要用户手动触发,操作过程较为繁琐,智能性不高。
发明内容
本发明实施例提供一种分屏显示方法与电子设备,用于实现通过任务触发电子设备启动分屏显示模式,提高电子设备的智能性的技术效果。
第一方面,本发明实施提供一种分屏显示方法,所述方法包括:在运行第一应用执行第一任务时,在显示屏上显示所述第一任务对应的显示界面;在所述第一任务对应的显示界面上接收用于启动第二任务的第一操作;响应于所述第一操作,启动分屏显示模式。也就是说,电子设备在运行第一应用执行第一任务时,如果在第一任务对应的显示界面上接收用于启动第二任务的第一操作,则可以通过第二任务来触发电子设备进入分屏显示模式,与现有技术相比,无需用户执行额外的手动操作就可以触发分屏模式,实现了基于任务触发电子设备进行分屏显示的技术效果,提高了电子设备的智能性。
在一种可能的设计中,在运行第一应用执行第一任务时,电子设备可以在所述显示屏上全屏显示所述第一任务对应的显示界面。这样,可以实现在全屏显示模式下通过任务发出分屏显示的效果,无需用户额外执行手动操作就可以将全屏模式切换为分屏模式,提高了交互的智能性。
一种可能的设计中,第二任务可以为第二应用中的任务,且所述第二应用与所述第一应用不同。也就是说,本发明实施例技术方案可以适用于跨应用分屏的场景,提高了提用户体验。
另一种可能的设计中,所述第二任务可以为所述第一应用中的任务。也就是说,本发 明实施例技术方案还可以适用于应用内分屏的场景,可进一步提高分屏显示的智能性,提高用户体验。
在一种可能的设计中,电子设备在响应于所述第一操作,启动分屏显示模式时,具体可以是在显示屏上生成至少两个显示窗口,所述至少两个显示窗口包括第一显示窗口和第二显示窗口;在所述第一显示窗口中显示所述第一任务对应的显示界面,在所述第二显示窗口中显示所述第二任务对应的显示界面。也就是说,在第二任务被触发时,电子设备可以直接进入分屏模式,并将第二任务的显示界面显示在分屏窗口中,提高了电子设备的交互效率。
在一种可能的设计中,电子设备在响应于所述第一操作,启动分屏显示模式时,还可以是生成所述第二任务对应的显示界面的视图,并在所述第一任务对应的显示界面上显示所述视图;响应于针对所述视图的第二操作,根据所述第二操作确定分屏显示窗口位置;在所述分屏显示窗口位置上显示分屏显示窗口,并在所述分屏显示窗口中显示所述第二任务对应的显示界面。也就是说,在基于第一操作(启动第二任务)触发分屏模式开启后,还可以基于接收到的第二操作确定分屏窗口的显示位置,也即用户可以通过执行第二操作指示分屏窗口的显示位置,可以进一步提高电子设备交互的智能性,提高用户体验。
在一种可能的设计中,所述第二操作可以包括多个子操作步骤,以避免分屏显示被误触发,提高分屏显示的可靠性。
示例性地,第二操作可以包括第一子操作,所述第一子操作可以为将所述视图或者所述视图的副本拖动至预设方位(例如上、下、左右等)的操作。相应的,电子设备确定所述预设方位为分屏显示窗口位置,并在接收到所述第一子操作之后,在所述分屏显示窗口位置上显示分屏显示窗口。
进一步的,第二操作还可以包括在所述第一子操作之前的第二子操作,所述第二子操作可以为对所述视图进行单指长按或者双指长按的操作。也就是说,用户需要单指长按或者双指长按视图后才可以拖拽视图,这样可以避免用户误触发拖动视图,提高分屏显示的可靠性。此外,如果用户是单击视图,则还可以撤销进入分屏模式,恢复全屏显示第一任务的显示界面。
进一步的,所述第二操作还可以包括在所述第一子操作之后的第三子操作,所述第三子操作可以为单击所述视图的操作。相应的,电子设备在接收到所述第三子操作之后,才在所述分屏显示窗口中显示所述第二任务对应的显示界面。也就是说,用户在拖拽视图至指定方案方位后,还需要执行单击所述视图的操作确定在分屏窗口中显示第二任务的显示界面,这样可以更好地避免用户误触发拖动视图,提高分屏显示的可靠性。
第二方面,本发明实施例提供一种电子设备,包括显示屏;一个或多个处理器;存储器;多个应用;以及一个或多个计算机程序;其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述一个或多个处理器调用执行时,使得所述电子设备执行以下步骤:在运行第一应用执行第一任务时,在所述显示屏上显示所述第一任务对应的显示界面;在所述第一任务对应的显示界面上接收用于启动第二任务的第一操作;响应于所述第一操作,启动分屏显示模式。
在一种可能的设计中,当所述指令被所述一个或多个处理器调用执行时,使得所述电子设备执行以下步骤:在所述显示屏上全屏显示所述第一任务对应的显示界面。
在一种可能的设计中,所述第二任务为所述第一应用中的任务;或者所述第二任务为 第二应用中的任务,且所述第二应用与所述第一应用不同。
在一种可能的设计中,当所述指令被所述一个或多个处理器调用执行时,使得所述电子设备执行以下步骤:在所述显示屏上生成至少两个显示窗口,所述至少两个显示窗口包括第一显示窗口和第二显示窗口;在所述第一显示窗口中显示所述第一任务对应的显示界面,在所述第二显示窗口中显示所述第二任务对应的显示界面。
在一种可能的设计中,当所述指令被所述一个或多个处理器调用执行时,使得所述电子设备执行以下步骤:生成所述第二任务对应的显示界面的视图,并在所述第一任务对应的显示界面上显示所述视图;响应于针对所述视图的第二操作,根据所述第二操作确定分屏显示窗口位置;在所述分屏显示窗口位置上显示分屏显示窗口,并在所述分屏显示窗口中显示所述第二任务对应的显示界面。
在一种可能的设计中,所述第二操作包括第一子操作,所述第一子操作为将所述视图或者所述视图的副本拖动至预设方位的操作;当所述指令被所述一个或多个处理器调用执行时,使得所述电子设备执行以下步骤:确定所述预设方位为分屏显示窗口位置;当所述指令被所述一个或多个处理器调用执行时,还使得所述电子设备执行以下步骤:在接收到所述第一子操作之后,在所述分屏显示窗口位置上显示分屏显示窗口。
在一种可能的设计中,所述第二操作还包括在所述第一子操作之前的第二子操作,所述第二子操作为对所述视图进行单指长按或者双指长按的操作。
在一种可能的设计中,所述第二操作还包括在所述第一子操作之后的第三子操作,所述第三子操作为单击所述视图的操作;当所述指令被所述一个或多个处理器调用执行时,还使得所述电子设备执行以下步骤:在接收到所述第三子操作之后,在所述分屏显示窗口中显示所述第二任务对应的显示界面。
第三方面,本发明实施例提供一种电子设备,所述电子设备包括执行上述第一方面或者第一方面的任意一种可能的设计的方法的模块/单元;这些模块/单元可以通过硬件实现,也可以通过硬件执行相应的软件实现。
第四方面,本发明实施例提供一种计算机存储介质,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如本发明实施例第一方面或第一方面的任一种可能的设计中所述的分屏显示方法。
第五方面,本发明实施例提供一种程序产品,当所述程序产品在计算机上运行时,使得所述计算机执行如本发明实施例第一方面或第一方面的任一种可能的设计中所述的分屏显示方法。
第六方面,本发明实施例提供一种芯片,所述芯片与电子设备中的存储器耦合,用于调用存储器中存储的计算机程序并执行本发明实施例第一方面及其第一方面任一可能设计的技术方案;本发明实施例中“耦合”是指两个部件彼此直接或间接地结合。
第七方面,本发明实施例提供一种电子设备上的图形用户界面,所述电子设备具有显示屏、一个或多个存储器、以及一个或多个处理器,所述一个或多个处理器用于执行存储在所述一个或多个存储器中的一个或多个计算机程序,所述图形用户界面包括所述电子设备执行如本发明实施例第一方面或第一方面的任一种可能的设计中所述的分屏显示方法时显示的图形用户界面。
附图说明
图1为现有技术中触发电子设备进入分屏模式的示意图;
图2为本发明实施中一种电子设备的硬件结构示意图;
图3为本发明实施中一种电子设备的软件结构示意图;
图4为本发明实施例中一种分屏显示方案的流程图;
图5为本发明实施例中一种分屏显示的方案的示意图;
图6为本发明实施例中另一种分屏显示的方案的示意图;
图7为本发明实施例中另一种分屏显示的方案的示意图;
图8A为本发明实施例中另一种分屏显示的方案的示意图;
图8B为本发明实施例中另一种分屏显示的方案的示意图;
图9为本发明实施例中另一种分屏显示的方案的示意图;
图10为本发明实施中另一种分屏显示的方案的示意图;
图11为本发明实施例中另一种分屏显示的方案的示意图;
图12为本发明实施例中另一种分屏显示的方案的示意图;
图13为本发明实施例中另一种分屏显示的方案的示意图;
图14为本发明实施例中另一种分屏显示的方案的示意图;
图15为本发明实施例中另一种分屏显示的方案的示意图;
图16为本发明实施例中另一种电子设备的硬件结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整的描述。
首先,对本发明中的部分用语进行解释说明,以便于本领域技术人员理解。
1)、本发明实施例涉及的应用程序(application,简称app),简称应用,为能够实现某项或多项特定功能的软件程序。通常,终端设备中可以安装多个应用,例如,即时通讯类应用、视频类应用、音频类应用、图像拍摄类应用等等。其中,即时通信类应用,例如可以包括短信应用、微信(WeChat)、WhatsApp Messenger、连我(Line)、照片分享(Instagram)、Kakao Talk、钉钉等。图像拍摄类应用,例如可以包括相机应用(***相机或第三方相机应用)。视频类应用,例如可以包括Youtube、Twitter、抖音、爱奇艺,腾讯视频等等。音频类应用,例如可以包括Google Music、酷狗音乐、虾米、QQ音乐等等。以下实施例中提到的应用,可以是终端设备出厂时已安装的应用,也可以是用户在使用终端设备的过程中从网络下载或其他终端设备获取的应用。
2)本发明实施例涉及的分屏,是指软件***将一块物理屏幕,划分成若干显示区域,可以同时显示多个应用页面的技术。
3)、本发明实施例涉及的显示窗口,在本文中亦称“应用窗口”、“窗口”等,是指一个显示区域,该显示区域内显示一个应用的显示界面。一个应用可以对应一个应用窗口。一个应用窗口可以被缩小或放大,以使的该应用窗口内的显示界面缩小或放大。
电子设备的显示屏可以只显示一个窗口。当显示屏只显示一个窗口时,该窗口可以在显示屏上全屏显示,也可以非全屏显示,本发明实施例不做限制。在全屏显示时,该窗口 占满显示屏的全部有效显示区域(或该窗口占用***允许的窗口能占用的最大显示区域)。在非全屏显示时,该窗口只占用显示屏的部分有效显示区域(或该窗口占用的显示区域小于***允许的窗口能占用的最大显示区域),比如手机在单手操作模式下,窗口占用的显示区域小于***允许的窗口能占用的最大显示区域。
电子设备的显示屏上也可以同时显示多个窗口,例如,手机、平板电脑等电子设备的分屏模式,显示屏的显示区域被划分为划分成若干显示区域,一个显示区域即为一个显示窗口,不同的显示窗口中可以显示不同应用程序的用户界面。
本发明实施例涉及的分屏窗口或者分屏显示窗口,是指触发分屏显示模式后新出现的显示界面。例如电子设备一开始是全屏显示应用A,即显示屏只有一个显示应用A的全屏窗口,触发电子设备进入分屏显示模式后,电子设备显示屏被划分为两个显示区域,分别显示应用A和应用B,那么应用B对应的显示区域就叫做分屏窗口。
4)本发明实施例涉及的至少一个,包括一个或者多个;其中,多个是指大于或者等于两个。
另外,需要理解的是,在本发明的描述中,“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。
以下实施例中所使用的术语只是为了描述特定实施例的目的,而并非旨在作为对本发明的限制。如在本发明的说明书和所附权利要求书中所使用的那样,单数表达形式“一个”、“一种”、“所述”、“上述”、“该”和“这一”旨在也包括例如“一个或多个”这种表达形式,除非其上下文中明确地有相反指示。还应当理解,在本发明实施例中,“一个或多个”是指一个、两个或两个以上;“和/或”,描述关联对象的关联关系,表示可以存在三种关系;例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A、B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本发明的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
下面,介绍本发明的背景技术。
请参见图1,为现有技术中电子设备进入分屏模式的一种示例。用户在浏览“淘宝”时,意图将某件商品分享给微信好友,如图1中的(A)所示,用户复制了该件商品的淘口令,意图发送给微信好友,但是又不想退出当前的显示界面,则可以手动启动分屏模式的方法,在分屏窗口中打开微信界面执行分享操作。
例如,如图1中的(B)所示,用户可以通过长按历史任务键可触发电子设备进入分屏模式,进入分屏模式后的显示状态如图1中的(C)所示,显示屏幕从一个显示窗口划分为两个显示窗口,淘宝在左边的窗口中显示,右边窗口(即分屏窗口)中显示桌面,然后用户在桌面上选择“微信”,使得右边窗口显示微信界面,这样用户就可以在使用“淘宝”浏览商品的同时,使用“微信”和好友聊天,如图1中的(D)所示,但是该过程需要用户手动执行多个操作才能触发分屏模式在两个窗口中分别显示淘宝和微信界面,十分繁琐。另外还有一种启动分屏显示的方法为手势启动分屏,例如,使用指关节在屏幕中央滑动, 可以将屏幕一分为二,手势启动后的显示效果和图1中的(C)类似,之后用户再在分屏窗口(桌面)中选择需要进行分屏显示的应用(如微信),可见通过手势启动分屏显示的过程同样需要用户手动执行较多繁琐操作,并且还要求用户付出较高的学习成本。
通过上述可知,现有技术中电子设备进入分屏模式需要用户手动触发,操作过程较为繁琐。并且,现有技术中的分屏模式只能针对跨应用(即两个不同的应用)进行分屏,例如上述的“淘宝”和“微信”,存在一定功能限制。因此,现有技术中的分屏显示方案存在智能性低的问题。
鉴于此,本发明实施例提供一种分屏显示方法和电子设备。电子设备在运行第一应用执行第一任务的过程中,在显示屏上显示第一任务的显示界面(第一显示界面),当接收到启动第二任务的操作时,直接启动分屏显示模式,将屏幕划分为多个显示窗口,然后在第一显示窗口内显示所述第一显示界面,运行第二任务并在第二显示窗口显示所述第二任务的显示界面(第二显示界面),其中所述第二任务可以是第一应用中的其他任务,也可以是其他应用如第二应用中的任务,这里不做限制。这样,就可以实现通过任务来触发电子设备进入分屏显示模式,分屏显示第二任务的界面的技术效果,与现有技术相比,无需用户手动触发分屏模式,提高了交互的智能性。另外,由于所述第一任务、所述第二任务除了可以分别是两个不同应用程序的任务外(即跨应用分屏),所述第一任务、所述第二任务还可以是同一应用程序内的任务,因此本方案还可以实现应用内分屏的技术效果,可进一步提高分屏显示的智能性,提高用户体验。具体的技术方案将在后文详细介绍。
本发明实施例可以应用在认可具有显示屏的电子设备中,例如,手机、折叠屏手机、平板电脑等便捷式电子设备,也可以是台式电脑、电视机等非便捷式电子设备,还可以是穿戴电子设备,例如手环、手表、穿戴设备头盔等,还可以是车载设备、智能家居设备(例如、电视机)等,本发明实施例不作限定。
以下介绍电子设备、用于这样的电子设备的图形用户界面(graphical user interface,GUI)、和用于使用这样的电子设备的实施例。以下实施例中以电子设备是手机为例,图2示出了手机的结构示意图。如图2所示,手机100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。其中,控制器可以是手机100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。处理器110中 还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了***的效率。本发明实施例中的分屏显示方法的执行可以由处理器110来控制或调用其他部件来完成,比如调用内部存储器121中存储的本发明实施例的处理程序,或者通过外部存储器接口120调用第三方设备中存储的本发明实施例的处理程序,来控制显示屏194执行分屏显示的操作。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展手机100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将图片,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行手机100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作***,以及至少一个应用程序(例如爱奇艺应用,微信应用等)的软件代码等。存储数据区可存储手机100使用过程中所产生的数据(例如图像、视频等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。内部存储器121可以用于存储本发明实施例所提出的分屏显示方法的计算机可执行程序代码,所述可执行程序代码包括指令。处理器110可以通过运行存储在内部存储器121的该分屏显示方法的计算机可执行程序代码,从而使得手机100可以完成本发明实施例提出的分屏显示方法。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为手机100充电,也可以用于手机100与***设备之间传输数据。充电管理模块140用于从充电器接收充电输入。电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。
手机100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。天线1和天线2用于发射和接收电磁波信号。手机100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在手机100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块160可以提供应用在手机100上的包括无线局域网(wireless local area  networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星***(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,手机100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得手机100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯***(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位***(global positioning system,GPS),全球导航卫星***(global navigation satellite system,GLONASS),北斗卫星导航***(beidou navigation satellite system,BDS),准天顶卫星***(quasi-zenith satellite system,QZSS)和/或星基增强***(satellite based augmentation systems,SBAS)。
显示屏194可用于显示由用户输入的信息或提供给用户的信息以及手机100的各种菜单,另外还可以接受用户输入,比如用户的触摸操作。显示应用的显示界面等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。
显示屏194可包括显示面板,以及触控面板。其中,显示面板可以采用液晶显示器(liquid crystal display,LCD)、有机发光二极管(organicLight-emitting diode,OLED)等形式来配置显示面板。触控面板,也称为触摸屏、触敏屏等,可收集用户在其上或附近的接触或者非接触操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板上或在触控面板附近的操作,也可以包括体感操作;该操作包括单点控制操作、多点控制操作等操作类型),并根据预先设定的程式驱动相应的连接装置。
可选的,触控面板可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位、姿势,并检测输入操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成处理器能够处理的信息,再送给处理器110,并能接收处理器110发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板,也可以采用未来发展的任何技术实现触控面板。进一步的,触控面板可覆盖显示面板,用户可以根据显示面板显示的内容(该显示内容包括但不限于,软键盘、虚拟鼠标、虚拟按键、图标等等),在显示面板上覆盖的触控面板上或者附近进行操作,触控面板检测到在其上或附近的操作后,传送给处理器110以确定用户输入,随后处理器110根据用户输入在显示面板上提供相应的视觉输出。
例如,在本发明实施例中,触控面板中的触摸检测装置检测到用户输入的触摸操作后,将检测到的触摸操作对应的信号实时发送的触摸控制器,触摸控制器将信号转换成触点坐标发送给处理器110,处理器110根据接收到的触点坐标确定该触摸操作具体为启动第二任务的操作,然后响应用户输入的触摸操作,启动分屏模式,将显示屏194的显示区域划分为多个显示窗口(例如划分为第一显示窗口和第二显示窗口),并启动第二任务,将第二任务放至第二显示窗口中显示,将先前全屏显示的第一任务切换至第一显示窗口中显示。此部分方案的具体实现方式,将在后文进行详细介绍。
在一些实施例中,手机100可以包括1个或N个显示屏194,1个或N个显示屏194可以折叠连接,也可以柔性连接,多个显示屏194折叠时便于电子设备便携,多个显示屏194展开连接时,便于用户使用大屏幕观看提高用户体验,其中,N为大于1的正整数。当电子设备包括多个显示屏时,本发明实施例中的分屏显示方法可以单独适用于一个显示屏,也可以应用于多个显示屏展开时连接形成大屏幕整体。
摄像头193用于捕获静态图像或视频。摄像头193可以包括前置摄像头和后置摄像头。
手机100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。陀螺仪传感器180B可以用于确定手机100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定手机100围绕三个轴(即,x,y和z轴)的角速度。
陀螺仪传感器180B可以用于拍摄防抖。气压传感器180C用于测量气压。在一些实施例中,手机100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。磁传感器180D包括霍尔传感器。手机100可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当手机100是翻盖机时,手机100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。加速度传感器180E可检测手机100在各个方向上(一般为三轴)加速度的大小。当手机100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。手机100可以通过红外或激光测量距离。在一些实施例中,在拍摄场景下,手机100可以利用距离传感器180F测距以实现快速对焦。接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。手机100通过发光二极管向外发射红外光。手机100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定手机100附近有物体。当检测到不充分的反射光时,手机100可以确定手机100附近没有物体。手机100可以利用接近光传感器180G检测用户手持手机100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。手机100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测手机100是否在口袋里,以防误触。指纹传感器180H用于采集指纹。手机100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,手机100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,手机100降低位于温度传感器180J附近的处理器的性能,以便降低功耗,实施热保护。在另一些实施例中,当温度低于另一阈值时,手机100对电池142加热,以避免低温导致手机100异常关机。在其他一些实施例中,当温度低于又一阈值时,手机100对电池142的输出电压执行升压,以避免低温导致的异常关机。
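上述温度处理策略的分支逻辑可以用如下Python代码示意。需要说明的是,这只是一个概念性草图:其中的阈值数值与函数名均为假设,并非手机100的真实实现。

```python
def thermal_policy(temp_c, high=45.0, low=0.0, very_low=-10.0):
    """根据温度读数返回应执行的温度处理动作列表(概念性示意)。

    阈值 high / low / very_low 均为假设值,仅用于说明策略分支。
    """
    actions = []
    if temp_c > high:
        # 温度超过阈值:降低传感器附近处理器的性能,降低功耗实施热保护
        actions.append("throttle_cpu")
    if temp_c < very_low:
        # 温度低于又一阈值:对电池输出电压执行升压,避免异常关机
        actions.append("boost_battery_voltage")
    elif temp_c < low:
        # 温度低于另一阈值:对电池加热,避免低温导致异常关机
        actions.append("heat_battery")
    return actions
```

例如,`thermal_policy(50)`返回`["throttle_cpu"]`,而常温下(如25摄氏度)返回空列表,表示无需任何温度处理动作。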
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于手机100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。手机100可以接收按键输入,产生与手机100的用户设置以及功能控制有关的键信号输入。马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口195用于连接SIM卡。SIM卡可以通过***SIM卡接口195,或从SIM卡接口195拔出,实现和手机100的接触和分离。
可以理解的是,图2所示的部件并不构成对手机的具体限定,手机还可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。以下的实施例中,以图2所示的手机100为例进行介绍。
应理解的是,本发明实施例示意的结构并不构成对手机100的具体限定。在本发明另一些实施例中,手机100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
以上介绍了手机100的硬件结构,下面介绍手机100的软件架构。
具体的,手机100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的安卓(android)系统为例,示例性说明手机100的软件结构。分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。
请参见图3,在一些可能的实施例中,将Android系统分为四层,从上至下分别为应用程序(Applications)层(本文中简称“应用层”),应用程序框架(Application Framework)层(本文中简称“框架层”),安卓运行时(Android runtime)和系统库层(本文中简称“系统运行库层”),以及内核层。
其中,应用程序层中运行有至少一个应用程序,这些应用程序可以是操作系统自带的窗口(Window)程序、系统设置程序、联系人程序、短信程序、时钟程序、相机应用等;也可以是第三方开发者所开发的应用程序,比如即时通信程序、相片美化程序、游戏程序等。当然,在具体实施时,应用程序层中的应用程序包不限于以上举例,实际还可以包括其它应用程序包,本发明实施例对此不做限制。
如图3所示,本发明实施例中,应用层中的视图树(ViewTree)是应用界面中的视图结构,一般而言,一个应用内的一个显示界面可以对应一个ViewTree。在本发明实施例中,开发者可以在一个应用的一个显示界面对应的ViewTree中标记响应拖拽形成分屏的视图(View)控件(如按钮(Button)、图片控件(ImageView)等),例如,图13中被拖拽的微信界面视图,即为被标记的View控件。系统用户界面(system user interface,SystemUI)是系统级的UI组件,拥有系统级的全局权限,SystemUI中包括拖拽启动器(DragStarter),DragStarter用于处理用户拖拽手势的响应逻辑,决策在哪个方位(上、下、左、右)启动新的分屏窗口。
框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数,相当于一个处理中心,决定让应用层中的应用程序做出动作。在本发明实施例中,框架层中可以增加一个开放的API供上层应用调用,例如在分屏窗口统一控制中心原有的活动选项参数ActivityOption上增加函数setLaunchSplitScreenLocation(int location),参数location为要启动分屏的方位,支持左-LEFT、右-RIGHT、上-UP、下-BOTTOM四种。当应用启动新的Activity,例如启动本文中所述的第二任务(打开微信)时,调用启动函数startActivity,并将由ActivityOption生成的数据包(Bundle)传入,这样就可以实现应用主动触发分屏显示。
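上述调用流程可以用如下Python代码示意。需要强调的是,setLaunchSplitScreenLocation是本文提出的新增接口,并非Android框架的现有API;此处的类名、函数名与数据结构均为用Python建模的概念性示意,不代表真实实现。

```python
LEFT, RIGHT, UP, BOTTOM = "LEFT", "RIGHT", "UP", "BOTTOM"

class ActivityOption:
    """模拟携带分屏方位信息的活动选项参数(概念性示意)。"""

    def __init__(self):
        self.launch_split_screen_location = None

    def set_launch_split_screen_location(self, location):
        # 对应文中提出的 setLaunchSplitScreenLocation(int location)
        assert location in (LEFT, RIGHT, UP, BOTTOM)
        self.launch_split_screen_location = location
        return self

    def to_bundle(self):
        # 模拟将 ActivityOption 生成数据包(Bundle)后传入 startActivity
        return {"split_screen_location": self.launch_split_screen_location}

def start_activity(task_name, bundle=None):
    """模拟 startActivity:若 Bundle 中携带分屏方位,则主动触发分屏显示。"""
    if bundle and bundle.get("split_screen_location"):
        return ("split_screen", bundle["split_screen_location"], task_name)
    # 未携带分屏方位时,按普通方式全屏启动新任务
    return ("full_screen", None, task_name)
```

例如,应用在打开第二任务前先设置方位:`opts = ActivityOption().set_launch_split_screen_location(RIGHT)`,再调用`start_activity("WeChat", opts.to_bundle())`,即表示在屏幕右侧的分屏窗口中启动第二任务。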
如图3所示,本发明实施例中应用程序框架层包括活动管理(activity manager service,AMS)模块和窗口管理(window manager service,WMS)模块、分屏窗口统一控制中心(MultiWinSupervisor)、输入管理(input manager service,IMS)等。
输入管理用于触摸事件的输入监听与处理。
分屏窗口统一控制中心用于:统筹分屏窗口的调度,向上响应应用层的调用指令,向下层下发系统服务(如AMS,WMS等)的调用指令。例如,在检测到被标记的View控件被拖拽时,向AMS,WMS等触发执行分屏的相关指令,以此实现被动触发分屏。又如,在上层应用主动调用ActivityOption接口时,向AMS,WMS等触发执行分屏的相关指令,以此实现主动触发分屏。活动管理模块用于:管理各个应用程序的生命周期以及通常的导航回退功能,比如控制应用程序的退出、打开、后退。
窗口管理模块用于管理与绘制所有的窗口界面,比如控制窗口的显示大小、位置以及层级等。
当然,在具体实施时框架层还可以包括功能服务,比如内容提供,电话管理,资源管理,通知管理等,本发明实施例对此不做限制。
系统运行库层为上层即框架层提供支撑,当框架层被使用时,安卓操作系统会运行系统运行库层中包含的C/C++库以实现框架层要实现的功能。
内核层是硬件和软件之间的层。如图3所示,内核层至少包含触摸屏驱动。当然,在具体实施时内核层还可以包括其他驱动,如摄像头驱动,音频驱动等,本发明实施例对此不做限制。
应理解,图3中的软件架构对应的软件程序和/或模块存储在图2所示的手机100中的内部存储器121中。
下面以本发明实施例应用在上述手机100中为例对本发明实施例提供的方案进行详细介绍。
请参见图4,本发明实施例提供的分屏显示方法,包括:
S401、手机在运行第一应用执行第一任务的过程中,在显示屏上显示所述第一任务的显示界面,接收启动第二任务的第一操作;其中,所述第二任务对应的显示界面和所述第一任务对应的显示界面不同。
S402、手机响应于所述第一操作,启动分屏显示模式,将屏幕划分为多个显示窗口,在第一显示窗口内显示所述第一任务对应的显示界面,启动所述第二任务并在第二显示窗口中显示所述第二任务对应的显示界面。
在本发明实施例中,手机接收到第一操作之前,显示屏可以是全屏显示第一任务对应的显示界面(在本文中又称为第一显示界面)。例如,参见图5中的(A),第一任务为运行显示淘宝应用中的商品浏览界面。
所述第一操作可以是用户执行的输入操作,输入操作的类型可以是触摸输入、语音输入、体感输入、悬浮输入等,这里不做限制。
所述第二任务可以是所述第一应用执行任务过程中触发的关联任务,例如在淘宝中触发微信分享任务,也可以是用户单独触发的任务,例如用户在浏览淘宝时输入语音指令“打开微信”,本发明实施例这里不做限制。在后文中主要以所述第二任务是第一应用执行任务过程中触发的关联任务为例进行详细介绍。
示例性的,参考图3,当显示屏在全屏显示第一任务的显示界面时,显示屏中的触控面板检测到用户输入的触摸操作带来的信号,传感器将检测到的信号转换成处理器能够处理的信息并传递给处理器,处理器中运行的内核层基于该信息生成操作对应的位置数据(具体可以包括触点坐标、触点坐标对应的时间戳等),内核层将采集到的手指位置数据上传至框架层,框架层中的IMS判定用户执行的第一操作(例如,用户执行的操作为图5中的(A)所示的点击“去微信粘贴淘口令”的控件的操作)意图为启动第二任务(即启动微信),IMS则将启动第二任务的事件上报给应用层中的应用(即淘宝),则该应用在响应用户的操作启动打开微信的任务时,还可以主动调用框架层中的API接口,以触发分屏窗口统一控制中心下发相关指令给AMS、WMS等,启动系统进入分屏模式,并在分屏窗口中打开第二任务的显示界面,这样可以实现应用基于任务主动触发分屏显示的效果。
一些可能的设计中,划分的窗口数量可以是两个,例如图5中的(B)所示,显示屏被划分为左右排列的两个显示窗口,左边为第一显示窗口,用于显示第一任务对应的显示界面(淘宝商品浏览页面),右边为第二显示窗口,用于显示第二任务对应的显示界面(微信页面)。当然,具体实施时,第一显示窗口和第二显示窗口的位置可以相互交换,例如左边为第二显示窗口,用于显示第二任务对应的显示界面,右边为第一显示窗口,用于显示第一任务对应的显示界面。应理解,附图中的淘宝界面和微信界面仅仅是用于对第一任务和第二任务进行举例,在具体实施时,第一任务和第二任务还可以是其他应用程序中的任务。
需要说明的是,第一显示窗口和第二显示窗口的位置关系,除了可以是图5所示的左右排列外,还可以是上下排列。例如图6所示,显示屏被划分为上下两个窗口,第一显示窗口在上,第二显示窗口在下。当然,图6中第一显示窗口和第二显示窗口的位置也可以互换。
在本发明实施例中,第一显示窗口和第二显示窗口的形状和大小可以相同,例如图5和图6所示。当然,第一显示窗口和第二显示窗口的形状和大小也可以不同,例如图7所示,第二显示窗口的面积小于第一显示窗口,这样用户可以更加方便查看和使用第一显示窗口中的第一显示界面。
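上述几种窗口划分方式(左右/上下排列、等大或不等大)可以用如下Python函数示意。窗口以矩形(x, y, 宽, 高)表示,参数direction表示第一显示窗口所在方位,ratio为第一显示窗口所占屏幕比例;函数名与参数均为概念性示意,并非真实实现。

```python
def split_display(width, height, direction="LEFT", ratio=0.5):
    """将整个显示区域划分为(第一显示窗口, 第二显示窗口)两个矩形。

    direction 取 LEFT/RIGHT/UP/BOTTOM,表示第一显示窗口所在方位;
    ratio=0.5 时两窗口大小相同(对应图5、图6),
    其他比例时两窗口大小不同(对应图7)。
    """
    if direction in ("LEFT", "RIGHT"):
        # 左右排列:按比例切分宽度,高度不变
        w1 = int(width * ratio)
        left = (0, 0, w1, height)
        right = (w1, 0, width - w1, height)
        return (left, right) if direction == "LEFT" else (right, left)
    else:
        # 上下排列:按比例切分高度,宽度不变
        h1 = int(height * ratio)
        top = (0, 0, width, h1)
        bottom = (0, h1, width, height - h1)
        return (top, bottom) if direction == "UP" else (bottom, top)
```

例如,`split_display(1080, 2340, "LEFT", 0.5)`返回左右各占一半的两个窗口;将ratio改为2/3,则第一显示窗口更宽,对应图7中两窗口面积不同的布局。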
作为一种可选的实施方式,用户可以预先在系统中设置手机进入分屏模式后第一显示窗口和第二显示窗口的位置关系和大小关系。
一些可能的设计中,划分的窗口数量还可以大于两个。例如,参见图8A,显示屏被划分为三个显示窗口,其中第三显示窗口用于显示系统桌面,这样用户还可以同时对桌面进行监控和操作。当然,第三显示窗口还可以显示其他应用的界面,例如,参见图8B,第三显示窗口中显示的是视频APP“爱奇艺”的界面,这样用户可以同时使用“爱奇艺”、“淘宝”和“微信”。应理解,本发明实施例第一显示窗口、第二显示窗口以及第三显示窗口的位置关系还可以有其它实现方式,例如还可以是上下排列,或者其他组合排列方式,这里不做限制。第一显示窗口、第二显示窗口以及第三显示窗口的形状或大小关系可以是相同,也可以是不同,这里不做限制。
一些可能的设计中,第二任务和第一任务可以是不同应用中的任务,即跨应用分屏显示场景。例如图5~图8B所示的实施例,第一任务对应的应用为淘宝,而第二任务对应的应用为微信。另一些可能的设计中,第一任务和第二任务可以是同一应用程序内的任务,即应用内分屏显示的场景。
例如,参见图9中的(A),用户在使用淘宝应用在浏览商品时,意图和客服聊天,则点击商品浏览页面下方的“客服”图标(第一操作),手机检测到该操作后,直接启动分屏模式,将显示屏划分为多个显示窗口,在第一显示窗口中显示先前的商品浏览页面,同时启动并运行客服任务,在第二显示窗口中显示客服页面,这样用户就可以在浏览商品的同时与客服聊天。
作为一种可选的实施方式,可以在第一应用中增加分屏启动功能,例如在用于触发第二任务的控件中增加分屏显示的菜单选项,进而将触发第二任务的功能和触发分屏模式的功能进行绑定,这样用户点击该分屏显示菜单选项,就可以同时实现对分屏显示模式的触发和对第二任务的触发。例如,参见图10,以第一应用为微信、且第一界面为微信聊天管理界面为例,可以在任意聊天对象的右键菜单中增加“分屏窗口打开”的选项,这样,用户通过长按某个聊天对象调出对应的右键菜单后,点击“分屏窗口打开”的选项(第一操作),就可以触发手机进入分屏显示模式,并在分屏窗口中显示与该聊天对象的聊天界面。这样,可以向用户给出可以分屏显示的提示,进一步提升用户体验。
作为一种可选的实施方式,手机接收到第一操作之前,显示屏也可以是非全屏显示第一任务对应的显示界面,比如显示屏已经处于分屏模式下,显示有两个或者两个以上的显示窗口。那么手机在接收到第一操作之后,在一种可能的设计中,手机可以对接收到第一操作之前所述第一任务对应的显示区域进行进一步的划分,然后在该显示区域中分屏显示第一任务的界面和第二任务的界面。例如,请参见图11中的(A),手机显示屏被划分为第四显示窗口和第五显示窗口,其中第四显示窗口中显示的是桌面,第五显示窗口中显示的是淘宝界面,用户在第五显示窗口中点击“去微信粘贴给好友”(第一操作)后,手机将第五显示窗口所在的区域进一步划分成第一显示窗口和第二显示窗口,然后在第二显示窗口中显示微信界面。作为一种可选的实施方式,在具体实施时,手机还可以对第四显示窗口进行调整,比如适应性地调整第四显示窗口的大小或者位置等。
在一种可替换的设计中,如果手机在接收到第一操作之前已经处于分屏模式下,即已经显示有两个或者两个以上的显示窗口,那么手机在接收到第一操作之后,可以在除了所述第一任务对应的显示窗口外的其他显示窗口中显示第二任务的显示界面,或者对整个显示屏重新进行分屏划分。例如,请参见图12,手机显示屏被划分为第四显示窗口和第五显示窗口,其中第四显示窗口中显示的是桌面,第五显示窗口中显示的是淘宝界面,用户在第五显示窗口中点击“去微信粘贴给好友”(第一操作)后,手机将第四显示窗口中显示的桌面替换显示为第二任务对应的显示界面即微信界面(或者说是将第四显示窗口替换为第二显示窗口,第二显示窗口为显示微信界面的窗口)。作为一种可选的实施方式,在具体实施时,将第四显示窗口中显示的桌面替换显示为微信界面后,手机还可以对第四显示窗口和第五显示窗口进行调整,比如适应性地调整第四显示窗口和第五显示窗口的大小或者位置等。
为了进一步提高用户体验,本发明实施例还提供由用户控制分屏窗口显示位置的方案。具体的,手机在接收到所述第一操作并启动分屏显示模式后,首先在第一显示界面上弹出第二任务关联的视图(例如第二显示界面的缩略图),此时用户可以对视图执行拖拽操作,将视图从屏幕内往屏幕上/下/左/右移动,手机根据视图移动的方向确定分屏窗口(第二显示窗口)的位置。
示例性地,显示屏中的触控面板检测到用户输入的触摸操作带来的信号,传感器将检测到的信号转换成处理器能够处理的信息并传递给处理器,处理器中运行的内核层基于该信息生成操作对应的位置数据(具体可以包括触点坐标、触点坐标对应的时间戳等),内核层将采集到的手指位置数据上传至框架层,框架层中的IMS判定该操作为预设手势操作(例如拖拽),则将该手势上报给应用层中SystemUI中的DragStarter,DragStarter处理用户拖拽手势的响应逻辑,决策在哪个方位(上、下、左、右)启动新的分屏窗口,并下发指令给分屏窗口统一控制中心,控制分屏显示,这样就可以实现基于用户操作被动触发分屏显示。
具体的,第一应用的应用开发者可通过布局(Layout)配置文件或者API调用的方式,指定要绑定分屏事件的视图(View),手机在接收到第一操作确定需要启动第二任务时,在第一任务的界面上显示该视图,用户可将其拖动到指定方位(支持左-LEFT、右-RIGHT、上-UP、下-BOTTOM四种),手机在该指定方位形成分屏窗口(第二显示窗口),并将第二任务对应的显示界面在该分屏窗口中显示。可选的,手机在指定方位形成分屏窗口后,也可以是在收到用户单击视图的操作之后,才将第二任务的显示界面在分屏显示窗口中显示。
其中,手机确定在哪个位置启动分屏的判断算法为:
(1)当被拖动的UI视图坐标位置(x,y)超过分屏响应区域(如图中虚线区域所示),则触发对应方向的分屏;
(2)如果水平方向与垂直方向都超过分屏响应区域,则通过记录上次的坐标点(x0,y0),计算一段时间内的位移量dx=|x-x0|,dy=|y-y0|,通过比较dx与dy来判断当前手指移动的方向是水平方向更明显还是竖直方向更明显。如果dx>dy,说明手指水平方向移动更明显,若x>x0,则是向右移动,触发右分屏,反之则触发左分屏;如果dx<dy,说明手指竖直方向移动更明显,若y>y0,则向下移动,触发下分屏,反之触发上分屏。
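上述判断算法可以用如下Python代码实现,作为一个可直接运行的示意(dx与dy相等时文中未规定处理方式,此处按竖直方向处理,这是实现上的一个假设选择):

```python
def decide_split_direction(x, y, x0, y0):
    """根据被拖动视图当前坐标(x, y)与上次记录坐标(x0, y0),
    判断手指移动更明显的方向,返回要启动分屏窗口的方位。

    对应文中算法:dx = |x - x0|, dy = |y - y0|;
    dx > dy 时水平方向移动更明显,x > x0 触发右分屏,否则左分屏;
    否则竖直方向移动更明显,y > y0 触发下分屏,否则上分屏。
    """
    dx = abs(x - x0)
    dy = abs(y - y0)
    if dx > dy:
        # 水平方向移动更明显
        return "RIGHT" if x > x0 else "LEFT"
    # 竖直方向移动更明显(dx == dy 时也归入此分支)
    return "BOTTOM" if y > y0 else "UP"
```

例如,手指从(0, 0)拖动到(100, 10)时,dx=100明显大于dy=10且x增大,因此触发右分屏;拖动到(10, -100)时竖直方向更明显且y减小,触发上分屏。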
例如,参见图13,用户在淘宝界面上执行第一操作后,手机响应第一操作,启动分屏模式,此时在淘宝界面上弹出一个缩略图,如图13中的(A)所示。此时用户可以对该缩略图进行拖拽操作,例如图13中的(B)所示的向右拖拽,或者如图13中的(C)所示的向下拖拽,或者如图13中的(D)所示的向上拖拽,或者如图13中的(E)所示的向左拖拽,等等。手机根据视图的移动方向确定分屏窗口的位置,例如图13中的(F)所示,移动方向为向下时,将显示屏幕划分为上下排列的两个窗口,其中显示第二任务(微信)的第二显示窗口在下,而显示第一任务(淘宝)的第一显示窗口在上;其他拖拽方向对应的分屏显示窗口位置与此类似,这里不再一一赘述。
在一种可能的设计中,显示屏显示该视图后,当用户对该视图进行单指长按或双指长按时,生成一个可被用户拖动的视图副本(影子),此时用户可以对视图副本执行拖动到指定方位的操作,进而实现分屏显示第二任务的效果。如果显示屏显示该视图后,用户是单击该视图,手机则还是正常打开新页面(即退出第一任务的界面,以全屏覆盖形式显示第二任务的界面)。这样,用户在执行第一操作之后,可以根据需求选择是否要进行分屏显示,可以提高用户体验。
一种可替换的实施方式中,所述视图的初始显示位置还可以是在第一显示界面的上/下/左/右边缘位置,例如图14所示。这样可以进一步减少视图对第一显示界面中的主要显示内容的遮挡。
在一种可能的设计中,如果第一操作之前手机已经进入分屏模式,则可以根据用户的拖拽操作重新划分整个显示屏的显示窗口,且在新划分的两个显示窗口中分别显示第一任务的显示界面(或者说是最后一次接收到用户输入操作的显示界面,即淘宝)和第二任务对应的显示界面,如图15所示。这样,可以实现根据用户需求灵活更新分屏显示窗口的效果,进一步提高用户体验。当然,还可以是在第一任务(淘宝)原先的显示区域上进行进一步的分屏划分,并保持原来的分屏窗口(桌面)的显示不变。
作为一种可选的实施方式,所述视图的面积不超过设定阈值面积,例如不超过第一显示界面的面积的三分之一、五分之一等。这样可以减少视图对第一显示界面的内容的遮挡,提高用户体验。
作为一种可选的实施方式,所述视图可以是以半透明的方式显示,这样可以进一步减少视图对第一显示界面中的主要显示内容的遮挡,提高视觉效果。
上述实施例中,从电子设备(手机100)作为执行主体的角度对本发明实施例提供的方法进行了介绍。为了实现上述本发明实施例提供的方法中的各功能,终端设备可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
基于相同的技术构思,本发明实施例还提供一种电子设备1600,用以执行本发明上述实施例中的分屏显示方法的步骤。请参见图16,电子设备1600包括:显示屏1601;一个或多个处理器1602;存储器1603;多个应用;以及一个或多个计算机程序;其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述一个或多个处理器调用执行时,使得所述电子设备实现本发明实施例上述的分屏显示方法。
其中,处理器1602可以是通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本发明实施例中公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器,或者该处理器也可以是任何常规的处理器等。结合本发明实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存取存储器(random access memory,RAM)、闪存、只读存储器(read-only memory,ROM)、可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的指令,结合其硬件完成上述方法的步骤。
当图16所示的电子设备是上述手机100时,处理器1602可以是处理器110,显示屏1601可以是显示屏194,存储器1603可以是内部存储器121。
装置的具体实现方式的相关特征可以参照上文的方法部分,此处不再赘述。
基于相同的技术构思,本发明实施例还提供了一种电子设备,所述电子设备包括执行本发明实施例上述的分屏显示方法的模块/单元;这些模块/单元可以通过硬件实现,也可以通过硬件执行相应的软件实现。
基于相同的技术构思,本发明实施例还提供了一种计算机存储介质,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行本发明实施例上述的分屏显示方法。
基于相同的技术构思,本发明实施例还提供了一种程序产品,当所述程序产品在计算机上运行时,使得所述计算机执行本发明实施例上述的分屏显示方法。
基于相同的技术构思,本发明实施例还提供了一种芯片,所述芯片与电子设备中的存储器耦合,用于调用存储器中存储的计算机程序并执行本发明实施例上述的分屏显示方法;本发明实施例中“耦合”是指两个部件彼此直接或间接地结合。
基于相同的技术构思,本发明实施例还提供了一种电子设备上的图形用户界面,所述电子设备具有显示屏、一个或多个存储器、以及一个或多个处理器,所述一个或多个处理器用于执行存储在所述一个或多个存储器中的一个或多个计算机程序,所述图形用户界面包括所述电子设备执行本发明实施例上述的分屏显示方法。
本发明的各个实施例可以单独使用,也可以相互结合使用,以实现不同的技术效果。
以上实施例仅用以对本发明的技术方案进行详细介绍,其说明只是用于帮助理解本发明实施例的方法,不应理解为对本发明实施例的限制。本技术领域的技术人员可轻易想到的变化或替换,都应涵盖在本发明实施例的保护范围之内。
上述实施例中所用,根据上下文,术语“当…时”可以被解释为意思是“如果…”或“在…后”或“响应于确定…”或“响应于检测到…”。类似地,根据上下文,短语“在确定…时”或“如果检测到(所陈述的条件或事件)”可以被解释为意思是“如果确定…”或“响应于确定…”或“在检测到(所陈述的条件或事件)时”或“响应于检测到(所陈述的条件或事件)”。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本发明实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线)或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如软盘、硬盘、磁带)、光介质(例如DVD)、或者半导体介质(例如固态硬盘)等。
为了解释的目的,前面的描述是通过参考具体实施例来进行描述的。然而,上面的示例性的讨论并非意图是详尽的,也并非意图要将本发明限制到所公开的精确形式。根据以上教导内容,很多修改形式和变型形式都是可能的。选择和描述实施例是为了充分阐明本发明的原理及其实际应用,以由此使得本领域的其他技术人员能够充分利用具有适合于所构想的特定用途的各种修改的本发明以及各种实施例。

Claims (19)

  1. 一种分屏显示方法,其特征在于,所述方法包括:
    在运行第一应用执行第一任务时,在显示屏上显示所述第一任务对应的显示界面;
    在所述第一任务对应的显示界面上接收用于启动第二任务的第一操作;
    响应于所述第一操作,启动分屏显示模式。
  2. 如权利要求1所述的方法,其特征在于,在运行第一应用执行第一任务时,在显示屏上显示所述第一任务对应的显示界面,包括:
    在所述显示屏上全屏显示所述第一任务对应的显示界面。
  3. 如权利要求1所述的方法,其特征在于,
    所述第二任务为所述第一应用中的任务;或者
    所述第二任务为第二应用中的任务,且所述第二应用与所述第一应用不同。
  4. 如权利要求1所述的方法,其特征在于,响应于所述第一操作,启动分屏显示模式,包括:
    在显示屏上生成至少两个显示窗口,所述至少两个显示窗口包括第一显示窗口和第二显示窗口;在所述第一显示窗口中显示所述第一任务对应的显示界面,在所述第二显示窗口中显示所述第二任务对应的显示界面。
  5. 如权利要求1所述的方法,其特征在于,响应于所述第一操作,启动分屏显示模式,包括:
    生成所述第二任务对应的显示界面的视图,并在所述第一任务对应的显示界面上显示所述视图;
    响应于针对所述视图的第二操作,根据所述第二操作确定分屏显示窗口位置;
    在所述分屏显示窗口位置上显示分屏显示窗口,并在所述分屏显示窗口中显示所述第二任务对应的显示界面。
  6. 如权利要求5所述的方法,其特征在于,所述第二操作包括第一子操作,所述第一子操作为将所述视图或者所述视图的副本拖动至预设方位的操作;
    根据所述第二操作确定分屏显示窗口位置,包括:确定所述预设方位为分屏显示窗口位置;
    所述方法还包括:在接收到所述第一子操作之后,在所述分屏显示窗口位置上显示分屏显示窗口。
  7. 如权利要求6所述的方法,其特征在于,所述第二操作还包括在所述第一子操作之前的第二子操作,所述第二子操作为对所述视图进行单指长按或者双指长按的操作。
  8. 如权利要求6所述的方法,其特征在于,所述第二操作还包括在所述第一子操作之后的第三子操作,所述第三子操作为单击所述视图的操作;
    所述方法还包括:在接收到所述第三子操作之后,在所述分屏显示窗口中显示所述第二任务对应的显示界面。
  9. 一种电子设备,其特征在于,包括显示屏;一个或多个处理器;存储器;多个应用;以及一个或多个计算机程序;
    其中所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述一个或多个处理器调用执行时,使得所述电子设备执行以下步骤:
    在运行第一应用执行第一任务时,在所述显示屏上显示所述第一任务对应的显示界面;
    在所述第一任务对应的显示界面上接收用于启动第二任务的第一操作;
    响应于所述第一操作,启动分屏显示模式。
  10. 如权利要求9所述的电子设备,其特征在于,当所述指令被所述一个或多个处理器调用执行时,使得所述电子设备执行以下步骤:在所述显示屏上全屏显示所述第一任务对应的显示界面。
  11. 如权利要求9所述的电子设备,其特征在于,
    所述第二任务为所述第一应用中的任务;或者
    所述第二任务为第二应用中的任务,且所述第二应用与所述第一应用不同。
  12. 如权利要求9所述的电子设备,其特征在于,当所述指令被所述一个或多个处理器调用执行时,使得所述电子设备执行以下步骤:在所述显示屏上生成至少两个显示窗口,所述至少两个显示窗口包括第一显示窗口和第二显示窗口;在所述第一显示窗口中显示所述第一任务对应的显示界面,在所述第二显示窗口中显示所述第二任务对应的显示界面。
  13. 如权利要求9所述的电子设备,其特征在于,当所述指令被所述一个或多个处理器调用执行时,使得所述电子设备执行以下步骤:生成所述第二任务对应的显示界面的视图,并在所述第一任务对应的显示界面上显示所述视图;
    响应于针对所述视图的第二操作,根据所述第二操作确定分屏显示窗口位置;
    在所述分屏显示窗口位置上显示分屏显示窗口,并在所述分屏显示窗口中显示所述第二任务对应的显示界面。
  14. 如权利要求13所述的电子设备,其特征在于,所述第二操作包括第一子操作,所述第一子操作为将所述视图或者所述视图的副本拖动至预设方位的操作;
    当所述指令被所述一个或多个处理器调用执行时,使得所述电子设备执行以下步骤:确定所述预设方位为分屏显示窗口位置;
    当所述指令被所述一个或多个处理器调用执行时,还使得所述电子设备执行以下步骤:在接收到所述第一子操作之后,在所述分屏显示窗口位置上显示分屏显示窗口。
  15. 如权利要求14所述的电子设备,其特征在于,所述第二操作还包括在所述第一子操作之前的第二子操作,所述第二子操作为对所述视图进行单指长按或者双指长按的操作。
  16. 如权利要求14所述的电子设备,其特征在于,所述第二操作还包括在所述第一子操作之后的第三子操作,所述第三子操作为单击所述视图的操作;
    当所述指令被所述一个或多个处理器调用执行时,还使得所述电子设备执行以下步骤:在接收到所述第三子操作之后,在所述分屏显示窗口中显示所述第二任务对应的显示界面。
  17. 一种计算机存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-8中任一项所述的分屏显示方法。
  18. 一种程序产品,其特征在于,当所述程序产品在计算机上运行时,使得所述计算机执行如权利要求1-8中任一项所述的分屏显示方法。
  19. 一种电子设备上的图形用户界面,其特征在于,所述电子设备具有显示屏、一个或多个存储器、以及一个或多个处理器,所述一个或多个处理器用于执行存储在所述一个或多个存储器中的一个或多个计算机程序,所述图形用户界面包括所述电子设备执行如权利要求1至8中任一项所述的分屏显示方法时显示的图形用户界面。
PCT/CN2020/102488 2019-09-30 2020-07-16 一种分屏显示方法与电子设备 WO2021063074A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP2022519756A JP2022549945A (ja) 2019-09-30 2020-07-16 分割スクリーン表示方法及び電子デバイス
US17/764,426 US20220357845A1 (en) 2019-09-30 2020-07-16 Split-screen display method and electronic device
KR1020227011757A KR20220058953A (ko) 2019-09-30 2020-07-16 화면 분할 디스플레이 방법 및 전자 디바이스
CN202080034954.8A CN113811844A (zh) 2019-09-30 2020-07-16 一种分屏显示方法与电子设备
EP20871830.4A EP4024183B1 (en) 2019-09-30 2020-07-16 Method for split-screen display and electronic apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910938898.XA CN110865744B (zh) 2019-09-30 2019-09-30 一种分屏显示方法与电子设备
CN201910938898.X 2019-09-30

Publications (1)

Publication Number Publication Date
WO2021063074A1 true WO2021063074A1 (zh) 2021-04-08

Family

ID=69652287

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/102488 WO2021063074A1 (zh) 2019-09-30 2020-07-16 一种分屏显示方法与电子设备

Country Status (6)

Country Link
US (1) US20220357845A1 (zh)
EP (1) EP4024183B1 (zh)
JP (1) JP2022549945A (zh)
KR (1) KR20220058953A (zh)
CN (2) CN110865744B (zh)
WO (1) WO2021063074A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114690977A (zh) * 2021-04-22 2022-07-01 广州创知科技有限公司 一种基于弹性波的交互唤起方法及装置
USD969828S1 (en) * 2021-01-12 2022-11-15 Beijing Zitiao Network Technology Co., Ltd. Display screen or portion thereof with a graphical user interface

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110865744B (zh) * 2019-09-30 2021-12-14 华为技术有限公司 一种分屏显示方法与电子设备
CN113497835B (zh) * 2020-04-01 2023-10-20 华为技术有限公司 多屏交互方法、电子设备及计算机可读存储介质
CN111694475B (zh) * 2020-04-27 2022-04-22 华为技术有限公司 终端控制方法、装置及终端设备
CN111897418B (zh) * 2020-07-02 2021-08-17 珠海格力电器股份有限公司 一种分屏显示方法、装置、设备及存储介质
CN114363462B (zh) * 2020-09-30 2023-01-06 华为技术有限公司 一种界面显示方法、电子设备及计算机可读介质
CN114327666B (zh) * 2020-09-30 2024-04-09 华为技术有限公司 应用启动方法、装置和电子设备
CN112433693B (zh) * 2020-12-11 2023-06-23 维沃移动通信(杭州)有限公司 分屏显示方法、装置及电子设备
KR20230023386A (ko) 2021-08-10 2023-02-17 삼성전자주식회사 디스플레이 모듈 출력 방법 및 상기 방법을 수행하는 전자 장치
CN116820314A (zh) * 2021-09-22 2023-09-29 荣耀终端有限公司 一种悬浮窗显示方法及电子设备
CN114546549A (zh) * 2022-01-24 2022-05-27 中国第一汽车股份有限公司 应用程序的控制方法、手势处理装置、智能终端及车辆
CN114510166B (zh) * 2022-04-01 2022-08-26 深圳传音控股股份有限公司 操作方法、智能终端及存储介质
WO2023245311A1 (zh) * 2022-06-20 2023-12-28 北京小米移动软件有限公司 一种窗口调整方法、装置、终端及存储介质
CN115423578B (zh) * 2022-09-01 2023-12-05 广东博成网络科技有限公司 基于微服务容器化云平台的招投标方法和系统
CN116048317B (zh) * 2023-01-28 2023-08-22 荣耀终端有限公司 一种显示方法及装置
CN117931043A (zh) * 2023-12-20 2024-04-26 荣耀终端有限公司 坐标转换方法、设备、芯片及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140157163A1 (en) * 2012-11-30 2014-06-05 Hewlett-Packard Development Company, L.P. Split-screen user interface
CN104898952A (zh) * 2015-06-16 2015-09-09 魅族科技(中国)有限公司 一种终端分屏实现方法及终端
CN106598429A (zh) * 2016-11-29 2017-04-26 北京小米移动软件有限公司 移动终端的窗口调整方法及装置
CN108804004A (zh) * 2018-05-03 2018-11-13 珠海格力电器股份有限公司 一种分屏控制方法、装置、存储介质及终端
CN110865744A (zh) * 2019-09-30 2020-03-06 华为技术有限公司 一种分屏显示方法与电子设备

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102069014B1 (ko) * 2012-09-25 2020-02-12 삼성전자 주식회사 휴대단말기의 분리화면 제어장치 및 방법
KR102099646B1 (ko) * 2012-09-25 2020-04-13 삼성전자 주식회사 휴대단말의 분할화면 전환 장치 및 방법
CN103268190A (zh) * 2013-06-05 2013-08-28 四目信息科技(上海)有限公司 触屏电子设备基于ios操作***实现视图拖拽操作的方法
JP6368462B2 (ja) * 2013-08-01 2018-08-01 シャープ株式会社 情報処理装置、情報処理方法及びそのプログラム
CN103412711A (zh) * 2013-08-27 2013-11-27 宇龙计算机通信科技(深圳)有限公司 文档对比参照方法和装置
WO2015192375A1 (zh) * 2014-06-20 2015-12-23 华为技术有限公司 应用界面的展示方法、装置及电子设备
KR102383103B1 (ko) * 2014-08-13 2022-04-06 삼성전자 주식회사 전자 장치 및 이의 화면 표시 방법
CN104331246A (zh) * 2014-11-19 2015-02-04 广州三星通信技术研究有限公司 在终端中进行分屏显示的设备和方法
JP6553719B2 (ja) * 2016-10-31 2019-07-31 ベイジン シャオミ モバイル ソフトウェア カンパニーリミテッド 画面分割表示方法および装置
CN106970769A (zh) * 2017-03-24 2017-07-21 北京小米移动软件有限公司 分屏显示方法及装置
US20180286277A1 (en) * 2017-03-28 2018-10-04 Insy Shah System and methodology for regulating instructional multimedia applications
CN107515691A (zh) * 2017-07-31 2017-12-26 努比亚技术有限公司 一种触控显示方法及移动终端、存储介质
CN107908382B (zh) * 2017-11-10 2020-03-03 维沃移动通信有限公司 一种分屏显示方法及移动终端
CN107908351B (zh) * 2017-11-30 2021-07-13 北京小米移动软件有限公司 应用界面的显示方法、装置及存储介质
CN108632462A (zh) * 2018-04-19 2018-10-09 Oppo广东移动通信有限公司 分屏显示的处理方法、装置、存储介质及电子设备
CN110244893B (zh) * 2019-05-05 2022-02-25 华为技术有限公司 一种分屏显示的操作方法及电子设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140157163A1 (en) * 2012-11-30 2014-06-05 Hewlett-Packard Development Company, L.P. Split-screen user interface
CN104898952A (zh) * 2015-06-16 2015-09-09 魅族科技(中国)有限公司 一种终端分屏实现方法及终端
CN106598429A (zh) * 2016-11-29 2017-04-26 北京小米移动软件有限公司 移动终端的窗口调整方法及装置
CN108804004A (zh) * 2018-05-03 2018-11-13 珠海格力电器股份有限公司 一种分屏控制方法、装置、存储介质及终端
CN110865744A (zh) * 2019-09-30 2020-03-06 华为技术有限公司 一种分屏显示方法与电子设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4024183A4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD969828S1 (en) * 2021-01-12 2022-11-15 Beijing Zitiao Network Technology Co., Ltd. Display screen or portion thereof with a graphical user interface
CN114690977A (zh) * 2021-04-22 2022-07-01 广州创知科技有限公司 一种基于弹性波的交互唤起方法及装置
CN114690977B (zh) * 2021-04-22 2023-11-21 广州创知科技有限公司 一种基于弹性波的交互唤起方法及装置

Also Published As

Publication number Publication date
EP4024183B1 (en) 2024-04-24
EP4024183A4 (en) 2022-11-23
CN110865744A (zh) 2020-03-06
JP2022549945A (ja) 2022-11-29
KR20220058953A (ko) 2022-05-10
CN113811844A (zh) 2021-12-17
EP4024183A1 (en) 2022-07-06
US20220357845A1 (en) 2022-11-10
CN110865744B (zh) 2021-12-14

Similar Documents

Publication Publication Date Title
WO2021063074A1 (zh) 一种分屏显示方法与电子设备
WO2021043223A1 (zh) 一种分屏显示方法及电子设备
WO2021013158A1 (zh) 显示方法及相关装置
WO2021129326A1 (zh) 一种屏幕显示方法及电子设备
WO2021037084A1 (zh) 一种分屏显示方法与电子设备
WO2021036628A1 (zh) 一种具有折叠屏的设备的触控方法与折叠屏设备
WO2021057868A1 (zh) 一种界面切换方法及电子设备
WO2021063090A1 (zh) 一种建立应用组合的方法与电子设备
WO2020062294A1 (zh) ***导航栏的显示控制方法、图形用户界面及电子设备
WO2021057343A1 (zh) 一种对电子设备的操作方法及电子设备
WO2022068483A1 (zh) 应用启动方法、装置和电子设备
WO2021063098A1 (zh) 一种触摸屏的响应方法及电子设备
US20220291794A1 (en) Display Method and Electronic Device
EP4099669A1 (en) Method for creating application shortcuts, electronic device, and system
WO2021037223A1 (zh) 一种触控方法与电子设备
WO2021078032A1 (zh) 用户界面的显示方法及电子设备
WO2022057512A1 (zh) 分屏方法、装置及电子设备
WO2022089060A1 (zh) 一种界面显示方法及电子设备
EP3958106A1 (en) Interface display method and electronic device
WO2022017393A1 (zh) 显示交互***、显示方法及设备
WO2021057699A1 (zh) 具有柔性屏幕的电子设备的控制方法及电子设备
CN114115618A (zh) 一种应用窗口显示方法与电子设备
US20230236714A1 (en) Cross-Device Desktop Management Method, First Electronic Device, and Second Electronic Device
WO2023029983A1 (zh) 一种控件内容的拖拽方法、电子设备及***
WO2023226922A1 (zh) 卡片管理方法、电子设备及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20871830

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022519756

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112022005991

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 20227011757

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020871830

Country of ref document: EP

Effective date: 20220331

ENP Entry into the national phase

Ref document number: 112022005991

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20220329