WO2022121775A1 - A screen projection method and device (一种投屏方法及设备)


Info

Publication number
WO2022121775A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal, interface, screen, data, content
Application number
PCT/CN2021/135158
Other languages
English (en), French (fr)
Inventors
陈鼐 (Chen Nai), 张二艳 (Zhang Eryan)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2022121775A1

Classifications

    • G06F3/1454: Digital output to display device; cooperation and interconnection of the display device with other functional units, involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G06F9/451: Execution arrangements for user interfaces

Definitions

  • the present application relates to the field of electronic devices, and in particular, to a screen projection method and device.
  • the display interface of one device can be projected onto the display screen of another device for the user to view.
  • the display interface of one device can be presented on another device mainly through one-to-one mirror screen projection technology; that is, only one-to-one screen projection can be realized.
  • the embodiments of the present application provide a screen projection method and device, which realizes the presentation of display interfaces of multiple devices on the same device, that is, realizes many-to-one screen projection.
  • the screen projection source end creates multiple media streams and distributes them to one or more screen projection destinations according to the policy, so that the content of multiple applications in one device can be projected and displayed on other devices.
  • an embodiment of the present application provides a screen projection method.
  • the method can be applied to a first terminal.
  • the first terminal is connected to a plurality of second terminals.
  • the method may include: the first terminal receives data from each of the plurality of second terminals; the first terminal displays multiple first interfaces on the first terminal according to the data received from the multiple second terminals, and the multiple first interfaces correspond to the multiple second terminals one-to-one; wherein the content of a first interface is a mirror image of the content of the second interface displayed by the corresponding second terminal, or the content of the first interface is the same as part of the content of the second interface displayed by the corresponding second terminal.
  • the first terminal serving as the screen projection destination can display multiple first interfaces on the display screen of the first terminal according to data sent by multiple second terminals serving as the screen projection source.
  • the interfaces are in one-to-one correspondence with multiple second terminals.
  • the content of the first interface is a mirror image of the content of the second interface displayed by the corresponding second terminal, or the content of the first interface is the same as part of the content of the second interface displayed by the corresponding second terminal. This realizes many-to-one projection from multiple projection sources to one projection destination. In this way, in scenarios such as meetings and conference presentations, multiple mobile phones and tablet computers can project the content on their display screens (such as PPT slides or playing videos) to the same large-screen device for presentation, realizing many-to-one screen projection. The efficiency of collaborative use of multiple devices is improved, and the user experience is improved.
  • the method may further include: the first terminal may create multiple drawing components, and the multiple drawing components are in one-to-one correspondence with the multiple second terminals.
  • a drawing component can be a view or a canvas.
  • the first terminal displaying the plurality of first interfaces according to the data received from the plurality of second terminals may include: the first terminal, according to the data received from each second terminal, draws the first interface corresponding to that second terminal on the corresponding drawing component, so as to display the plurality of first interfaces on the first terminal.
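  • The one-to-one mapping between sources and drawing components described above can be sketched as follows. This is a hypothetical Python model, not the actual implementation (the patent only says a drawing component is a view or canvas); all class and variable names are illustrative.

```python
# Hypothetical model: the projection destination keeps one drawing
# component (standing in for a view or canvas) per connected source,
# and renders each source's decoded frames into its own component.

class DrawingComponent:
    """Stands in for a platform view or canvas."""
    def __init__(self, source_id):
        self.source_id = source_id
        self.last_frame = None

    def draw(self, frame):
        self.last_frame = frame


class ProjectionDestination:
    def __init__(self):
        self.components = {}  # source_id -> DrawingComponent

    def on_source_connected(self, source_id):
        # Create one drawing component per source (one-to-one).
        self.components[source_id] = DrawingComponent(source_id)

    def on_frame(self, source_id, frame):
        # Route each source's data to its own component.
        self.components[source_id].draw(frame)


dest = ProjectionDestination()
for src in ("phone-A", "tablet-B"):
    dest.on_source_connected(src)
dest.on_frame("phone-A", "frame-1")
dest.on_frame("tablet-B", "frame-9")
print(dest.components["phone-A"].last_frame)  # frame-1
print(len(dest.components))                   # 2
```

Because each source owns its own component, frames from different sources can never overwrite each other's interface.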
  • the method may further include: the first terminal configures multiple decoding parameters, and the multiple decoding parameters are in one-to-one correspondence with the multiple second terminals; the first terminal decodes the data received from the corresponding second terminal according to each of the multiple decoding parameters.
  • configuring corresponding decoding parameters for different second terminals, which are used to decode the corresponding data, realizes multi-channel decoding.
  • the method may further include: the first terminal acquires connection information of the plurality of second terminals, where the connection information is used to establish a connection between the first terminal and the corresponding second terminal; wherein the one-to-one correspondence between the multiple drawing components and the multiple second terminals includes a one-to-one correspondence between the multiple drawing components and the connection information of the multiple second terminals; and the one-to-one correspondence between the multiple decoding parameters and the multiple second terminals includes a one-to-one correspondence between the multiple decoding parameters and the connection information of the multiple second terminals.
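  • The per-connection decoding described above can be sketched as a registry keyed by connection info. This is a hypothetical Python model under the assumption that connection info is an address string; the codec names and the tagging in `decode` are illustrative only.

```python
# Hypothetical model: decoding parameters are configured per source and
# keyed by that source's connection info, so incoming data is decoded
# on the correct channel (multi-channel decoding).

class MultiChannelDecoder:
    def __init__(self):
        self.params = {}  # connection info -> decoding parameters

    def configure(self, conn_info, codec, resolution):
        self.params[conn_info] = {"codec": codec,
                                  "resolution": resolution}

    def decode(self, conn_info, payload):
        p = self.params[conn_info]
        # A real implementation would feed 'payload' to a codec
        # instance; here we just tag the data with its channel's codec.
        return f"{p['codec']}:{payload}"


dec = MultiChannelDecoder()
dec.configure("192.168.1.10:5000", "h264", (1920, 1080))
dec.configure("192.168.1.11:5000", "h265", (1280, 720))
print(dec.decode("192.168.1.10:5000", "nal-unit"))  # h264:nal-unit
```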
  • the method may further include: the first terminal receives a user's first operation on the window of a first interface; in response to the first operation, the first terminal reduces, enlarges or closes the window, or switches the focus window.
  • the user can control the first interface by using the input device of the screen projection destination, for example, by setting the focus and switching the focus between the screen projection interfaces of different source devices according to user operations, or realizing independent control of different screen projection sources (such as zoom out, zoom in or close the screen projection interface).
  • the screen projection destination can also adjust the layout of the presented screen projection interface according to the increase or decrease of the source device, so as to present the best visual effect to the user.
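  • The window control and layout adjustment described in the two bullets above can be sketched as follows. This is a hypothetical Python model; the square-root grid heuristic is an illustrative layout policy, not one specified by the patent.

```python
# Hypothetical model: the destination tracks one window per source,
# supports close and focus switching, and recomputes a simple grid
# layout whenever a source is added or removed.

import math


class WindowManager:
    def __init__(self):
        self.windows = []  # ordered source ids
        self.focus = None

    def add(self, source_id):
        self.windows.append(source_id)
        self.focus = source_id

    def close(self, source_id):
        self.windows.remove(source_id)
        if self.focus == source_id:
            self.focus = self.windows[-1] if self.windows else None

    def switch_focus(self, source_id):
        if source_id in self.windows:
            self.focus = source_id

    def layout(self):
        # Recompute columns so the projection interfaces stay balanced
        # as sources join or leave.
        n = len(self.windows)
        return math.ceil(math.sqrt(n)) if n else 0


wm = WindowManager()
wm.add("phone-A"); wm.add("tablet-B"); wm.add("phone-C")
print(wm.layout())       # 2 (3 windows -> 2 columns)
wm.close("tablet-B")
wm.switch_focus("phone-A")
print(wm.focus)          # phone-A
```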
  • the method may further include: the first terminal receives a user's second operation on the first interface corresponding to a second terminal; the first terminal sends data of the second operation to that second terminal for the second terminal to display a third interface according to the second operation.
  • after receiving, via the input device of the screen projection destination, the user's operation on a first interface (i.e. a projection interface), the first terminal sends the data corresponding to the operation to the screen projection source corresponding to that first interface, so that the source terminal can make a corresponding response. In this way, the user can use the input device of the screen projection destination to realize reverse control of the screen projection source.
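  • The reverse control path above can be sketched as event routing from a projection window back to its owning source. This is a hypothetical Python model; the event dictionary shape and `send_to_source` channel are illustrative assumptions.

```python
# Hypothetical model of reverse control: an input event received on a
# projection window at the destination is forwarded to the source that
# owns that window, which then responds to it.

sent = []  # stands in for the network channel to the sources


def send_to_source(conn_info, event):
    sent.append((conn_info, event))


class ReverseController:
    def __init__(self):
        self.window_to_source = {}  # window id -> source connection info

    def bind(self, window_id, conn_info):
        self.window_to_source[window_id] = conn_info

    def on_input(self, window_id, event):
        # Coordinate translation from window-local to source-screen
        # coordinates would happen here; we forward the raw event.
        send_to_source(self.window_to_source[window_id], event)


rc = ReverseController()
rc.bind("win-1", "192.168.1.10:5000")
rc.on_input("win-1", {"type": "tap", "x": 120, "y": 48})
print(sent[0][0])  # 192.168.1.10:5000
```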
  • the method may further include: the first terminal receives updated data from a second terminal; according to the updated data, the first terminal updates the first interface corresponding to that second terminal to a fourth interface, where the content of the fourth interface is a mirror image of the content of the third interface, or the content of the fourth interface is the same as part of the content of the third interface.
  • the data of the updated interface can be sent to the first terminal, so that the first terminal can update the corresponding interface displayed by the first terminal.
  • the first terminal further establishes a connection with a third terminal; the method may further include: the first terminal sends the data received from the multiple second terminals to the third terminal, for the third terminal to display the multiple first interfaces.
  • the third terminal may be a terminal that is in a call with the first terminal.
  • the third terminal can also display the interfaces of the screen projection sources, realizing cross-regional office work. This cross-regional office method can improve meeting efficiency and save communication costs.
  • the method may further include: the first terminal receives video data from the third terminal; while displaying the multiple first interfaces, the first terminal displays a video call screen on the first terminal according to the video data of the third terminal.
  • the method may further include: the first terminal collects video data and sends it to the third terminal, so that the third terminal displays a video call screen while displaying the multiple first interfaces on the third terminal.
  • the terminals in the two regions can not only display the video call screen, but also display the content projected by the local end and the peer end, which further improves meeting efficiency and saves the communication costs of cross-regional office work.
  • an embodiment of the present application provides a screen projection method. The method can be applied to a second terminal, and the second terminal is connected to the first terminal. The method may include: the second terminal displays a second interface; the second terminal receives a user operation; in response to the user operation, the second terminal sends data of the second interface to the first terminal for the first terminal to display the first interface corresponding to the second terminal, the first terminal also displaying the first interfaces corresponding to other second terminals; wherein the content of the first interface is a mirror image of the content of the second interface displayed by the corresponding second terminal, or the content of the first interface is the same as part of the content of the second interface displayed by the corresponding second terminal.
  • multiple second terminals serving as screen projection sources can send the data of their current interfaces to the first terminal serving as the screen projection destination upon user triggers, so that the first terminal can, according to the data sent by the multiple second terminals, display multiple first interfaces on the display screen of the first terminal, the multiple first interfaces being in one-to-one correspondence with the multiple second terminals.
  • the content of the first interface is a mirror image of the content of the second interface displayed by the corresponding second terminal, or the content of the first interface is the same as part of the content of the second interface displayed by the corresponding second terminal. It realizes many-to-one projection from multiple projection sources to one projection destination.
  • multiple mobile phones and tablet computers can project the content on their display screens (such as PPT slides or playing videos) to the same large-screen device for presentation, realizing many-to-one screen projection.
  • the efficiency of collaborative use of multiple devices is improved, and the user experience is improved.
  • the above-mentioned user operation may be an operation of starting screen projection; before the second terminal sends the data of the second interface to the first terminal, the method may further include: the second terminal obtains the data of the second interface; wherein, when the content of the first interface is a mirror image of the content of the second interface, the data of the second interface is the screen recording data of the second interface; when the content of the first interface is the same as part of the content of the second interface, the data of the second interface is the screen recording data of the layer where a predetermined element in the second interface is located.
  • multiple second terminals can project their currently displayed interface or part of the content in the interface to the first terminal for display, so as to realize many-to-one screen projection.
  • the method may further include: the second terminal displays a configuration interface, where the configuration interface includes a layer filter setting option; the second terminal receives a user's selection operation on the layer filter setting option.
  • the second terminal, as the screen projection source, can project only the layer where certain elements in the current interface are located (such as an element dragged by the user, or a predetermined element) to the screen projection destination, implementing layer filtering. In this way, it can be ensured that private information on the screen projection source is not projected to the screen projection destination, protecting the user's privacy.
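  • The layer filtering above can be sketched as selecting which layers are recorded before data leaves the source. This is a hypothetical Python model; layer ids and contents are illustrative, and a real implementation would filter at the compositor/screen-recording level.

```python
# Hypothetical model of layer filtering: when the filter option is
# enabled, only the layer(s) chosen for projection are recorded; other
# layers (e.g. ones holding private information) are dropped at the
# source, so they never reach the projection destination.

def record_layers(layers, filter_enabled, projected_layer_ids):
    """Each layer is (layer_id, content). Returns recorded content."""
    if not filter_enabled:
        return [content for _, content in layers]
    return [content for lid, content in layers
            if lid in projected_layer_ids]


layers = [("video", "movie frame"),
          ("status-bar", "battery, notifications"),
          ("chat-overlay", "private message")]

# With filtering on, only the layer the user chose is projected.
print(record_layers(layers, True, {"video"}))     # ['movie frame']
print(len(record_layers(layers, False, set())))   # 3
```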
  • the second terminal receiving the user operation may include: the second terminal receives a user's drag operation on the second interface or on an element in the second interface; before the second terminal sends the data of the second interface to the first terminal, the method may further include: the second terminal determines that the user's drag intention is to drag across devices; and the second terminal obtains the data of the second interface.
  • the user can trigger screen projection by dragging the interface of the second terminal or an element in the interface.
  • in the case of receiving a user's drag operation on an element in the second interface, the element may be a video component, a floating window, a picture-in-picture or a free-form window, and the data of the second interface is the screen recording data of the layer where the element is located; or, the element is a user interface (UI) control in the second interface, and the data of the second interface is the instruction stream of the second interface and the identifier of the UI control, or the drawing instructions and identifier of the UI control.
  • the instruction stream corresponding to the content to be projected can be sent to the projection destination to realize the projection. In this way, the display effect of the projection interface at the projection destination can be improved, and transmission bandwidth can be saved.
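  • The drag-triggered flow above can be sketched as two decisions: whether the drag crosses devices, and which payload to build for the dragged element. This is a hypothetical Python model; the screen-edge heuristic and payload field names are illustrative assumptions, not the patent's specified mechanism.

```python
# Hypothetical model: a drag that reaches the screen edge is treated as
# a cross-device drag. The payload depends on the dragged element: a
# video component / floating window / picture-in-picture / free-form
# window sends layer recording data, while a UI control sends an
# instruction stream plus the control's identifier (smaller than raw
# pixels, saving transmission bandwidth).

SCREEN_W = 1080  # illustrative screen width in pixels


def drag_is_cross_device(end_x):
    # Intent heuristic: the drag end point crosses the screen edge.
    return end_x >= SCREEN_W


def build_payload(element_kind, element_id):
    if element_kind in ("video", "floating-window",
                        "picture-in-picture", "freeform-window"):
        return {"type": "layer-recording", "element": element_id}
    # UI control: send drawing instructions + identifier, not pixels.
    return {"type": "instruction-stream", "control-id": element_id}


print(drag_is_cross_device(1080))                       # True
print(build_payload("video", "player-1")["type"])       # layer-recording
print(build_payload("ui-control", "btn-send")["type"])  # instruction-stream
```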
  • an embodiment of the present application provides a screen projection device. The device can be applied to a first terminal, and the first terminal is connected to a plurality of second terminals. The device may include: a receiving unit, configured to receive data from each of the plurality of second terminals; and a display unit, configured to display a plurality of first interfaces on the first terminal according to the data received from the plurality of second terminals, the plurality of first interfaces being in one-to-one correspondence with the plurality of second terminals.
  • the content of the first interface is a mirror image of the content of the second interface displayed by the corresponding second terminal, or the content of the first interface is the same as part of the content of the second interface displayed by the corresponding second terminal.
  • the apparatus may further include: a creating unit, configured to create multiple drawing components, the multiple drawing components being in one-to-one correspondence with the multiple second terminals, where a drawing component is a view or a canvas; displaying the plurality of first interfaces on the first terminal according to the data received from the plurality of second terminals may include: according to the data received from the plurality of second terminals, respectively drawing the first interface corresponding to each second terminal on the corresponding drawing component, so as to display the plurality of first interfaces on the first terminal.
  • the apparatus may further include: a configuration unit, configured to configure a plurality of decoding parameters, the plurality of decoding parameters being in one-to-one correspondence with the plurality of second terminals; and a decoding unit, configured to decode the data received from the corresponding second terminal according to each of the plurality of decoding parameters.
  • the apparatus may further include: an acquiring unit, configured to acquire connection information of the multiple second terminals, where the connection information is used for establishing a connection between the first terminal and the corresponding second terminal; wherein the one-to-one correspondence between the multiple drawing components and the multiple second terminals includes a one-to-one correspondence between the multiple drawing components and the connection information of the multiple second terminals, and the one-to-one correspondence between the multiple decoding parameters and the multiple second terminals includes a one-to-one correspondence between the multiple decoding parameters and the connection information of the multiple second terminals.
  • the apparatus may further include: an input unit, configured to receive a user's first operation on the window of a first interface; the display unit is further configured to, in response to the first operation, reduce, enlarge or close the window, or switch the focus window.
  • the input unit is further configured to receive a user's second operation on the first interface corresponding to a second terminal; the apparatus may further include: a sending unit, configured to send data of the second operation to that second terminal for the second terminal to display a third interface according to the second operation.
  • the receiving unit is further configured to receive updated data from the second terminal; the display unit is further configured to update the first interface corresponding to the second terminal to a fourth interface according to the updated data, where the content of the fourth interface is a mirror image of the content of the third interface, or the content of the fourth interface is the same as part of the content of the third interface.
  • the first terminal further establishes a connection with the third terminal; the sending unit is further configured to send the data received from the multiple second terminals to the third terminal, so that the third terminal can display the multiple first interfaces.
  • the receiving unit is further configured to receive video data from the third terminal; the display unit is further configured to, while the first terminal displays the multiple first interfaces, display a video call screen on the first terminal according to the video data of the third terminal.
  • the apparatus may further include: a collection unit, configured to collect video data; the sending unit is further configured to send the video data to the third terminal, for the third terminal to display a video call screen while displaying the multiple first interfaces on the third terminal.
  • an embodiment of the present application provides a screen projection device. The device can be applied to a second terminal, and the second terminal is connected to the first terminal. The device may include: a display unit, configured to display a second interface; an input unit, configured to receive a user operation; and a sending unit, configured to send data of the second interface to the first terminal in response to the user operation, so that the first terminal can display the first interface corresponding to the second terminal, the first terminal also displaying the first interfaces corresponding to other second terminals.
  • the user operation is an operation of starting screen projection; the device may further include: an acquiring unit, configured to acquire the data of the second interface; wherein, when the content of the first interface is a mirror image of the content of the second interface, the data of the second interface is the screen recording data of the second interface; when the content of the first interface is the same as part of the content of the second interface, the data of the second interface is the screen recording data of the layer where a predetermined element in the second interface is located.
  • the display unit is further configured to display a configuration interface, where the configuration interface includes a layer filter setting option; the input unit is further configured to receive a user's selection operation on the layer filter setting option.
  • the input unit receiving the user operation may include: the input unit receives a user's drag operation on the second interface or on an element in the second interface; the apparatus may further include: a determining unit, configured to determine that the user's drag intention is to drag across devices; the acquiring unit is further configured to acquire the data of the second interface.
  • in the case of receiving a user's drag operation on an element in the second interface, the element may be a video component, a floating window, a picture-in-picture or a free-form window, and the data of the second interface is the screen recording data of the layer where the element is located; or, the element may be a user interface (UI) control in the second interface, and the data of the second interface is the instruction stream of the second interface and the identifier of the UI control, or the drawing instructions and identifier of the UI control.
  • an embodiment of the present application provides a screen projection method, which is applied to a first terminal.
  • the method may include: the first terminal displays an interface of a first application; the first terminal receives a first operation; in response to the first operation, the first terminal sends data of the interface of the first application to the second terminal for the second terminal to display a first interface, where the content of the first interface is a mirror image of the interface content of the first application, or the content of the first interface is the same as part of the interface content of the first application. The first terminal receives a second operation; in response to the second operation, the first terminal displays an interface of a second application; the first terminal receives a third operation; in the case where the first terminal is projecting the interface of the first application to the second terminal, in response to the third operation, the first terminal sends data of the interface of the second application to a third terminal for the third terminal to display a second interface, where the content of the second interface is a mirror image of the interface content of the second application, or the content of the second interface is the same as part of the interface content of the second application.
  • the first terminal serving as the screen projection source can project the contents of multiple applications of the first terminal to one or more screen projection destinations by creating multiple media streams, which satisfies the requirement of multi-task parallelism, improves the use efficiency of the terminal, and improves the user experience.
  • the method may further include: the first terminal creates a first virtual display; the first terminal draws the interface of the first application, or a first element in the interface of the first application, to the first virtual display to obtain the data of the interface of the first application; the first terminal creates a second virtual display; the first terminal draws the interface of the second application, or a second element in the interface of the second application, to the second virtual display to obtain the data of the interface of the second application.
  • by creating virtual displays, the first terminal performs screen recording on the content of the projection source, so as to realize display of the content of the projection source on the projection destination, supporting both mirror projection and hetero-source projection.
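  • The per-application virtual displays above can be sketched as follows. This is a hypothetical Python data-flow model only; on Android this would typically involve something like `DisplayManager.createVirtualDisplay`, but the class names and frame contents here are illustrative assumptions.

```python
# Hypothetical model: the source creates one virtual display per
# projected application, draws that app's interface into it, and keeps
# one independent media stream per virtual display, so different apps
# can be projected to different destinations.

class VirtualDisplay:
    def __init__(self, name):
        self.name = name
        self.surface = []  # frames drawn into this display


class ProjectionSource:
    def __init__(self):
        self.displays = {}  # app -> VirtualDisplay

    def start_projection(self, app):
        self.displays[app] = VirtualDisplay(f"vd-{app}")

    def draw(self, app, frame):
        # Each app's interface is drawn only to its own virtual
        # display, keeping the media streams independent.
        self.displays[app].surface.append(frame)


src = ProjectionSource()
src.start_projection("slides")
src.start_projection("video")
src.draw("slides", "page-3")
src.draw("video", "frame-120")
print(src.displays["slides"].surface)  # ['page-3']
print(len(src.displays))               # 2
```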
  • the method may further include: the first terminal sends audio data of the first application to the second terminal for the second terminal to output corresponding audio; the first terminal sends the second terminal to the third terminal The audio data of the application is used for the third terminal to output corresponding audio. In this way, the projection display of the audio data from the projection source end to the projection target end is supported.
  • the method may further include: the first terminal creates a first audio recording (AudioRecord) object, and records and obtains the audio data of the first application based on the first AudioRecord object; the first terminal creates a second AudioRecord object, and records and obtains the audio data of the second application based on the second AudioRecord object.
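  • The per-application audio capture above can be sketched as one capture object per projected app, each routed with its app's media stream. This is a hypothetical Python model; a real Android implementation would use AudioRecord objects, and the destination names and `read` return values here are illustrative.

```python
# Hypothetical model: one audio capture object per projected app, so
# each app's audio travels with its own media stream to its own
# projection destination.

class AppAudioCapture:
    """Stands in for a per-app audio recording object."""
    def __init__(self, app):
        self.app = app

    def read(self):
        # A real recording object would return PCM sample buffers;
        # we return a tagged placeholder.
        return f"pcm({self.app})"


routes = {}  # app -> (capture object, destination)
routes["slides"] = (AppAudioCapture("slides"), "big-screen")
routes["video"] = (AppAudioCapture("video"), "tablet")

for app, (cap, dest) in routes.items():
    print(dest, cap.read())
# big-screen pcm(slides)
# tablet pcm(video)
```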
  • the second terminal is the same as the third terminal.
  • an embodiment of the present application provides a screen projection method, which is applied to a second terminal.
  • the method may include: the second terminal receives data from an interface of a first application of the first terminal; and the second terminal displays the first interface , the content of the first interface is a mirror image of the interface content of the first application, or the content of the first interface is the same as part of the content of the interface of the first application; the second terminal receives data from the interface of the second application of the first terminal; The second terminal displays a third interface.
  • the third interface includes the content of the first interface and the content of the second interface.
  • the content of the second interface is a mirror image of the interface content of the second application, or the content of the second interface is the same as part of the interface content of the second application.
  • the second terminal serving as the screen projection destination can receive multiple media streams from the first terminal serving as the screen projection source, so as to realize projection of the contents of multiple applications of the first terminal to the second terminal. This satisfies the requirement of multi-task parallelism, improves the use efficiency of the terminal, and improves the user experience.
  • an embodiment of the present application provides a screen projection device, which is applied to a first terminal.
  • the device may include: a display unit, configured to display an interface of a first application; an input unit, configured to receive a first operation; and a sending unit, configured to send data of the interface of the first application to the second terminal in response to the first operation, for the second terminal to display a first interface, where the content of the first interface is a mirror image of the interface content of the first application, or the content of the first interface is the same as part of the content of the interface of the first application;
  • the input unit is also used to receive the second operation;
  • the display unit is further configured to display an interface of a second application in response to the second operation; the input unit is further configured to receive a third operation;
  • the sending unit is further configured to, in the case where the first terminal projects the interface of the first application to the second terminal, in response to the third operation, send data of the interface of the second application to a third terminal, for the third terminal to display a second interface, where the content of the second interface is a mirror image of the interface content of the second application, or the content of the second interface is the same as part of the interface content of the second application.
  • the apparatus may further include: a creating unit, configured to create a first virtual display; and a drawing unit, configured to draw the interface of the first application, or a first element in the interface of the first application, to the first virtual display to obtain the data of the interface of the first application; the creating unit is further configured to create a second virtual display; the drawing unit is further configured to draw the interface of the second application, or a second element in the interface of the second application, to the second virtual display to obtain the data of the interface of the second application.
  • the sending unit is further configured to send the audio data of the first application to the second terminal, so that the second terminal outputs the corresponding audio, and to send the audio data of the second application to the third terminal, so that the third terminal outputs the corresponding audio.
  • the creating unit is further configured to create a first AudioRecord object; the apparatus may further include: a recording unit, configured to record and obtain the audio data of the first application based on the first AudioRecord object; the creating unit is further configured to create a second AudioRecord object; the recording unit is further configured to record and obtain the audio data of the second application based on the second AudioRecord object.
  • the second terminal is the same as the third terminal.
  • an embodiment of the present application provides a screen projection device, which is applied to a second terminal.
• the device may include: a receiving unit, configured to receive data of the interface of the first application from the first terminal; a display unit, configured to display a first interface, where the content of the first interface is a mirror image of the interface content of the first application, or the content of the first interface is the same as part of the content of the interface of the first application; the receiving unit is further configured to receive, from the first terminal, data of the interface of the second application;
• the display unit is further configured to display a third interface, where the third interface includes the content of the first interface and the content of the second interface, and the content of the second interface is a mirror image of the interface content of the second application, or the content of the second interface is the same as part of the content of the interface of the second application.
• an embodiment of the present application provides a screen projection device, which may include: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions so that the screen projection device implements the method described in any one of the first aspect or its possible implementation manners, the method described in any one of the second aspect or its possible implementation manners, the method described in any one of the fifth aspect or its possible implementation manners, or the method described in the sixth aspect.
• embodiments of the present application provide a computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are executed by an electronic device, the electronic device implements the method described in any one of the first aspect or its possible implementation manners, the method described in any one of the second aspect or its possible implementation manners, the method described in any one of the fifth aspect or its possible implementation manners, or the method described in the sixth aspect.
• an embodiment of the present application provides a screen projection system;
• the system may include a first terminal and a plurality of second terminals; each of the plurality of second terminals is used to display a second interface and, after receiving a user operation, to send the data of the second interface to the first terminal; the first terminal is used to receive the data from each of the plurality of second terminals and, according to the received data, to display a plurality of first interfaces on the first terminal, the plurality of first interfaces being in one-to-one correspondence with the plurality of second terminals; wherein the content of each first interface is a mirror image of the content of the second interface displayed by the corresponding second terminal, or the content of the first interface is the same as part of the content of the second interface of the corresponding second terminal.
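The many-to-one arrangement above — one window on the first terminal per second terminal — leaves the window layout unspecified; one plausible policy is to tile the destination screen in a grid. The function name and grid policy below are assumptions for illustration, not stated in the patent:

```python
import math

def layout_windows(source_ids, screen_w, screen_h):
    """Split the destination screen into one window per screen-casting
    source, preserving a one-to-one source-to-window correspondence."""
    n = len(source_ids)
    cols = math.ceil(math.sqrt(n))       # near-square grid
    rows = math.ceil(n / cols)
    w, h = screen_w // cols, screen_h // rows
    return {
        src: (i % cols * w, i // cols * h, w, h)  # (x, y, width, height)
        for i, src in enumerate(source_ids)
    }

# Two phones cast to one 1920x1080 TV: side-by-side windows.
rects = layout_windows(["phone A", "phone B"], 1920, 1080)
```

Each source keeps its own rectangle, so frames arriving from a given second terminal are composited only into that terminal's window.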
• an embodiment of the present application provides an electronic device (such as the above-mentioned first terminal or second terminal); the electronic device includes a display screen, one or more processors, and a memory, which are coupled to each other;
• the memory is used to store computer program code including computer instructions; when the computer instructions are executed by the electronic device, the electronic device performs the method described in any one of the first aspect or its possible implementation manners, the method described in any one of the second aspect or its possible implementation manners, the method described in any one of the fifth aspect or its possible implementation manners, or the method described in the sixth aspect.
• embodiments of the present application provide a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in an electronic device, the processor in the electronic device executes the method described in any one of the first aspect or its possible implementation manners, the method described in any one of the second aspect or its possible implementation manners, the method described in any one of the fifth aspect or its possible implementation manners, or the method described in the sixth aspect.
• for the beneficial effects achievable by the electronic devices, storage medium, and computer program product described above, reference may be made to the beneficial effects of the first, second, fifth, or sixth aspect and any of their possible implementation manners, which are not repeated here.
  • FIG. 1A is a schematic diagram of a scenario provided by an embodiment of the present application.
  • FIG. 1B is a simplified schematic diagram of a system architecture provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a mobile phone according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of the composition of a software architecture provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a screen projection method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a display interface provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of another screen projection method provided by an embodiment of the present application.
  • FIG. 7 is another schematic diagram of a display interface provided by an embodiment of the present application.
  • FIG. 8 is another schematic diagram of a display interface provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of another screen projection method provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another display interface provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another display interface provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of another display interface provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of another display interface provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of another display interface provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of another display interface provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of another display interface provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of another display interface provided by an embodiment of the present application.
  • FIG. 19 is a schematic diagram of another display interface provided by an embodiment of the present application.
  • FIG. 20 is a schematic diagram of another display interface provided by an embodiment of the present application.
  • FIG. 21 is a schematic diagram of another display interface provided by an embodiment of the present application.
  • FIG. 22 is a schematic diagram of another display interface provided by an embodiment of the present application.
  • FIG. 23 is another schematic diagram of a display interface provided by an embodiment of the present application.
  • FIG. 24 is another schematic diagram of a display interface provided by an embodiment of the present application.
  • FIG. 25 is a schematic diagram of another display interface provided by an embodiment of the present application.
  • FIG. 26 is a schematic diagram of another display interface provided by an embodiment of the present application.
  • FIG. 27 is a schematic diagram of the composition of a screen projection device according to an embodiment of the present application.
  • FIG. 28 is a schematic diagram of the composition of another screen projection device provided by an embodiment of the present application.
  • FIG. 29 is a schematic diagram of the composition of another software architecture provided by an embodiment of the present application.
  • FIG. 30 is another schematic diagram of a display interface provided by an embodiment of the present application.
  • FIG. 31 is a schematic diagram of data transmission provided by an embodiment of the present application.
  • FIG. 32 is another schematic diagram of data transmission provided by an embodiment of the present application.
  • FIG. 34 is a schematic diagram of the composition of a chip system according to an embodiment of the present application.
• the terms "first" and "second" are used for descriptive purposes only, and should not be construed as indicating or implying relative importance or implicitly indicating the number of the indicated technical features;
• a feature defined as "first" or "second" may expressly or implicitly include one or more of that feature;
• "plural" means two or more.
  • a user can connect multiple terminals to work together. For example, after two terminals are connected, collaborative office between the two terminals can be realized by using multi-screen collaboration.
  • Multi-screen collaboration can use the mirror projection method to project the interface displayed by one terminal to the display screen of another terminal for display.
• a terminal that projects its display interface may be called the screen-casting source end (source end);
• a terminal that receives the projection from the screen-casting source and displays the display interface of the source is called the screen-casting destination end (sink end).
• the interface projected by the screencast source and displayed at the screencast destination is called the screencasting interface;
• the window used by the screencast destination to display the screencasting interface is called the screencasting window.
  • the mirror screen projection method can only realize the display of the display interface of one terminal to another terminal, that is, only one-to-one screen projection can be realized.
  • display interfaces of multiple terminals may be required to be presented on the same terminal (eg, a large-screen device), that is, there is a many-to-one screen projection requirement.
• wireless screen projection devices, such as AWIND wireless projection gateways, can project the interfaces of multiple terminals onto the display screen of one terminal;
• however, this technique for implementing many-to-one screen projection requires the help of a corresponding wireless screen projection device.
  • the embodiment of the present application provides a screen projection method, which can be applied to a screen projection scenario.
• the display interfaces of multiple terminals can be displayed on the display screen of the same terminal without the help of other devices, which satisfies the many-to-one screen projection requirement in scenarios such as meetings and presentations, improves the efficiency of using multiple terminals collaboratively, and improves the user experience.
• DLNA (Digital Living Network Alliance) defines device roles including the digital media server (DMS) and the digital media player (DMP).
  • DMS provides the ability to acquire, record, store and serve as a source of media streams, such as providing content to various DMPs and sharing content with other devices in the network.
  • DMS can be regarded as a multimedia network disk.
  • DMP can find and play any media file provided by DMS.
• many computers and TVs support DLNA, but users need to enable it manually;
• DLNA has no connection state; devices are considered connected by default once they join the same local area network;
• DLNA only supports the delivery of multimedia files (such as pictures, audio, and video); after delivery, the DMS displays a control interface and does not play synchronously. In addition, DLNA only sends the pictures, audio, or video of the mobile phone to the large screen for display or playback; online video requires third-party application support, and the TV (box) or large screen needs to support DLNA. Since DLNA essentially pushes a uniform resource locator (URL) of a resource, when multiple devices acting as DMSs deliver content to the same target device acting as the DMP, a preemptive approach is used: the target device plays the media file of whichever device sent content last.
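The preemptive behaviour described above — the DMP simply plays whatever URL it received last — can be modelled in a few lines. The class and method names below are illustrative (the method name loosely echoes the SetAVTransportURI action of UPnP's AVTransport service), not an actual DLNA implementation:

```python
class DigitalMediaPlayer:
    """Toy model of the preemptive DLNA behaviour: each DMS pushes a
    media URL, and the target DMP plays the last URL it received."""

    def __init__(self):
        self.now_playing = None

    def set_av_transport_uri(self, url):
        # The newest push preempts whatever was playing before.
        self.now_playing = url

tv = DigitalMediaPlayer()
tv.set_av_transport_uri("http://phone-a.local/movie.mp4")
tv.set_av_transport_uri("http://phone-b.local/song.mp3")
assert tv.now_playing == "http://phone-b.local/song.mp3"  # last sender wins
```

This is precisely why plain DLNA cannot satisfy the many-to-one requirement: the second source does not get its own window, it simply replaces the first.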
  • Miracast is a wireless display standard based on Wi-Fi Direct, established by the Wireless Fidelity (Wi-Fi) Alliance in 2012.
  • Miracast is a mirror projection, that is, the interface of the projection source and the projection destination are exactly the same, which is suitable for remote sharing.
  • Devices that support this standard can share video footage wirelessly.
  • a mobile phone can play a movie or photo directly on a large screen such as a TV through Miracast without being affected by the length of the connecting cable.
  • Miracast requires accessory support.
• not all devices support Miracast; for example, PCs have only supported Miracast since Windows 8.1, and PCs running earlier versions of Windows do not support it.
  • mirror projection needs to send a large number of real-time encoded data streams, which has high requirements on network quality.
• AirPlay is a wireless technology developed by Apple to transmit pictures, audio, or video from an iOS device to an AirPlay-enabled device via Wi-Fi.
  • AirPlay has a mirroring function that DLNA does not have, and can wirelessly transmit images from iOS devices such as mobile phones and tablets to the TV. That is to say, whatever is displayed on the iOS device will be displayed on the TV screen, and it is not limited to pictures and videos.
• AirPlay works only with Apple-certified devices or the devices of authorized partners;
• AirPlay is not open source, and device interaction is also limited.
• the above multi-screen collaboration technologies can only project the content corresponding to one application of a device onto another device, that is, they can only realize one-to-one screen projection, or can only "transfer" one application of a device onto another device, and cannot achieve true multi-task parallelism.
• the terminal can also project and display the content of one or more of its applications on other terminals by creating multiple media streams, so as to meet the requirement of multi-tasking, improve the efficiency of using the terminal, and improve the user experience.
  • FIG. 1B shows a simplified schematic diagram of a system architecture to which embodiments of the present application can be applied.
  • the system architecture may include: a first terminal 101 and at least one second terminal 102 .
• each second terminal 102 can establish a connection with the first terminal 101 in a wired or wireless manner; based on the established connection, the first terminal 101 and the second terminal 102 may be used together cooperatively;
• the wireless communication protocol adopted when the first terminal 101 and a second terminal 102 establish a connection wirelessly may be the Wi-Fi protocol, the Bluetooth protocol, the ZigBee protocol, the near field communication (NFC) protocol, or the like, and may also be any of various cellular network protocols, which is not specifically limited here; the wireless communication protocols used by different second terminals 102 to establish connections with the first terminal 101 may be the same or different.
• the screen-casting source among the first terminal 101 and the multiple second terminals 102 can project the interface displayed on its display screen, or some elements in that interface, onto the display screen of the projection destination;
• take the first terminal 101 as the screen projection destination and the multiple second terminals 102 as the screen projection sources as an example.
  • Each second terminal 102 in the plurality of second terminals 102 may project the interface displayed on its display screen or some elements in the interface to the display screen of the first terminal 101 for display.
  • the first terminal 101 may aggregate the interfaces of multiple second terminals 102 and display them on the display screen of the first terminal 101 for the user to view.
• the user can also use the input device of the first terminal 101 to operate on the screen projection interface, displayed on the display screen of the first terminal 101, that corresponds to each second terminal 102, so as to operate on the actual interface displayed on that second terminal 102.
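For the reverse control described above to work, an input event on the screen projection interface must be translated into coordinates on the source terminal's own display. A plausible mapping (the function name and linear-scaling policy are assumptions for illustration, not stated in the patent) is:

```python
def map_to_source(x, y, window_rect, source_size):
    """Translate a tap inside a projection window on the destination
    screen into coordinates on the source terminal's display."""
    wx, wy, ww, wh = window_rect   # projection window (x, y, w, h)
    sw, sh = source_size           # source display resolution
    # Offset into the window, then scale by the resolution ratio.
    return round((x - wx) * sw / ww), round((y - wy) * sh / wh)

# Tap inside a 960x540 window (at x=100) showing a 1080x2340 phone screen.
pt = map_to_source(580, 270, (100, 0, 960, 540), (1080, 2340))
```

The destination would then send the mapped point back to the source terminal, which injects it as a touch event into the projected application.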
• the screen projection source among the first terminal 101 and the second terminals 102 can create multiple media streams to project the content of one or more of its applications onto the display screen of the projection destination.
  • the first terminal 101 can project the content of one or more applications in the first terminal 101 to display on the display screen of at least one second terminal 102 by creating multiple media streams, so as to meet the requirement of parallel multitasking.
  • the first terminal 101 can project the contents of multiple applications in the first terminal 101 to display on one or more display screens of the second terminal 102 by creating multiple media streams.
  • the first terminal 101 may project the content of an application in the first terminal 101 to display screens of multiple second terminals 102 by creating multiple media streams.
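The routing cases above — multiple applications to one second terminal, or one application to multiple second terminals — can be pictured as a table of (application, sink) pairs, one media stream per pair. The names here are illustrative assumptions rather than the patent's implementation:

```python
class StreamRouter:
    """Conceptual routing table for multi-stream projection: each
    (application, sink) pair corresponds to one media stream."""

    def __init__(self):
        self.routes = []  # list of (app, sink) pairs

    def add_stream(self, app, sink):
        self.routes.append((app, sink))

    def streams_for_sink(self, sink):
        # All applications currently projected to a given destination.
        return [app for app, s in self.routes if s == sink]

router = StreamRouter()
router.add_stream("video app", "TV")
router.add_stream("fitness app", "TV")    # many apps -> one sink
router.add_stream("video app", "tablet")  # one app -> many sinks
```

Because each pair owns its own stream, tearing down one projection (say, the fitness app on the TV) leaves the other streams untouched, which is what enables true multi-task parallelism.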
• the terminals in the embodiments of the present application may be mobile phones, tablet computers, handheld computers, personal computers (PCs), cellular phones, personal digital assistants (PDAs), wearable devices (such as smart watches), in-vehicle computers, game consoles, augmented reality (AR)/virtual reality (VR) devices, and the like.
  • the technical solutions provided in this embodiment can be applied to other electronic devices, such as smart home devices (eg, TV sets), in addition to the above-mentioned terminals (or mobile terminals).
  • the device shapes of the first terminal 101 and the second terminal 102 may be the same or different.
  • the device forms of the multiple second terminals 102 may be the same or different, which is not limited in this embodiment.
  • the first terminal 101 may be a large-screen device such as a PC and a TV
  • the second terminal 102 may be a mobile device such as a mobile phone and a tablet computer.
  • the first terminal 101 may be a mobile device such as a mobile phone and a tablet
  • the second terminal 102 may be a large-screen device such as a PC and a TV.
• take the case where the first terminal 101 is a television and the plurality of second terminals 102 are mobile phones as an example; this embodiment is not limited thereto.
  • the terminal is a mobile phone as an example.
  • FIG. 2 is a schematic structural diagram of a mobile phone according to an embodiment of the present application. The methods in the following embodiments can be implemented in a mobile phone having the above-mentioned hardware structure.
• the mobile phone may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and the like.
  • the mobile phone may further include a mobile communication module 150, a subscriber identification module (subscriber identification module, SIM) card interface 195 and the like.
• the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • the structure illustrated in this embodiment does not constitute a specific limitation on the mobile phone.
  • the cell phone may include more or fewer components than shown, or some components may be combined, or some components may be split, or a different arrangement of components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
• the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller can be the nerve center and command center of the phone.
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
• the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM interface, and/or a USB interface, etc.
  • the charging management module 140 is used to receive charging input from the charger. While the charging management module 140 charges the battery 142 , it can also supply power to the mobile phone through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 can also receive the input of the battery 142 to supply power to the mobile phone.
  • the wireless communication function of the mobile phone can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in a cell phone can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G etc. applied on the mobile phone.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
• the wireless communication module 160 can provide solutions for wireless communication applied on the mobile phone, including wireless local area network (WLAN) (such as a Wi-Fi network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), NFC, and infrared (IR) technology.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify the signal, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the mobile phone is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the mobile phone can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code Division Multiple Access (WCDMA), Time Division Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc.
• the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
  • the mobile phone realizes the display function through the GPU, the display screen 194, and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
• the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the handset may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the mobile phone can realize the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194 and the application processor.
  • the mobile phone may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the mobile phone.
• the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the mobile phone by executing the instructions stored in the internal memory 121 .
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the mobile phone.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the mobile phone can implement audio functions, such as music playback and recording, through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, and an application processor.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be provided on the display screen 194 .
  • the gyroscope sensor 180B can be used to determine the motion attitude of the mobile phone.
  • the air pressure sensor 180C is used to measure air pressure.
  • the magnetic sensor 180D includes a Hall sensor.
  • the mobile phone can use the magnetic sensor 180D to detect the opening and closing of the flip holster.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the mobile phone in various directions (generally three axes).
  • Distance sensor 180F for measuring distance.
  • the mobile phone can use the proximity light sensor 180G to detect the user holding the mobile phone close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the fingerprint sensor 180H is used to collect fingerprints. The mobile phone can use the collected fingerprint characteristics to implement fingerprint unlocking, access the application lock, take a picture with a fingerprint, answer an incoming call with a fingerprint, and the like.
  • Touch sensor 180K also called “touch panel”.
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the mobile phone, which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
  • Motor 191 can generate vibrating cues. The motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the mobile phone.
  • the mobile phone can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the mobile phone interacts with the network through the SIM card to realize functions such as calls and data communication.
  • the handset employs an eSIM, i.e., an embedded SIM card.
  • the eSIM card can be embedded in the mobile phone and cannot be separated from the mobile phone.
  • the embodiments of the present application exemplarily illustrate the software architectures of the first terminal 101 and the second terminal 102 .
  • the following takes the first terminal 101 as the screen projection destination end and the second terminal 102 as the screen projection source end as an example.
  • FIG. 3 is a schematic diagram of the composition of a software architecture provided by an embodiment of the present application.
  • the software architectures of both the first terminal 101 and the second terminal 102 may include: an application layer and a framework layer (framework, FWK).
  • the first terminal 101 may include: a network management module, a decoding module and a window management module.
  • Each module included in the first terminal 101 may be included in any layer of the software architecture of the first terminal 101 .
  • the network management module of the first terminal 101, the decoding module and the window management module are all included in the framework layer of the first terminal 101, which is not specifically limited in this embodiment.
  • the first terminal 101 may also include an application program, which may be included in the above-mentioned application layer.
  • the application program may include a screencasting application, and the screencasting application may assist the first terminal 101 serving as the screencasting destination to implement a many-to-one screencasting function.
  • the second terminal 102 may include: a network management module, an encoding module and a setting module. Each module included in the second terminal 102 may be included in any layer of the software architecture of the second terminal 102 .
  • the network management module and the encoding module of the second terminal 102 are included in the framework layer of the second terminal 102 .
  • the setting module of the second terminal 102 is included in the application layer of the second terminal 102, which is not specifically limited in this embodiment.
  • the second terminal 102 may also include an application program, which may be included in the above-mentioned application layer.
  • the application program may include a screen projection application, and the screen projection application can assist the second terminal 102 serving as the screen projection source end to implement a many-to-one screen projection function.
  • the network management module of the first terminal 101 may be responsible for establishing a transmission channel between the first terminal 101 and the second terminal 102 .
  • the network management module of the first terminal 101 can support the establishment of transmission channels between the first terminal 101 and a plurality of second terminals 102, that is, it supports the establishment of 1-to-N connections.
  • the decoding module of the first terminal 101 may be responsible for decoding the data from the second terminal 102 serving as the screen projection source end (for example, called screen projection data, and may also be called screen recording data).
  • the decoding module supports multi-channel decoding. For example, for data from different second terminals 102, the decoding module of the first terminal 101 can use different decoding parameters to decode the corresponding data.
  • the window management module of the first terminal 101 may be responsible for presenting multiple screen projection windows on the first terminal 101 according to the decoded multi-channel data.
  • the plurality of screen projection windows are in one-to-one correspondence with the plurality of second terminals 102 .
  • the content in the screen projection window is the same as all or part of the content of the interface presented by the corresponding second terminal 102 .
  • the window management module of the first terminal 101 is also responsible for dynamically adding and removing screen projection windows on the first terminal 101, and for reducing, enlarging, and switching the focus window among the screen projection windows presented on the first terminal 101 according to user operations.
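The window-management behavior described above (one projection window per source, dynamic add/remove, and user-driven focus switching) can be sketched as follows. This is a hypothetical Python illustration, not the patent's implementation; all class and method names are assumptions:

```python
class WindowManager:
    """Models the destination-side window manager: one projection
    window per connected source, with dynamic add/remove and focus."""

    def __init__(self):
        self.windows = {}   # source id (e.g. IP address) -> window state
        self.focused = None

    def add_window(self, source_id):
        # Dynamically add a projection window for a newly connected source.
        self.windows[source_id] = {"minimized": False}
        self.focused = source_id          # newest window takes focus

    def remove_window(self, source_id):
        # Dynamically remove the window when a source disconnects.
        self.windows.pop(source_id, None)
        if self.focused == source_id:
            self.focused = next(iter(self.windows), None)

    def set_focus(self, source_id):
        # Switch the focus window according to a user operation.
        if source_id in self.windows:
            self.focused = source_id
```

In this sketch, closing one source's window transfers focus to another remaining window, mirroring the dynamic increase/decrease described above.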
  • the network management module of the second terminal 102 may be responsible for establishing a transmission channel between the second terminal 102 and the first terminal 101 .
  • the encoding module of the second terminal 102 may be responsible for encoding the currently displayed interface or data corresponding to some elements in the interface (for example, called screen projection data).
  • the setting module of the second terminal 102 may be responsible for setting audio and video parameters according to user settings, and the audio and video parameters may include resolution, horizontal and vertical screen, homologous/heterogeneous, layer filtering, and the like.
  • homologous/heterogeneous may refer to whether the second terminal 102 continues to display the current interface after it casts the screen: homologous means that the second terminal 102 continues to display the current interface after casting the screen, and heterogeneous means that the second terminal 102 does not continue to display the current interface after casting the screen.
  • taking the first terminal 101 being a television set and the plurality of second terminals 102 being mobile phones (for example, the plurality of second terminals 102 include a mobile phone 1 and a mobile phone 2) as an example, the screen projection method provided by the embodiments of the present application will be introduced in detail below with reference to the accompanying drawings.
  • FIG. 4 is a schematic flowchart of a screen projection method provided by an embodiment of the present application. As shown in FIG. 4, the method may include the following S401-S406.
  • the mobile phone 1 establishes a connection with the TV set, and the mobile phone 2 establishes a connection with the TV set.
  • in many-to-one screen projection, the display interfaces of multiple terminals (the second terminals, such as the above-mentioned mobile phone 1 and mobile phone 2) are projected onto the same terminal (the first terminal, such as the above-mentioned TV set). When the many-to-one screen projection is performed, the plurality of second terminals can be respectively connected to the first terminal.
  • the first terminal and the second terminal may establish a connection in a wired manner.
  • a wired connection can be established between the mobile phone 1 and the TV set through a data line.
  • a wired connection can be established between the mobile phone 2 and the television set through a data line.
  • the first terminal and the second terminal may establish a connection wirelessly.
  • the connection information may be a device identifier of the terminal, such as an internet protocol (IP) address or a port number, or an account logged in by the terminal, and the like.
  • the account logged in by the terminal may be an account provided by the operator for the user, such as a Huawei account.
  • the account logged in by the terminal can also be an application account, such as WeChat Account, Youku account, etc.
  • the transmission capability of the terminal may be a near-field communication capability or a long-distance communication capability. That is to say, the wireless communication protocol used to establish a connection between terminals, for example between the mobile phone 1 (or the mobile phone 2) and the TV set, may be a near-field communication protocol such as the Wi-Fi protocol, the Bluetooth protocol or the NFC protocol, or may be a cellular network protocol.
  • different second terminals may establish a connection with the first terminal in the same manner or in different manners. For example, the manner in which the TV establishes a connection with the mobile phone 1 may be the same as or different from the manner in which it establishes a connection with the mobile phone 2; there is no specific limitation on this in this embodiment.
  • a plurality of second terminals all establish connections with the first terminal in a wireless manner.
  • the user wants to implement many-to-one screen projection from multiple second terminals to the first terminal, that is, multiple second terminals, such as mobile phone 1 and mobile phone 2, are the source ends of the screen projection, and the first terminal, such as a TV, is the destination end of the screen projection.
  • the user can manually enable the screen projection service function of the TV set serving as the screen projection destination (it may also be referred to as a many-to-one screen projection function).
  • the screen projection service function of the TV can also be turned on automatically, for example, when the TV is powered on. After the screen projection service function of the TV is turned on, the TV can obtain the connection information, such as the IP addresses, of each screen projection source end (eg, the mobile phone 1 and the mobile phone 2).
  • the TV can obtain the connection information of each second terminal serving as the screen projection source end in the following manner.
  • the connection information of each second terminal may be manually input by the user.
  • the TV may display a configuration interface 1 for the user to input connection information of each second terminal, such as an IP address.
  • the television can obtain the connection information of each second terminal.
  • in the configuration interface 1, the number of controls (eg, input boxes) for the user to input connection information may be fixed (eg, 2, 3 or more, which is not specifically limited in this embodiment).
  • the user can input the connection information of the second terminal in the control.
  • the amount of connection information entered by the user can be equal to or less than the number of controls. It can be understood that the number of pieces of connection information input by the user is the same as the number of screen projection source ends that can be connected to the TV.
  • the TV can display a configuration interface 501 , which includes an input box 502 and an input box 503 for the user to input connection information.
  • the user may input the connection information, such as the IP address, of the second terminal serving as the screen projection source end in the input box 502 and the input box 503 respectively.
  • the user inputs the IP address of the mobile phone 1, 192.168.43.164, in the input box 502, and the IP address of the mobile phone 2, 192.168.43.155, in the input box 503.
  • the TV can obtain the connection information of each second terminal from the configuration interface 501 .
  • the user may perform an operation, such as a click operation, on the aggregation button 504 in the configuration interface 501.
  • the TV set can obtain the connection information of each second terminal from the configuration interface 501, such as IP address: 192.168.43.164 and IP address: 192.168.43.155.
  • the IP address: 192.168.43.164 and the IP address: 192.168.43.155 can be obtained from the configuration interface 501 by the window management module of the television.
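The manual-configuration flow above amounts to validating each IP address typed into the configuration interface and collecting the results into "array 1". A minimal sketch, with the function name and structure as assumptions for illustration:

```python
import ipaddress

def collect_source_addresses(inputs):
    """Validate the IP addresses typed into the configuration
    interface and return them as 'array 1' (one entry per screen
    projection source end)."""
    array1 = []
    for text in inputs:
        # ipaddress.ip_address raises ValueError for malformed input,
        # so invalid entries are rejected before any connection attempt.
        addr = ipaddress.ip_address(text.strip())
        array1.append(str(addr))
    return array1
```

For the example above, the two input boxes would yield `["192.168.43.164", "192.168.43.155"]`.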
  • the connection information of each second terminal serving as the screen projection source end may be monitored by the TV.
  • the mobile phone 1, the mobile phone 2 and the TV set have the Bluetooth function turned on.
  • the TV can start to perform the device discovery process.
  • the TV has Bluetooth monitoring enabled.
  • after the second terminal serving as the screen projection source end, such as the mobile phone 1 and the mobile phone 2, has its Bluetooth function enabled, it can send a Bluetooth broadcast, and the TV can receive the Bluetooth broadcast sent by the second terminal.
  • the TV may also exchange connection information, such as an IP address, with the discovered device (such as the above-mentioned second terminal).
  • the TV may send notification messages to the second terminals, such as the mobile phone 1 and the mobile phone 2, respectively, to notify them to report their own IP addresses.
  • the TV set (eg, the network management module of the TV set) can receive the IP addresses from the second terminals, such as the mobile phone 1 and the mobile phone 2.
  • the TV can monitor the Bluetooth broadcasts sent by all the terminals within the monitoring range. In some embodiments, the television set may send the above notification message to all monitored terminals, so that they report their own connection information. For example, if the TV monitors the Bluetooth broadcasts of the mobile phone 1 and the mobile phone 2, it sends the above notification message to both the mobile phone 1 and the mobile phone 2.
  • the TV may display a list of discovered devices.
  • the discovered device list includes the identifiers of all terminals monitored by the TV, for example, including the identifier of the mobile phone 1 and the identifier of the mobile phone 2 .
  • the discovered device list is for the user to select the terminal that the user wants to connect with the TV.
  • the television set may only send the above notification message to the terminal selected by the user. For example, if the user selects the identification of the mobile phone 1 and the identification of the mobile phone 2 in the discovery device list, the TV can send the above notification message to the mobile phone 1 and the mobile phone 2.
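The notification step can thus be modeled as choosing the recipient set: either every monitored terminal, or only those the user picked from the discovered-device list. A hedged sketch (all names are illustrative, not from the patent):

```python
def devices_to_notify(monitored, user_selection=None):
    """Decide which discovered devices receive the 'report your IP
    address' notification: all monitored devices when no selection
    was made, or only those the user picked from the discovered
    device list."""
    if user_selection is None:
        return list(monitored)
    # Preserve discovery order while filtering to the user's picks.
    return [d for d in monitored if d in user_selection]
```

This mirrors the two variants described above: broadcasting the notification to every monitored terminal, or restricting it to the user's selection.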
  • after acquiring the connection information of each second terminal, the television can establish a connection with the corresponding second terminal according to each piece of acquired connection information.
  • the wireless communication protocol adopted when the TV set establishes the connection with each second terminal may be the same or different, which is not specifically limited in this embodiment.
  • the TV can establish a connection with the mobile phone 1 using the Wi-Fi protocol according to the IP address 192.168.43.164 of the mobile phone 1, and establish a connection with the mobile phone 2 using the Wi-Fi protocol according to the IP address 192.168.43.155 of the mobile phone 2.
  • the TV can establish a connection with the mobile phone 1 using the Wi-Fi protocol according to the IP address 192.168.43.164 of the mobile phone 1, and establish a connection with the mobile phone 2 using the Bluetooth protocol according to the IP address 192.168.43.155 of the mobile phone 2.
  • the process of establishing a connection between the TV and the second terminal may be as follows: the network management module of the TV initiates a network connection to the second terminal according to the IP address, for example, by sending a connection establishment request; in response, the network management module of the second terminal completes the establishment of the connection with the TV set.
  • the connection information of each second terminal is specifically obtained by the window management module of the TV.
  • the window management module of the TV can send the obtained connection information of each second terminal to the network management module of the TV, so that the network management module of the TV can initiate network connection.
  • the TV creates views corresponding to the mobile phone 1 and the mobile phone 2 respectively, and configures decoding parameters corresponding to the mobile phone 1 and the mobile phone 2 respectively.
  • the terminal serving as the screen projection source end can project the interface displayed on its display screen to display on the display screen of the terminal serving as the screen projection destination end.
  • in this embodiment, multiple second terminals are used as the screen projection source ends and the first terminal is used as the screen projection destination end; that is, multiple second terminals can project the interfaces displayed on their display screens onto the display screen of the first terminal to realize many-to-one screen projection.
  • the first terminal serving as the screen projection destination can perform the following preparations:
  • the first terminal may create a corresponding view (view), used for rendering the interface projected by the second terminal.
  • views may be drawing components in the embodiments of the present application.
  • the user can input the connection information, such as the IP address, of each second terminal through the configuration interface 1.
  • the first terminal such as the window management module of the first terminal, can obtain the IP addresses of the second terminals from the configuration interface 1 (eg, step 1 in FIG. 6 ).
  • the first terminal may locally store an array, such as array 1.
  • the array 1 includes the IP addresses of each second terminal serving as the screen projection source end.
  • the first terminal may, according to the array 1, create a corresponding view for each second terminal serving as the screen projection source, for rendering the interface projected by each second terminal.
  • a view array is created by the window management module of the first terminal, and the view array may include: views corresponding to the IP addresses in the array 1 one-to-one (eg, step 2 in FIG. 6 ).
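Step 2 above, creating one view per IP address in array 1, might look like the following sketch. The view is modeled as a plain dictionary; real rendering surfaces are outside the scope of this illustration:

```python
def create_view_array(array1):
    """Create the view array: one view per source IP address in
    'array 1', in one-to-one correspondence. Each view will later
    render the interface projected by its source."""
    return {ip: {"view_id": f"view-{i + 1}", "surface": None}
            for i, ip in enumerate(array1)}
```

With the two example phones, this yields view 1 for the mobile phone 1's IP address and view 2 for the mobile phone 2's.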
  • the first terminal configures decoding parameters for each of the plurality of second terminals for decoding screen projection data from each of the second terminals.
  • the specific implementation of projecting the currently displayed interface from the screencasting source end to the screencasting destination end may be that the screencasting source end obtains the data corresponding to the currently displayed interface, such as the screencasting data, and sends it to the screencasting destination end, so that the screencasting destination end displays the corresponding content on its display screen.
  • before the screen projection source end transmits the screen projection data, it can encode the screen projection data and transmit the encoded screen projection data to the screen projection destination end. After receiving the screen projection data from the screen projection source end, the screen projection destination end can decode it.
  • the first terminal may use the same decoding parameters to decode the screen projection data from different second terminals, or may use different decoding parameters to decode the screen projection data of each second terminal.
  • after the window management module of the first terminal successfully creates the view corresponding to each IP address, the window management module may configure decoding parameters associated with the corresponding IP address in the decoding module of the first terminal (eg, step 3 in FIG. 6).
  • the window management module of the first terminal may configure decoding parameters associated with the corresponding IP address in the decoding module through a callback function after the view is successfully created.
  • the first terminal may configure different decoding parameters for each of the second terminals for decoding the screen projection data from each of the second terminals.
  • the above-mentioned decoding parameters may be negotiated between the first terminal and the second terminal, or may be pre-configured on the first terminal, which is not specifically limited in this embodiment.
  • the above-mentioned decoding parameters may include: the distribution mode of the video stream, the specification of the video stream, the video encoding format, the bit rate of the video encoding, the flag of the virtual display (Virtual Display), whether to project audio data, and the like.
  • the distribution mode of the video stream may include a broadcast mode, a distribution mode, a convergence mode, and the like. The broadcast mode may refer to starting only a single video stream and distributing it to multiple projection destination ends with low latency.
  • the distribution mode may refer to enabling multiple video streams to be distributed to multiple different projection destinations.
  • Convergence mode may refer to enabling multiple video streams to be distributed to the same projection destination.
  • the specification of the video stream may refer to the resolution of the video encoder, such as 720P, 1080P, 2K, etc.
  • the encoding format of the video may be H.264 (Advanced Video Coding (AVC)), H.265 (High Efficiency Video Coding (HEVC)), and the like.
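The decoding parameters enumerated above can be grouped into one record per source. A minimal sketch; the field names and default values are assumptions for illustration, not the patent's actual data structure:

```python
from dataclasses import dataclass

@dataclass
class DecodingParams:
    """Per-source decoding parameters, mirroring the fields listed
    above (distribution mode, stream specification, codec, bit rate,
    virtual display flag, audio projection)."""
    distribution_mode: str = "convergence"   # broadcast / distribution / convergence
    resolution: str = "1080P"                # 720P, 1080P, 2K, ...
    codec: str = "H.264"                     # H.264 (AVC) or H.265 (HEVC)
    bitrate_kbps: int = 8000                 # illustrative default
    virtual_display_flag: int = 0
    project_audio: bool = True
```

In a many-to-one scenario the convergence mode is the natural default, since multiple video streams are distributed to the same projection destination end.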
  • the first terminal saves a connection instance for each of the plurality of second terminals for receiving screen projection data from the second terminal.
  • the first terminal establishes a connection with each second terminal based on the obtained (eg, user input) IP address.
  • the window management module of the first terminal can transmit the obtained IP address of each second terminal to the network management module of the first terminal, and the network management module can establish a connection with each second terminal according to the obtained IP addresses (eg, step 4 in FIG. 6).
  • the first terminal, such as the network management module of the first terminal, can locally maintain an array, for example called array 2, and the array 2 includes connection instances (or simply instances) corresponding one-to-one to the IP addresses, used to receive screen projection data from the corresponding second terminals.
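Array 2, one connection instance per IP address in array 1 with each instance receiving screen projection data from its source, can be sketched as follows (class and function names are assumptions):

```python
class ConnectionInstance:
    """One connection per screen projection source end; buffers the
    screen projection data received from that source's IP address."""

    def __init__(self, ip):
        self.ip = ip
        self.received = []

    def on_data(self, chunk):
        # Called whenever a chunk of screen projection data arrives.
        self.received.append(chunk)

def build_array2(array1):
    """'Array 2': connection instances in one-to-one correspondence
    with the IP addresses in 'array 1'."""
    return {ip: ConnectionInstance(ip) for ip in array1}
```

Keeping array 2 keyed by IP address lets the destination end look up the right instance for each incoming packet.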
  • the mobile phone 1 and the mobile phone 2 are used as the screen projection source ends, and the TV set is used as the screen projection destination end.
  • the TV displays the configuration interface 1 (the configuration interface 501 shown in FIG. 5 ), and the user can input the IP addresses of the mobile phone 1 and the mobile phone 2 in the configuration interface 1.
  • the window management module of the TV can obtain the IP address of the mobile phone 1 and the IP address of the mobile phone 2 from the configuration interface 1 .
  • the TV set can save an array 1 locally.
  • the array 1 includes the IP address of mobile phone 1 and the IP address of mobile phone 2.
  • the window management module of the TV can create a view array according to the array 1.
  • the view array includes: the view corresponding to the IP address of the mobile phone 1 in array 1, such as view 1, which is used to render the interface projected by the mobile phone 1; and the view corresponding to the IP address of the mobile phone 2 in array 1, such as view 2, which is used to render the interface projected by the mobile phone 2.
  • after the window management module of the TV successfully creates the view 1 corresponding to the IP address of the mobile phone 1, the decoding parameters associated with the IP address of the mobile phone 1, such as decoding parameter 1, are configured in the decoding module through the callback function. Likewise, the decoding parameters associated with the IP address of the mobile phone 2, such as decoding parameter 2, are configured in the decoding module through the callback function.
  • the TV can configure different decoding parameters for mobile phone 1 and mobile phone 2 for decoding the screen projection data.
  • the network management module of the TV set can also maintain an array 2 locally.
  • the array 2 includes: a connection instance corresponding to the IP address of the mobile phone 1 in array 1, such as connection instance 1, which is used to receive screen projection data from the mobile phone 1; and a connection instance corresponding to the IP address of the mobile phone 2 in array 1, such as connection instance 2, which is used to receive screen projection data from the mobile phone 2.
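Putting the TV-side state together, an incoming packet can be routed by its source IP address to the matching decoding parameters and view, which is the essence of the many-to-one dispatch described above. A hedged sketch; the "decoding" here is a stand-in string operation, not real video decoding, and all names are illustrative:

```python
def dispatch(packet, decoders, views):
    """Route an incoming screen projection packet: decode it with
    the parameters tied to its source IP address, then render it
    into that source's view."""
    ip = packet["src_ip"]
    params = decoders[ip]                 # per-source decoding parameters
    frame = packet["payload"].decode()    # stand-in for real video decoding
    views[ip]["last_frame"] = frame       # render into this source's view
    return params, frame

# Per-source state mirroring the TV-side setup for phone 1 and phone 2:
decoders = {"192.168.43.164": "decoding parameter 1",
            "192.168.43.155": "decoding parameter 2"}
views = {"192.168.43.164": {"last_frame": None},   # view 1
         "192.168.43.155": {"last_frame": None}}   # view 2
```

Because every lookup is keyed by the source IP address, data from the mobile phone 1 can never be decoded with the mobile phone 2's parameters or drawn into its window.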
  • the mobile phone 1 acquires the screen projection data 1 and sends it to the TV.
  • the mobile phone 2 acquires the screen projection data 2 and sends it to the TV.
  • when the first terminal and the second terminal are connected, the second terminal can act as the screen projection source end to project the interface displayed on its display screen onto the display screen of the first terminal, which serves as the screen projection destination end, for display.
  • the conditions for the second terminal to start screen projection include not only successfully establishing a connection with the first terminal, but also receiving a corresponding user operation.
  • the user operation may be an operation that the user selects to start screencasting, such as a user's click operation on the start screencasting button.
  • the operation of selecting to start screencasting may be received by the second terminal before establishing the connection with the first terminal, or may be received after the connection with the first terminal is established. If the operation is received before the connection is established, the second terminal can start screencasting after it successfully establishes a connection with the first terminal. If the operation is received after the connection is established, the second terminal starts screencasting once the connection with the first terminal is successfully established and the operation of selecting to start screencasting is received.
  • the user operation may be an operation of the user confirming screen projection during the process of establishing a connection between the second terminal and the first terminal.
  • the second terminal may display a confirmation interface to ask the user whether to project the display interface of the second terminal to the first terminal for display.
  • the operation for confirming screen projection may be a click operation of the user on the confirm screen projection button in the confirmation interface. Afterwards, once the second terminal successfully establishes a connection with the first terminal, the second terminal can start to perform screen projection.
  • the specific implementation of projecting the interface displayed on the display screen of the second terminal onto the display screen of the first terminal may be: the second terminal obtains the data corresponding to the interface currently displayed by the second terminal (the second interface in this embodiment of the present application), such as the screen projection data, and sends it to the first terminal for the first terminal to display the corresponding content on its display screen, thereby realizing the projection display of the display interface of the second terminal on the display screen of the first terminal.
  • mobile phone 1 and mobile phone 2 are used as the screen projection source, and the TV is used as the screen projection destination.
  • in this example of a wireless screen projection scenario, the above user operation is performed before the mobile phone 1 and the mobile phone 2 establish connections with the TV set.
  • the user can trigger the mobile phone 1 and the mobile phone 2 to start screen projection respectively.
  • the mobile phone 1 currently displays an interface 701
  • the mobile phone 2 currently displays an interface 702 .
  • the user can trigger the mobile phone 1 and the mobile phone 2 to display an interface including a start screen projection button, such as a configuration interface 2, so that the mobile phone 1 and the mobile phone 2 can be triggered to start screen projection.
  • the user can trigger the mobile phone 1 to display a configuration interface 801 , and the configuration interface 801 includes a start screen projection button 802 .
  • the user can perform a click operation on the start screen projection button 802 .
  • the mobile phone 1 receives the user's click operation on the start screen projection button 802 .
  • the mobile phone 1 can acquire the data corresponding to the current display interface 701 .
  • the mobile phone 1 can obtain the data corresponding to its current display interface 701, such as the screen projection data 1, through the display management module of the mobile phone 1 (also called a display manager, DisplayManager, which can be a module of the framework layer of the mobile phone 1).
  • the user can also trigger the mobile phone 2 to display the configuration interface 2 (eg, similar to the configuration interface 801 in FIG. 8 ).
  • the data corresponding to the current display interface 702 can be obtained.
  • the mobile phone 2 can obtain data corresponding to the current display interface of the mobile phone 2 through the display management module of the mobile phone 2 (or a display manager, which can be a module of the framework layer of the mobile phone 2 ), such as screen projection data 2 .
  • the TV set serving as the screen projection destination can establish connections with the mobile phone 1 and the mobile phone 2 respectively according to the IP addresses of the mobile phone 1 and the mobile phone 2 .
  • the mobile phone 1 can send the obtained screen projection data 1 to the TV set, so as to realize the projection display of the display interface 701 of the mobile phone 1 on the TV display screen.
  • the mobile phone 2 can send the obtained screen projection data 2 to the TV set, so as to realize the projection display of the display interface of the mobile phone 2 on the TV display screen.
  • a distributed multimedia protocol (distributed Multi-media Protocol, DMP) may be used to implement the projection display of the display interface of the second terminal on the display screen of the first terminal.
  • the second terminal may use the display management module of the second terminal to create a virtual display (VirtualDisplay). Afterwards, the second terminal may move the drawing of the interface displayed on the display screen of the second terminal to the VirtualDisplay. In this way, the second terminal can obtain the corresponding screen projection data, which it may then send to the first terminal.
  • for example, after obtaining the screen projection data, the second terminal can encode the screen projection data with the encoding module of the second terminal and send the data to the network management module of the second terminal.
  • the network management module of the second terminal may send the encoded screen projection data to the first terminal through the connection established with the first terminal.
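The DMP flow above (draw the interface into a VirtualDisplay, encode the result, then hand it to the network management module for sending) can be modeled as a small three-stage pipeline. The following is an illustrative sketch only; all class and function names are invented stand-ins, and a real implementation would use Android's `DisplayManager.createVirtualDisplay()` and a hardware video encoder.

```python
# Simplified model of the DMP projection pipeline described above:
# VirtualDisplay capture -> encoding module -> network management module.
# Every name here is a hypothetical stand-in, not a real Android API.

class VirtualDisplay:
    """Stands in for the off-screen display the interface is drawn into."""
    def __init__(self):
        self.frames = []

    def render(self, interface):
        # "Move the drawing" of the currently displayed interface here.
        self.frames.append(interface)
        return interface

def encode(frame):
    # Placeholder for video encoding: just tags the payload.
    return {"encoded": True, "payload": frame}

class NetworkManager:
    """Sends encoded screen projection data over the established connection."""
    def __init__(self):
        self.sent = []

    def send(self, packet):
        self.sent.append(packet)

def project_frame(interface, vdisplay, net):
    frame = vdisplay.render(interface)   # step 1: draw into the VirtualDisplay
    packet = encode(frame)               # step 2: encoding module
    net.send(packet)                     # step 3: network management module
    return packet

vdisplay, net = VirtualDisplay(), NetworkManager()
project_frame("interface 701", vdisplay, net)
print(net.sent[0]["payload"])  # interface 701
```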
  • wireless projection can also be used to realize the projection display of the display interface of the second terminal on the display screen of the first terminal. That is, the second terminal can obtain all the layers of its display interface, and integrate all the obtained layers into a video stream (or called screen projection data). After that, the video stream can be encoded by the encoding module of the second terminal and sent to the network management module of the second terminal, so that the network management module sends it to the first terminal through the connection established with the first terminal using the real time streaming protocol (RTSP).
  • the above embodiments are described by projecting all the contents of the display interface on the display screen of the second terminal to the display screen of the first terminal for display as an example.
  • part of the content of the interface displayed on the display screen of the second terminal, such as some elements of the interface, may also be projected onto the display screen of the first terminal for display.
  • the element to be projected to the first terminal may be a predetermined element in the interface, such as a video element.
  • when the second terminal performs screen projection, only the layer where the predetermined element is located may be projected to the first terminal, without projecting the other layers. In this way, the private information on the second terminal can be protected from being displayed on the first terminal.
  • whether the second terminal only projects the layer where the predetermined element is located may be predefined by the system. For example, when the interface displayed on the display screen of the second terminal includes a predetermined element, the second terminal only projects the layer where the predetermined element is located to the first terminal; when the interface displayed on the display screen of the second terminal does not include the predetermined element, the second terminal projects all the contents of the current interface to the first terminal. Whether the second terminal only projects the layer where the predetermined element is located may also be set by the user. For example, continuing with reference to FIG. 8, the configuration interface 801 further includes an option 803 for enabling layer filtering (this option 803 may be a layer filtering setting option in this embodiment of the present application).
  • when the option 803 for enabling layer filtering is selected in the interface 801, the second terminal activates the layer filtering function, that is, the second terminal only projects the layer where the predetermined element is located to the first terminal;
  • when the option 803 for enabling layer filtering is not selected in the interface 801, the second terminal projects the entire content of the current interface to the first terminal.
  • the specific implementation in which the second terminal only projects the layer where the predetermined element is located may include: after the second terminal creates the VirtualDisplay, the second terminal, for example the display synthesis (surface Flinger) module of the second terminal (e.g., a module of the application layer of the second terminal), can composite the interface displayed on the display screen of the second terminal into the VirtualDisplay layer by layer. In the process of layer-by-layer synthesis, the surface Flinger module of the second terminal can determine whether the layer to be synthesized currently includes video elements.
  • the second terminal may determine whether a video element is included in the layer according to the prefix of the layer name of the layer.
  • the prefix of the layer name of the layer where the video element is located is generally Surfaceview. Therefore, when the second terminal determines that the layer name of the layer to be synthesized currently has the prefix Surfaceview, it can determine that the layer includes the video element. When it is determined that the prefix of the layer name of the layer to be composited is not Surfaceview, it is determined that the layer does not include video elements.
  • the surface Flinger module of the second terminal only synthesizes layers including video elements into VirtualDisplay, and layers that do not include video elements are not synthesized into VirtualDisplay to obtain corresponding screen projection data. Wherein, the screen projection data only includes data corresponding to the layer where the video element is located, so as to realize the purpose of projecting only the video element to the first terminal.
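The prefix check described above can be sketched in a few lines. This is a simplified model of the filtering rule, not the actual compositor code; the example layer names are invented, and the `SurfaceView` prefix follows the convention named in the text.

```python
# Model of the layer-filtering rule described above: during layer-by-layer
# composition, only layers whose layer name carries the "SurfaceView" prefix
# (the usual prefix for layers containing video elements) are composited
# into the VirtualDisplay; all other layers are withheld from projection.

def has_video_element(layer_name: str) -> bool:
    """A layer is treated as a video layer if its name has the prefix."""
    return layer_name.startswith("SurfaceView")

def composite_for_projection(layers):
    """Return only the layers that should be composited into VirtualDisplay."""
    return [name for name in layers if has_video_element(name)]

layers = [
    "SurfaceView - com.example.player",   # video layer: projected
    "com.example.chat/MainActivity",      # private UI: filtered out
    "StatusBar",                          # filtered out
]
print(composite_for_projection(layers))
# ['SurfaceView - com.example.player']
```

The result is that the screen projection data contains only the layer where the video element is located, which is what protects the other (possibly private) layers from reaching the first terminal.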
  • when the second terminal is currently playing sound, for example when the user uses the second terminal to watch videos or listen to music, after the second terminal enables screen projection, the second terminal can project not only the currently displayed interface to the first terminal, but also the audio.
  • the above-mentioned screen projection data (such as screen projection data 1 or screen projection data 2) may include video data and audio data.
  • the video data is used for the first terminal to display the corresponding screen projection interface on the display screen of the first terminal, and the audio data is used for the first terminal to play the corresponding sound.
  • the specific acquisition process of the video data is the same as the process described in the above-mentioned embodiments to realize screen projection by using DMP or wireless projection.
  • the acquisition process of the audio data may be as follows: the second terminal may create an audio recording (AudioRecord) object in advance, and create a buffer. After the user triggers the second terminal to start screen projection, the second terminal may call the AudioRecord object. After the AudioRecord object is called, the audio data in the second terminal can be recorded. If the projected interface includes a video component, the audio of the video played in the video component can be recorded to obtain the audio data. The audio data is stored in the created buffer. After that, the second terminal can obtain the audio data from the buffer and send it to the first terminal.
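The audio path above (create a recorder object and a buffer in advance, start recording when projection starts, then drain the buffer toward the first terminal) can be modeled without the Android `AudioRecord` API itself. A `deque` stands in for the shared buffer; all names are illustrative.

```python
# Sketch of the audio acquisition described above: recorded audio chunks
# land in a pre-created buffer only after projection has started; the
# terminal then drains the buffer and sends each chunk to the destination.
from collections import deque

class AudioRecorder:
    """Minimal stand-in for an AudioRecord object writing into a buffer."""
    def __init__(self, buffer):
        self.buffer = buffer
        self.started = False

    def start(self):
        self.started = True

    def on_audio(self, chunk):
        if self.started:                 # record only after projection starts
            self.buffer.append(chunk)

def drain_and_send(buffer, send):
    """Obtain audio data from the buffer and send it to the first terminal."""
    while buffer:
        send(buffer.popleft())

buffer = deque()
recorder = AudioRecorder(buffer)
recorder.on_audio(b"early")              # ignored: projection not started yet
recorder.start()
recorder.on_audio(b"chunk-1")
recorder.on_audio(b"chunk-2")

sent = []
drain_and_send(buffer, sent.append)
print(sent)  # [b'chunk-1', b'chunk-2']
```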
  • both the video data and the audio data may be screen-cast to the first terminal, or only the video data may be screen-cast to the first terminal without the audio data being screen-cast to the first terminal.
  • Whether or not to project audio data can be predefined by the system or set by the user.
  • the configuration interface 801 also includes an option 804 for enabling audio.
  • when the user selects the audio-enabled option 804 in the configuration interface 801, the second terminal projects both the video data and the audio data to the first terminal; when the user does not select the audio-enabled option 804 in the configuration interface 801, the second terminal only projects the video data to the first terminal.
  • the TV respectively decodes the screen projection data 1 and the screen projection data 2 according to the configured corresponding decoding parameters.
  • the TV draws the screen projection interface 1 and the screen projection interface 2 by using the created corresponding views according to the decoded screen projection data 1 and the screen projection data 2, and displays them on the TV.
  • the screen projection interface 1 and the screen projection interface 2 may be the first interface in the embodiment of the application.
  • the first terminal may display screen projection interfaces corresponding to the plurality of second terminals one-to-one on the display screen of the first terminal according to the received screen projection data.
  • the TV receives the screen projection data 1, it can display the screen projection interface on the TV according to the screen projection data 1, such as the screen projection interface 1.
  • the content displayed in the screen projection interface 1 is the same as all or part of the content of the display interface on the display screen of the mobile phone 1, or the content in the screen projection interface 1 is a mirror image of all or part of the content of the display interface on the display screen of the mobile phone 1.
  • after the TV receives the screen projection data 2, it can display the screen projection interface on the TV according to the screen projection data 2, such as the screen projection interface 2. The content displayed in the screen projection interface 2 is the same as all or part of the content of the display interface on the display screen of the mobile phone 2, or the content in the screen projection interface 2 is a mirror image of all or part of the content of the display interface on the display screen of the mobile phone 2.
  • the first terminal correspondingly displays the screen projection interface on the first terminal.
  • the specific implementation may be: after the network management module of the first terminal receives the screen projection data from the second terminal, it may send the screen projection data to the decoding module of the first terminal for decoding (eg, step 5 shown in FIG. 6). After the decoding module of the first terminal decodes the screen projection data with the corresponding decoding parameters, it sends the data to the window management module of the first terminal; the window management module of the first terminal uses the corresponding view to draw and display the corresponding screen projection interface on the display screen of the first terminal according to the received screen projection data (eg, step 6 in FIG. 6).
  • after the network management module of the mobile phone 1 sends the encoded screen projection data 1 to the TV through the connection established with the TV, the network management module of the TV can receive the encoded screen projection data 1.
  • the network management module of the TV can receive the encoded screen projection data 1 through the connection instance 1 in the array 2 maintained locally.
  • the network management module of the TV can determine that the IP address of the screen projection source is the IP address of the mobile phone 1 according to the connection instance 1 of the received data.
  • the network management module of the TV can send the encoded screen projection data 1 and the IP address of the mobile phone 1 to the decoding module of the TV.
  • the decoding module of the TV can obtain the corresponding decoding parameters according to the IP address of the mobile phone 1, such as obtaining the decoding parameter 1, and use the decoding parameter 1 to decode the screen projection data 1.
  • the decoding module of the TV can send the decoded screen projection data 1 to the window management module of the TV.
  • the window management module of the TV uses the view 1 corresponding to the IP address of the mobile phone 1 in the created view array to realize the drawing of the screen projection interface 1, as shown in (c) in Figure 7.
  • the screen projection interface 1 is displayed on the display screen of the TV.
  • the content in the screen projection interface 1 is the same as the content in the interface 701 displayed by the mobile phone 1 in (a) of FIG. 7 .
  • the network management module of the mobile phone 2 sends the encoded screen projection data 2 to the TV through the connection established with the TV
  • the network management module of the TV can receive the encoded screen projection data 2 through the connection instance 2 in the array 2 maintained locally.
  • the network management module of the TV can determine that the IP address of the screen projection source is the IP address of the mobile phone 2 according to the connection instance 2 of the received data. After that, the network management module of the TV can send the encoded screen projection data 2 and the IP address of the mobile phone 2 to the decoding module of the TV.
  • the decoding module of the TV can obtain the corresponding decoding parameters according to the IP address of the mobile phone 2 , such as obtaining the decoding parameter 2 , and use the decoding parameter 2 to decode the screen projection data 2 .
  • the decoding module of the TV can send the decoded screen projection data 2 to the window management module of the TV.
  • the window management module of the TV uses the view 2 corresponding to the IP address of the mobile phone 2 in the created view array to realize the drawing of the screen projection interface 2, as shown in (c) in Figure 7.
  • the screen projection interface 2 is displayed on the display screen of the TV.
  • the content in the screen projection interface 2 is the same as the content in the interface 702 displayed by the mobile phone 2 in (b) of FIG. 7 .
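The destination-side bookkeeping described in this passage, keeping per-source decoding parameters and a per-source view, both keyed by the source's IP address, and dispatching each incoming packet accordingly, can be modeled as follows. The IP addresses, parameter strings, and the decode step are invented placeholders.

```python
# Model of the screen projection destination described above: the TV keeps,
# per source IP address, the decoding parameters and the view used to draw
# that source's projection interface, and routes incoming data by the IP
# address determined from the connection instance.

class ProjectionDestination:
    def __init__(self):
        self.decode_params = {}   # source IP -> decoding parameters
        self.views = {}           # source IP -> view drawing that interface

    def register_source(self, ip, params, view):
        self.decode_params[ip] = params
        self.views[ip] = view

    def on_data(self, ip, encoded):
        params = self.decode_params[ip]           # pick decoder by source IP
        decoded = f"decoded({encoded} with {params})"
        self.views[ip].append(decoded)            # draw into the matching view
        return decoded

tv = ProjectionDestination()
view1, view2 = [], []
tv.register_source("192.168.0.11", "decoding parameter 1", view1)  # phone 1
tv.register_source("192.168.0.12", "decoding parameter 2", view2)  # phone 2

tv.on_data("192.168.0.11", "data 1")
tv.on_data("192.168.0.12", "data 2")
print(view1[0])  # decoded(data 1 with decoding parameter 1)
```

The point of the lookup is that screen projection data from the two phones can arrive interleaved on one device yet always reach the correct decoder configuration and the correct on-screen view.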
  • the window used by the first terminal to display the screen-casting interface may be referred to as a screen-casting window.
  • the window used for displaying the screen projection interface 1 may be referred to as the screen projection window 1
  • the window used for displaying the screen projection interface 2 may be referred to as the screen projection window 2 .
  • the first terminal may display a corresponding screen projection window after determining to be connected to the second terminal (such as the above-mentioned mobile phone 1 or mobile phone 2).
  • the first terminal may set the size and layout of the screen projection window corresponding to each second terminal according to the number of the second terminals serving as the screen projection source and the size of the display screen of the first terminal.
  • the number of the second terminals serving as the screen projection source is two.
  • screen projection windows corresponding to the two second terminals respectively can be displayed on the display screen of the first terminal.
  • the two screen projection windows may be arranged vertically or horizontally on the display screen of the first terminal.
  • the size of the two projection windows can be the same or different.
  • the screen projection window 1 corresponding to the mobile phone 1 and the screen projection window 2 corresponding to the mobile phone 2 are vertically arranged, and the screen projection window 1 and the screen projection window 2 have the same size.
  • the two screen projection windows may be displayed on the display screen of the first terminal at the same time, or may be displayed successively on the display screen of the first terminal in the order in which the corresponding second terminals start screen projection, or in the order in which the first terminal receives the screen projection data corresponding to the second terminals.
  • the size of the screen projection window displayed first may be the same as the size of the display screen of the first terminal, and the screen projection window displayed later may be smaller than the display screen of the first terminal and displayed in the form of a floating window above the screen projection window displayed first.
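The sizing rule described above (the destination splits its display equally among the screen projection windows of the connected sources) can be sketched as a layout computation. Treating the "vertical arrangement" of the two windows as equal side-by-side columns is an assumption for illustration; a real layout policy may differ.

```python
# Sketch of the window-layout rule described above: given the destination
# display size and the number of sources, the projection windows are given
# equal sizes whose sum fills the screen width.

def layout_windows(screen_w, screen_h, n_windows):
    """Return (x, y, w, h) rectangles for n equal, side-by-side windows."""
    if n_windows == 0:
        return []
    w = screen_w // n_windows
    return [(i * w, 0, w, screen_h) for i in range(n_windows)]

# Two phones projecting onto a 1920x1080 TV: two equal windows.
rects = layout_windows(1920, 1080, 2)
print(rects)  # [(0, 0, 960, 1080), (960, 0, 960, 1080)]
```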
  • the corresponding screen projection window can be reduced, enlarged, switched to as the focus window, or closed according to the user's operation (the operation may be the first operation in this embodiment of the application).
  • the operation may be a user's touch operation on the screen of the first terminal, or may be an operation input by the user using an input device of the first terminal (eg, a mouse, a keyboard of a PC; another example, a remote control of a TV).
  • take as an example that the screen projection interface 1 and the screen projection interface 2 are displayed on the TV, the window for displaying the screen projection interface 1 is the screen projection window 1, and the window for displaying the screen projection interface 2 is the screen projection window 2.
  • the user can use the remote control of the television to control the interface currently displayed on the television.
  • after receiving the user's control operation (eg, step 1 in FIG. 9), the television can determine, according to the received control operation, whether the focus window needs to be switched (eg, step 2 in FIG. 9). If the control operation is an operation of switching the focus window, it is determined that the focus window needs to be switched.
  • the operation of switching the focus window may be the user's operation of the left button or the right button of the remote control. That is, if the control operation received by the television is an operation of the left or right button of the remote control, the television may determine that the focus window needs to be switched, and the television may switch the focus (eg, step 3 in FIG. 9 ).
  • the television set may locally save a focus window variable, and the focus window variable is used to indicate which window is the focus window among the multiple screen projection windows currently displayed.
  • the operation of switching the focus of the television set may include that the television set updates the focus window variable from identifier 1 to identifier 2.
  • the identifier 1 is the identifier of the screen projection window that is the focus window before the focus is switched
  • the identifier 2 is the identifier of the screen projection window that is the focus window after the focus is switched.
  • the screen projection window of one of the screen projection interfaces can be the focus window by default.
  • by default, the screen projection window 1 used by the TV to display the screen projection interface 1 is the focus window.
  • the television set can display a prompt sign 1001 for prompting the user that the current screen projection window 1 is the focus window.
  • the TV set can also set the focus window variable as the identifier of the screen projection window 1, which is used to indicate that the screen projection window 1 is the focus window.
  • the TV can determine that the focus window needs to be switched, and then the TV updates the focus window variable from the identifier 1 of the screen projection window 1 to the identifier 2 of the screen projection window 2, which is used to indicate that the screen projection window 2 is the current focus window.
  • the TV can update the position of the prompt mark 1001 on the TV display screen, that is, slide it from the position of the screen projection window 1 to the position of the screen projection window 2, so as to prompt the user that the screen projection window 2 is now the focus window.
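The focus-window bookkeeping described above (a locally saved focus window variable, updated when the left or right remote-control button is pressed) can be modeled as a small state machine. The window identifiers and key names are illustrative stand-ins.

```python
# Model of focus switching described above: the TV keeps a focus-window
# variable holding the identifier of the current focus window; left/right
# remote-control keys move the focus between the displayed windows, and
# any other key leaves the focus window variable unchanged.

class FocusTracker:
    def __init__(self, window_ids):
        self.window_ids = list(window_ids)
        self.focus = self.window_ids[0]   # one window is the focus by default

    def on_key(self, key):
        if key not in ("left", "right"):
            return self.focus             # not a focus-switch operation
        i = self.window_ids.index(self.focus)
        step = 1 if key == "right" else -1
        self.focus = self.window_ids[(i + step) % len(self.window_ids)]
        return self.focus

tracker = FocusTracker(["projection window 1", "projection window 2"])
print(tracker.focus)            # projection window 1
tracker.on_key("right")
print(tracker.focus)            # projection window 2
```

A UI layer would additionally move the prompt mark (such as the mark 1001) to the window whose identifier the variable now holds.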
  • the TV can determine whether the current focus window needs to be enlarged according to the received control operation and in combination with the size of the current focus window (eg, step 4 in FIG. 9 ).
  • if the control operation is a selection operation on the focus window, for example an operation of the confirmation button on the remote control, and the current focus window is not a maximized window, the TV can enlarge the current focus window.
  • if other screen projection windows are displayed, the TV can hide them (eg, step 5 in Figure 9). It can be understood that the size of the screen projection interface changes with the size of the screen projection window, and the screen projection interface is also hidden along with its screen projection window.
  • the current focus window is the screen-casting window 1 .
  • when the TV receives the user's operation of the remote control confirmation button and determines that the current focus window, that is, the screen projection window 1, is not the maximized window, the TV can maximize the screen projection window 1 and hide the other screen projection windows (i.e. hide the screen projection window 2).
  • the television may determine the enlarged size of the current focus window according to the size of the display screen of the television, for example, the enlarged size is the same as the size of the display screen of the television.
  • the TV can determine whether the current focus window needs to be reduced according to the received control operation and in combination with the size of the current focus window (eg, step 6 in FIG. 9). If the control operation is an operation of the confirmation button on the remote control, and the current focus window is a maximized window, the TV can reduce the current focus window and display the other non-focus windows (eg, step 7 in FIG. 9). For example, the TV currently displays the maximized screen projection window 1, and the screen projection window 2 is hidden.
  • the TV can reduce the screen projection window 1 and display the other hidden screen projection windows, that is, display the screen projection window 2.
  • the TV may determine the reduced size of the current focus window according to the size of the TV display screen and the number of other hidden screen projection windows; for example, the reduced focus window is the same size as the other hidden screen projection windows, and the sum of the sizes of all the screen projection windows equals the size of the TV display screen.
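The confirm-button behavior in the two passages above is a toggle: if the focus window is not maximized, maximize it and hide the others; if it is maximized, restore it and show the others. A minimal sketch, with a visibility flag per window standing in for the real windows:

```python
# Model of the confirm-key toggle described above: pressing the confirmation
# button either maximizes the focus window (hiding the other projection
# windows) or restores it (showing all projection windows again).

class WindowZoom:
    def __init__(self, windows, focus):
        self.visible = {w: True for w in windows}
        self.focus = focus
        self.maximized = False

    def on_confirm(self):
        if not self.maximized:
            # enlarge the focus window to full screen, hide the others
            self.maximized = True
            for w in self.visible:
                self.visible[w] = (w == self.focus)
        else:
            # shrink the focus window back, show the hidden windows again
            self.maximized = False
            for w in self.visible:
                self.visible[w] = True
        return self.maximized

zoom = WindowZoom(["window 1", "window 2"], focus="window 1")
zoom.on_confirm()
print(zoom.visible)  # {'window 1': True, 'window 2': False}
zoom.on_confirm()
print(zoom.visible)  # {'window 1': True, 'window 2': True}
```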
  • the TV set may update the screen projection interface in the current focus window according to the received control operation (eg, step 8 in FIG. 9 ).
  • the control operation may be an operation for operating the screen projection interface (this operation may be the second operation in this embodiment of the application).
  • the TV can send the control operation to the screen projection source corresponding to the current focus window, so that the screen projection source can execute the corresponding event according to the received control operation and update the interface displayed by the screen projection source (the updated interface of the screen projection source may be the third interface in this embodiment of the present application).
  • the screen projection source end can project the updated interface to the screen projection destination end, such as a TV set, that is, the screen projection source end can obtain new screen projection data and send it to the TV set.
  • after the TV receives the updated screen projection data, it can update the screen projection interface in the current focus window according to the new screen projection data (the updated screen projection interface of the TV may be the fourth interface in the embodiment of the application).
  • the current focus window is screen projection window 1.
  • the content of the screen-casting interface 1 in the screen-casting window 1 is PPT.
  • the TV can send the operation of the up or down button on the remote control to the mobile phone 1 corresponding to the screen projection window 1 .
  • the mobile phone 1 can perform page-up or page-down operations on the PPT according to the operation, and can acquire new screen projection data and send it to the TV.
  • the TV can update and display the screen projection interface 1 in the screen projection window 1 according to the new screen projection data.
  • for the process in which the mobile phone 1 acquires and sends the new screen projection data, and the TV receives the new screen projection data and displays the screen projection interface according to it, the specific implementation is similar to the corresponding processes in S403-S406 in the above embodiment and will not be described in detail here.
  • the control operation used to operate the screen projection interface may also be another operation, such as an operation on an operable element in the screen projection interface. If the control operation is an operation on an operable element in the screen projection interface, the TV can not only send the operation to the corresponding screen projection source, but also send the operation position of the operation in the screen projection interface to the screen projection source. According to the operation position, the screen projection source can determine which element in the current display interface the user is operating, and then execute the corresponding event according to the received operation and the determined element to be operated, and update the interface displayed by the screen projection source.
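The position-based element lookup described above amounts to a hit test on the source side: given the operation position forwarded by the destination, find the element of the current interface whose rectangle contains that point. The element names and rectangles below are invented examples.

```python
# Sketch of the element hit-test described above: the destination forwards
# an operation together with its position in the projection interface, and
# the source uses that position to decide which element the user operated.

def element_at(elements, x, y):
    """elements: list of (name, (left, top, width, height)) in interface
    coordinates; returns the name of the element containing (x, y)."""
    for name, (ex, ey, ew, eh) in elements:
        if ex <= x < ex + ew and ey <= y < ey + eh:
            return name
    return None  # the position hits no operable element

interface = [
    ("video element", (0, 0, 1080, 600)),
    ("play button",   (490, 620, 100, 100)),
]
print(element_at(interface, 500, 650))  # play button
print(element_at(interface, 10, 10))    # video element
```

The source would then execute the corresponding event on the returned element and project the updated interface back.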
  • the first terminal can also dynamically adjust the size and arrangement of the screen projection windows corresponding to each second terminal displayed by the first terminal according to the number of the second terminals serving as the screen projection source.
  • the number of the second terminals serving as the screen projection source end can be dynamically increased or decreased.
  • the first terminal has established connections with multiple second terminals, and the first terminal currently displays screen projection windows corresponding to the multiple terminals respectively.
  • when the first terminal is disconnected from one of the second terminals, or the first terminal receives the user's operation to close a screen projection window (for example, when a screen projection window is the focus window, the TV receives the user's operation of the return key on the remote control), that is, when the number of second terminals serving as the screen projection source decreases, the first terminal can stop displaying the screen projection window corresponding to the disconnected second terminal, and adjust the size and arrangement of the screen projection windows corresponding to the remaining connected second terminals according to their number.
  • when the number of second terminals serving as the screen projection source increases, the first terminal can add a screen projection window corresponding to the new second terminal, and adjust the size and arrangement of the screen projection window corresponding to each second terminal according to the number of second terminals currently serving as the screen projection source.
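The dynamic adjustment above can be modeled as simple bookkeeping over the list of active sources, with each connect, disconnect, or window-close event triggering a recomputation of each window's share of the screen. The equal-width split is an assumption for illustration.

```python
# Model of the dynamic adjustment described above: as second terminals
# (projection sources) are added or removed, the destination recomputes
# the width allotted to each source's screen projection window.

class SourceWindows:
    def __init__(self, screen_w):
        self.screen_w = screen_w
        self.sources = []

    def widths(self):
        n = len(self.sources)
        return {s: self.screen_w // n for s in self.sources} if n else {}

    def add(self, source):
        # a new second terminal starts projecting: add its window, relayout
        self.sources.append(source)
        return self.widths()

    def remove(self, source):
        # disconnect, or the user closes the window: drop it, relayout
        self.sources.remove(source)
        return self.widths()

wins = SourceWindows(1920)
wins.add("mobile phone 1")
print(wins.add("mobile phone 2"))     # {'mobile phone 1': 960, 'mobile phone 2': 960}
print(wins.remove("mobile phone 1"))  # {'mobile phone 2': 1920}
```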
  • the examples in the above embodiments are described by taking the implementation of many-to-one screen projection as an example in a wireless screen projection scenario.
  • the many-to-one screen projection method in this embodiment may also be applied to a cross-device dragging scenario.
  • the specific implementation of many-to-one screen projection is similar to the implementation in S401-S406 above, with the following differences:
  • the timing at which the first terminal creates a view and configures decoding parameters may be after the connection with the corresponding second terminal, such as mobile phone 1 or mobile phone 2, is successfully established, or after the first terminal determines that the corresponding second terminal starts screen projection.
  • when the second terminal determines that the user triggers cross-device dragging, it may send corresponding drag data to the first terminal.
  • the drag data may be related data in the drag start event, and may be used to indicate the start of the dragging.
  • the first terminal may determine that the second terminal will start screen projection.
  • the TV may create a view corresponding to the second terminal, and configure decoding parameters corresponding to the second terminal.
  • the conditions for the second terminal to start screen projection include not only successfully establishing a connection with the first terminal, but also determining that the user's dragging intention is cross-device dragging.
  • the object dragged by the user may be an interface displayed by the second terminal, or an element in the interface (such as a video element, a picture-in-picture, or a floating window).
  • the second terminal can determine whether the user's drag intention is cross-device dragging. If it is determined that the user's intention in dragging the element is cross-device dragging, screen projection can be started.
  • the second terminal may set a drag-aware area to determine whether the user's drag intention is to drag across devices.
  • the drag sensing area may be an area on the display screen of the second terminal at a predetermined distance from the edge of the display screen. The predetermined distance may be predefined, or a setting interface may be provided for the user to set.
  • the drag sensing area of the second terminal may be one or multiple.
  • a transparent view control is set in the drag-aware area. After a dragged object, such as an interface or an element in the interface, is dragged into the drag-aware area, the view control set in the corresponding area can detect the dragging of the element. When the view control detects that the element is dragged in, the second terminal can determine that the user's drag intention is cross-device dragging.
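The drag-aware area check described above can be sketched as a simple edge-proximity test. The function name, coordinate convention, and band width below are illustrative assumptions (written in Python for brevity), not the patent's implementation:

```python
# Hypothetical sketch of the drag-aware area: the drag intention is treated
# as cross-device once the dragged object enters a band of predefined width
# along any edge of the source display.
def is_cross_device_drag(x, y, screen_w, screen_h, band=50):
    """Return True when point (x, y) falls inside the drag-aware band
    running along each edge of a screen_w x screen_h display."""
    return (
        x < band or y < band or
        x > screen_w - band or y > screen_h - band
    )
```

In a real implementation the detection would come from the transparent view control receiving the drag event, not from polling coordinates.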
  • the second terminal can project the display interface (the display interface can be the second interface in the embodiment of the application) to the first terminal.
  • the specific implementation is similar to the implementation of the second terminal projecting the display interface to the first terminal in the wireless screen projection scenario in S403 and S404.
  • the second terminal may only project the element to the first terminal.
  • the second terminal may obtain the layer name (layerName) of the element in the currently displayed interface.
  • the second terminal can determine whether the layer name of the layer currently to be synthesized is the same as the acquired layer name. If they are the same, the second terminal composites the layer into the VirtualDisplay; if not, the second terminal does not synthesize the layer into the VirtualDisplay, so that only the element dragged by the user is projected to the first terminal.
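A minimal sketch of this layer-name filtering, with layers modeled as plain dictionaries (the real composition happens in the display system of the source device, and layer records are far richer):

```python
# Illustrative sketch: keep only the layer whose name matches the dragged
# element, so the VirtualDisplay (and hence the projection) carries just
# that element.
def layers_for_virtual_display(all_layers, dragged_layer_name):
    """Return the subset of layers to composite into the VirtualDisplay."""
    return [layer for layer in all_layers if layer["name"] == dragged_layer_name]
```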
  • the second terminal may display the dragged object on the first terminal after receiving the user's drag release operation.
  • a part of the area of the dragged object is displayed on the display screen of the second terminal, and another part of the area is hidden (or overflows the display screen).
  • the object may also be displayed on both terminals at the same time. Specifically, for the dragged object, one part of its area is displayed on the second terminal, and the other part (the area overflowing the second terminal) is displayed on the first terminal.
  • the specific implementation of displaying the dragged object on the first terminal and the second terminal at the same time may be as follows: after screen projection is started, the second terminal not only sends the screen projection data to the first terminal, but also sends the rectangle (rect) information of the dragged object and the coordinate information of a certain corner of the object during the dragging process (such as any one of the upper left corner, lower left corner, upper right corner, and lower right corner). That is, the data sent by the second terminal to the first terminal includes the screen projection data, the rectangle information of the dragged object, and the coordinate information of a certain corner of the object during the dragging process.
  • the rectangle information of the object includes the coordinate information of its four corners (upper left, upper right, lower left, and lower right) when the dragging starts.
  • the first terminal can determine, according to the rectangle information of the object, the coordinate information of a corner of the object during the dragging process, and the resolution of the second terminal, whether the object has an area overflowing the display screen of the second terminal. If part of the object overflows the display screen of the second terminal, the first terminal can determine the information of the area of the object that can be displayed on the display screen of the first terminal (this area is the same as the area of the object that overflows the display screen of the second terminal).
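The overflow computation can be illustrated with simple arithmetic. The sketch below handles only a drag toward the right edge and invents its own parameter names; the patent describes the general case for any corner and direction:

```python
# Illustrative sketch of the overflow check on the destination side.
def overflow_region(rect, top_left, src_w, src_h):
    """rect: (x0, y0, x1, y1) corners of the object when the drag starts,
    which fixes its width and height. top_left: current (x, y) of the
    object's upper-left corner during the drag. Returns the overflowing
    part as (x, y, w, h) in the object's own coordinates, or None.
    Only overflow past the right edge of the source display is handled."""
    obj_w = rect[2] - rect[0]
    obj_h = rect[3] - rect[1]
    x, _ = top_left
    if x + obj_w > src_w:                # part of the object is off-screen
        visible_w = max(src_w - x, 0)    # width still shown on the source
        return (visible_w, 0, obj_w - visible_w, obj_h)
    return None
```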
  • the resolution of the second terminal may be sent by the second terminal to the first terminal during the process of establishing a connection between the first terminal and the second terminal, or after the connection is successfully established.
  • the first terminal may display the content of the area corresponding to the object on the display screen of the first terminal according to the determined area information and screen projection data.
  • the TV obtains the IP address 1 of the mobile phone 1 and establishes a connection with the mobile phone 1 .
  • the TV creates a view corresponding to IP address 1, eg, view a.
  • the TV sets the decoding parameters associated with IP address 1, such as decoding parameters a.
  • the TV saves the connection instance a corresponding to the IP address 1, and is used to receive the screen projection data from the mobile phone 1.
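The per-source bookkeeping in these steps (a view, decoding parameters, and a connection instance, all keyed by the source's IP address) can be sketched as a small registry. All names here are illustrative placeholders, not APIs from the patent:

```python
# Hedged sketch of the destination-side bookkeeping: each source IP gets its
# own view, decoder configuration, and connection instance, so streams from
# several phones can be received, decoded, and drawn independently.
class ProjectionDestination:
    def __init__(self):
        self.sources = {}

    def on_connected(self, ip):
        """Prepare per-source state when a source device connects."""
        self.sources[ip] = {
            "view": f"view-{ip}",        # drawing surface for this source
            "decoder": f"decoder-{ip}",  # decoding parameters for this stream
            "connection": f"conn-{ip}",  # channel that receives its data
        }

    def lookup(self, ip):
        """Find the state for the source that sent some data."""
        return self.sources[ip]
```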
  • the user opens the video application of the mobile phone 1 to play the video X.
  • the mobile phone 1 receives an operation triggered by the user to drag up the video element 1201 for presenting the video X.
  • the mobile phone 1 can drag the video element 1201 up, and can also perform background blur processing.
  • the mobile phone 1 receives the user's drag operation on the dragged video element 1201 .
  • the mobile phone 1 makes the video element 1201 move on the display screen of the mobile phone 1 following the movement of the user's finger, giving the user a visual effect of the video element 1201 being dragged by the user's finger.
  • the dragging direction of the video element 1201 may be upward, leftward, rightward, or downward.
  • the user may use a finger to perform a drag operation on the dragged video element 1201 , such as an operation of long pressing and moving the finger to the right.
  • the phone can draw and display an animation of the video element 1201 as the user's finger moves.
  • the mobile phone 1 can determine whether the user's drag intention is a cross-device operation.
  • after the mobile phone 1 determines that the user's drag intention is a cross-device operation, it can create a virtual display and draw the layer where the video element 1201 in the current interface is located onto the virtual display to obtain screen projection data, referred to as screen projection data a.
  • the mobile phone 1 can encode the screen projection data a and send it to the TV.
  • the mobile phone 1 can also send the rectangle information of the video element 1201 and the coordinate information of a certain corner (eg, upper left corner) of the video element 1201 to the TV set during the dragging process.
  • the TV can receive the encoded screen projection data a, the rectangle information of the video element 1201, and the coordinate information of the upper left corner of the video element 1201 during the dragging process through the connection instance a.
  • according to the received rectangle information of the video element 1201, the coordinate information of the upper left corner of the video element 1201 during the dragging process, and the resolution of the mobile phone 1, the TV can determine that part of the video element 1201 overflows the display screen of the mobile phone 1, and can further determine the information of the area of the video element 1201 that can be displayed on the display screen of the TV.
  • the TV can determine that the IP address of the screen projection source is the IP address 1 of the mobile phone 1 .
  • according to the IP address 1, the TV can decode the received screen projection data a by using the decoding parameter a corresponding to the IP address 1.
  • the TV can use the created view a corresponding to the IP address 1 to realize the drawing of the screen projection interface 1. As shown in (a) of FIG.
  • the screen projection interface 1 is displayed on the display screen of the TV, and the content in the screen projection interface 1 is the same as the content of the video X, carried in the video element 1201 of the mobile phone 1, that overflows the display screen of the mobile phone.
  • the mobile phone 1 can acquire the screen projection data a and the coordinate information of the upper left corner of the video element 1201 during the dragging process in real time, and send them to the TV.
  • the TV can update the screen projection interface 1 in real time according to the received data.
  • the TV can display the screen projection interface 1 in full screen on the display screen of the TV set according to the screen projection data a received in real time.
  • the content in the screen projection interface 1 is the same as the entire content of the video X carried in the video element 1201 .
  • the TV obtains the IP address 2 of the mobile phone 2 and establishes a connection with the mobile phone 2 .
  • the TV creates a view corresponding to IP address 2, eg, view b.
  • the TV configures decoding parameters associated with IP address 2, such as decoding parameters b.
  • the TV saves the connection instance b corresponding to the IP address 2 for receiving the screen projection data from the mobile phone 2 .
  • the user opens the fitness application of the mobile phone 2 to view the fitness video.
  • the mobile phone 2 receives the user's drag operation on the video element bearing the fitness video.
  • the mobile phone 2 makes the video element move on the display screen of the mobile phone 2 following the movement of the user's finger, giving the user a visual effect that the video element is dragged by the user's finger.
  • the mobile phone 2 can determine whether the user's drag intention is a cross-device operation.
  • after the mobile phone 2 determines that the user's drag intention is a cross-device operation, it can create a virtual display and draw the layer where the video element in the current interface is located onto the virtual display to obtain screen projection data, referred to as screen projection data b.
  • the mobile phone 2 can encode the screen projection data b and send it to the TV.
  • the mobile phone 2 can also send the rectangle information of the video element and the coordinate information of a certain corner (eg, upper left corner) of the video element to the TV set during the dragging process.
  • the TV can receive the encoded screen projection data b, the rectangle information of the video element, and the coordinate information of the upper left corner of the video element during the dragging process through the connection instance b.
  • according to the rectangle information of the video element, the coordinate information of the upper left corner of the video element during the dragging process, and the resolution of the mobile phone 2, the TV can determine the information of the area of the video element that can be displayed on the display screen of the TV.
  • the TV can determine that the IP address of the screen projection source is the IP address 2 of the mobile phone 2 .
  • according to the IP address 2, the TV can decode the received screen projection data b by using the decoding parameter b corresponding to the IP address 2.
  • the TV can use the created view b corresponding to the IP address 2 to realize the drawing of the screen projection interface 2.
  • the TV can simultaneously display the screen projection interface 1 and the screen projection interface 2 on the TV display screen. For example, a screen projection interface 1 is currently displayed in full screen on the TV.
  • the TV set may display the screen projection interface 2 in the form of a small window (or picture-in-picture, floating window) on the display screen of the TV set.
  • the content in the screen projection interface 2 is the same as the content of the fitness video of the mobile phone 2 that overflows the display screen of the mobile phone.
  • the mobile phone 2 can acquire the screen projection data b and the coordinate information of the upper left corner of the video element in the dragging process in real time, and send it to the TV. In this way, the TV can update the screen projection interface 2 in real time according to the received data.
  • the TV can continue to display the screen-casting interface 2 in the form of a small window on the display screen of the TV according to the screen-casting data b received in real time.
  • the content in the screen projection interface 2 is the same as the entire content of the fitness video displayed on the mobile phone 2 .
  • the TV set may set the projection window of one of the projection interfaces as the focus window by default, for example, the TV defaults a small window as the focus window.
  • the TV displays a prompt sign 1301 for prompting the user of a small window, that is, the screen-casting window of the screen-casting interface 2 is the focus window.
  • the user can use the remote control of the TV to select and switch the focus window, switch the layout of the large and small windows (the window used for displaying the screen projection interface 1 in full screen may be called the large window), and close the large window or the small window.
  • if the TV receives the user's operation of the left button or the right button of the remote control, it switches the focus window.
  • the TV can display the small window, that is, the screen projection interface 2, in full screen, and display the large window, that is, the screen projection interface 1, in the form of a small window.
  • the TV can stop displaying the small window, or close the small window, and the TV can also notify the mobile phone 2 corresponding to the small window to stop projecting the screen.
  • the TV can stop displaying the large window, and the TV can also notify the mobile phone 1 corresponding to the large window to stop projecting the screen.
  • the object dragged by the user is an interface displayed by the second terminal, or an element in the interface such as a video element, a picture-in-picture, or a floating window as an example.
  • the object dragged by the user may also be a UI control in an interface displayed by the second terminal.
  • the dragged UI control can be defined by a third-party application, selected by the user, or recommended by the system.
  • the specific implementation of many-to-one screen projection is similar to the implementation of the dragged object as an interface or an element in the interface. The differences are:
  • the second terminal does not acquire screen projection data and send it to the first terminal for realizing screen projection. Instead, after the screen projection is started, the second terminal obtains data, such as the instruction stream of the current interface, and sends the instruction stream to the first terminal.
  • the second terminal may also send the identifier of the dragged UI control (that is, the above-mentioned data may also include the identifier of the dragged UI control) to the first terminal.
  • the first terminal can extract the canvas instructions of the dragged UI control from the received instruction stream, so as to realize the display of the dragged UI control on the first terminal according to the canvas instructions.
  • in this way, the projection of the UI control in the interface currently displayed by the second terminal (the interface may be the second interface in the embodiment of the application) onto the first terminal is realized.
  • the UI control displayed on the first terminal may be the first interface in this embodiment of the application.
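The instruction-stream filtering described above can be sketched as follows, with instructions modeled as dictionaries carrying a control identifier (a simplification of the canvas instructions, layer names, and identifiers the patent mentions):

```python
# Illustrative sketch: the destination keeps only the drawing instructions
# whose control identifier belongs to the set of dragged controls, and
# replays just those to display the dragged UI controls.
def extract_dragged_instructions(instruction_stream, dragged_ids):
    """Filter an instruction stream down to the dragged controls."""
    return [ins for ins in instruction_stream if ins["control_id"] in dragged_ids]
```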
  • the first terminal and the second terminal may further include an instruction management module.
  • the command management module of the second terminal may be responsible for extracting the content of the source end interface of the projection screen, that is, responsible for obtaining the command stream of the current interface.
  • the command management module of the first terminal may be responsible for restoring the content of the source end of the screen projection, for example, drawing corresponding UI controls according to the command stream.
  • the second terminal acquires data, such as the 2D drawing instruction and the identifier of the dragged UI control, and sends it to the first terminal.
  • the first terminal draws the dragged UI control on the display screen of the first terminal according to the received 2D drawing instruction, the identifier, and the corresponding layout file, that is, realizes the display, on the first terminal, of the UI control dragged by the user in the interface displayed by the second terminal.
  • the layout file also includes other configurations of the drawing area (such as the positions and styles corresponding to the identifiers of the UI controls).
  • the first terminal reads the configuration corresponding to the identifier from the layout file according to the received 2D drawing instruction and identifier, so as to realize the drawing and layout of the UI controls on the display screen of the first terminal.
  • the data used to realize the screen projection of the second terminal on the first terminal can be understood as video data, or as including video data, so the channel used for transmitting the screen projection data between the first terminal and the second terminal is called a video channel, or a video transmission channel.
  • the data used to implement screen projection by the second terminal on the first terminal is an instruction stream.
  • the above-mentioned video channel may continue to be used to implement the transmission of the instruction stream.
  • an instruction channel, also referred to as an instruction transmission channel, may be used to implement the transmission of the instruction stream. That is to say, in this embodiment, multiple instruction streams can be projected to one screen projection destination, such as the screen of the first terminal, so as to realize many-to-one projection.
  • the first terminal can create a canvas corresponding to each second terminal (the canvas may be the drawing component in this embodiment of the application), which is used to implement the projection of the UI controls of the second terminal on the first terminal.
  • the process for the first terminal to project multiple instruction streams onto one screen may include: after the second terminal is connected to the first terminal, or after the second terminal is connected to the first terminal and starts screen projection, The first terminal creates a canvas corresponding to the second terminal for carrying (or drawing) the UI controls projected by the second terminal (eg, step 1 in FIG. 15 ).
  • the first terminal draws corresponding content on the corresponding canvas according to the instruction stream from each second terminal and the identifier of the dragged UI control (eg, step 2 in FIG. 15 ).
  • the first terminal synthesizes the canvases corresponding to the second terminals into one canvas (eg, step 3 in FIG. 15 ).
  • the first terminal displays the synthesized canvas on the screen of the first terminal (eg, step 4 in FIG. 15 ).
  • as shown in FIG. 16, when there is only one second terminal serving as the screen projection source, only the content of the canvas corresponding to that second terminal (canvas 1 in (a) of FIG. 16) is displayed on the screen of the first terminal.
  • the canvases corresponding to the two second terminals can be displayed on the screen of the first terminal according to the corresponding layout. For example, the screen of the first terminal is divided into two areas, one area is used to display the content of the canvas corresponding to one of the second terminals (canvas 1 in FIG. 16(b)), and the other area is used to display The content of the canvas corresponding to another second terminal (canvas 2 in (b) of FIG. 16 ) is displayed.
  • the canvases corresponding to the multiple second terminals can be displayed on the screen of the first terminal according to the corresponding layout.
  • the screen of the first terminal can be divided into a corresponding number of area, respectively used to display the content of the canvas corresponding to each second terminal.
  • the layout of the multiple canvases on the screen of the first terminal may be predetermined or follow the user's settings. For example, the multiple canvases may be laid out on the screen as horizontal equal parts, vertical equal parts, picture-in-picture, three equal parts, four equal parts, and so on, and the layout is not limited to the horizontal division shown in (b) in FIG. 16.
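As an illustration of the simplest of these layouts, the following sketch divides the destination screen into side-by-side equal regions, one per source canvas; the function name and region tuple are assumptions for illustration:

```python
# Illustrative sketch: one layout option is to split the destination screen
# into N equal side-by-side regions, one per source canvas, before the
# canvases are composited into a single canvas for display.
def horizontal_layout(screen_w, screen_h, n_sources):
    """Return one (x, y, w, h) region per source, left to right."""
    w = screen_w // n_sources
    return [(i * w, 0, w, screen_h) for i in range(n_sources)]
```

Picture-in-picture or three- and four-way splits would simply return different region lists under the same interface.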
  • mobile phone 1 and mobile phone 2 are used as the screen projection sources, the TV is used as the screen projection destination, and the dragged UI controls are selected by the user.
  • the implementation process of many-to-one screen projection in the scenario of dragging and dropping UI controls across devices is introduced as an example.
  • network monitoring can be activated to monitor connection requests.
  • the TV can also broadcast its own IP address for other devices to initiate connection requests.
  • the mobile phone 1 receives the IP address of the TV.
  • the mobile phone 1 can initiate a connection request according to the IP address of the TV set to request to establish a connection with the TV set.
  • the TV can obtain the IP address 1 of the mobile phone 1 .
  • the TV can start the distribution function, such as creating a canvas corresponding to IP address 1, such as canvas x, and configuring the decoding parameters associated with IP address 1, such as decoding parameter x , and save the connection instance x corresponding to IP address 1, which is used to receive data from mobile phone 1, such as the command stream, the identifier of the UI control being dragged, etc., so as to prepare for the screen projection of mobile phone 1.
  • the television can also notify the mobile phone 1 that it is ready.
  • the user can trigger the mobile phone 1 to start screen projection by dragging and dropping the UI controls in the current display interface of the mobile phone 1 .
  • the mobile phone 1 displays a shopping details page 1701 of the shopping application.
  • the mobile phone 1 receives the user's drag operation on the UI controls in the shopping details page 1701 .
  • the dragging operation may include: an operation of the user selecting a UI control and an operation of triggering the movement of the selected UI control.
  • Take the dragged UI controls including: product preview control 1702, product price control 1703, product introduction control 1704, add to cart button 1705 and buy now button 1706 on the shopping details page 1701 as an example.
  • the mobile phone 1 in response to the drag operation, can display an animation of the corresponding UI control moving with the movement of the user's finger, giving the user a visual effect of the UI control being dragged by the user's finger.
  • the mobile phone 1 can determine whether the user's drag intention is a cross-device operation. After determining that it is, the mobile phone 1 can start instruction capture. For example, the mobile phone 1 can perform instruction extraction on the shopping details page 1701 to obtain the instruction stream corresponding to the shopping details page 1701, referred to as instruction stream x.
  • the instruction stream x may include information such as the canvas instruction of each UI control in the current interface, the layer name, and the identifier of the control.
  • the mobile phone 1 can encode the instruction stream x and send it to the TV.
  • the mobile phone 1 can also send the identifier of the dragged UI control to the TV.
  • the identifier of the control may be a specific field identifier (eg, dup ID) defined by the application developer.
  • the mobile phone 1 can identify the type of UI control dragged by the user through UI control identification.
  • the mobile phone 1 can determine the ID of the dragged UI control according to the identified type of UI control.
  • the types of the controls correspond to the identifiers one-to-one, and the corresponding relationship is pre-stored in the mobile phone 1 .
  • an artificial intelligence (AI) recognition method can be used to recognize the type of the UI control dragged by the user.
  • each interface of each application in the mobile phone can be obtained in advance. For example, the whole-frame image data of the product details page 1701 can be obtained by taking a screenshot, and target detection technology in machine learning (such as the R-CNN, Fast R-CNN, or YOLO model algorithms) can be used to locate the area of each UI control in the product details page 1701. Then, the located area and type of each UI control in the product details page 1701 are stored in the mobile phone 1 in correspondence with the identifier of the product details page 1701.
  • after receiving the user's operation of dragging the UI controls in the product details page 1701, the mobile phone can identify the type of the dragged UI control according to the position touched by the user when selecting the UI controls and the stored area of each UI control in the product details page 1701. For another example, after receiving the user's operation of dragging a UI control on the product details page 1701, the mobile phone can draw the UI control selected by the user, and then use target classification technology in machine learning (such as the ResNet model algorithm) to identify the type of the drawn UI control.
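The position-based identification can be sketched as a hit test of the touch point against the stored control areas. The control names and rectangles below are invented for illustration; a real implementation would use the areas located by the detection model:

```python
# Illustrative sketch: match the touch-down position against each stored
# control bounding box to decide which UI control the user grabbed.
def control_at(touch, control_areas):
    """touch: (x, y); control_areas: {control_type: (x, y, w, h)}.
    Return the type of the control under the touch point, or None."""
    tx, ty = touch
    for ctype, (x, y, w, h) in control_areas.items():
        if x <= tx < x + w and y <= ty < y + h:
            return ctype
    return None
```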
  • the TV can receive the encoded instruction stream x and the identifier of the dragged UI control through the connection instance x.
  • the TV set can determine, according to the connection instance x through which the data is received, that the IP address of the screen projection source is the IP address 1 of the mobile phone 1.
  • the TV set can decode the received instruction stream x by using the decoding parameter x corresponding to the IP address 1.
  • the TV can use the created canvas x corresponding to the IP address 1 to realize the drawing and display of the dragged UI controls on the TV screen. For example, after the user releases the drag, as shown in (a) of FIG.
  • the TV can display the screen projection interface x.
  • the content in the screen projection interface x is the same as the UI control dragged by the user in the product detail page 1701 displayed on the mobile phone 1 .
  • when the TV implements the drawing of the UI controls on the canvas, it may draw each UI control according to a preconfigured layout file.
  • the layout file includes the configuration of the drawing area of each UI control (for example, including the configuration of the identifier, position and style of the UI control), and the drawing area of each UI control does not overlap.
  • the drawing area of each UI control in the layout file may not correspond to the area of the corresponding UI control in the original interface, that is, through the layout file, the UI controls can be rearranged.
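The layout-file idea can be sketched as a mapping from control identifier to a target drawing area, so dragged controls are rearranged on the destination rather than drawn at their original positions. The identifiers and areas below are hypothetical, not from the patent:

```python
# Illustrative sketch of a layout configuration: each control identifier maps
# to a non-overlapping drawing area (x, y, w, h) on the destination screen.
LAYOUT = {
    "product_preview": (0, 0, 600, 400),
    "buy_now":         (0, 420, 600, 80),
}

def place_control(control_id, layout=LAYOUT):
    """Look up where a dragged control should be drawn; controls the layout
    file does not configure are skipped (None)."""
    return layout.get(control_id)
```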
  • the layout file can be generated by a system developer or an application developer using Android Studio. Android Studio can capture UI control related layouts and display a preview; system developers or application developers can adjust the layout of the UI controls in the preview and generate the layout file according to the final layout.
  • the user can project the UI controls in the interface displayed on the mobile phone 2 to the TV for display by dragging and dropping.
  • the specific implementation is similar to the display of the UI controls in the display interface of the mobile phone 1 projected on the TV, and will not be repeated here.
  • the mobile phone 2 displays a shopping details page 1901 of the shopping application.
  • the user performs a drag operation on the UI controls in the shopping details page 1901 .
  • the dragged UI controls include: product preview control 1902 on the shopping details page 1901 , product price control 1903 , product introduction control 1904 , add to cart button 1905 and buy now button 1906 .
  • the TV set can decode the received instruction stream (eg, instruction stream y) using the corresponding decoding parameter (eg, decoding parameter y).
  • the TV can use the created corresponding canvas (eg, canvas y) to realize the drawing of the UI controls dragged on the mobile phone 2.
  • the TV also draws the UI controls dragged on the mobile phone 1 on the canvas x.
  • the TV can combine canvas x and canvas y into one canvas and display it on the TV screen. For example, as shown in (b) of FIG. 18 , the TV can display a screen projection interface x and a screen projection interface y.
  • the content in the screen projection interface x is the same as the UI control dragged by the user in the product details page 1701 displayed by the mobile phone 1
  • the content in the screen projection interface y is the same as the UI controls dragged by the user in the product details page 1901 displayed by the mobile phone 2.
  • the television set may by default set the projection window of one of the screen projection interfaces as the focus window.
  • the focus position may specifically be a UI control in the screen-casting interface presented by the screen-casting window.
  • the focus position of the television is the product preview control 1801 of the screen projection interface x.
  • the user can choose to switch the focus position using the remote control of the TV. For example, if the TV receives the user's operation of the left button, right button, up button or down button of the remote control, it can switch the focus position. For example, in conjunction with (b) in FIG.
  • if the TV receives the user's operation of the right button of the remote control, then as shown in (c) in FIG. 18, the TV switches the focus position from the product preview control 1801 of the screen projection interface x to the product preview control 1802 of the screen projection interface y. After that, when the TV receives the user's operation of the down button of the remote control, as shown in (d) of FIG. 18, the TV switches the focus position from the product preview control 1802 of the screen projection interface y to the add to cart button 1803 of the screen projection interface y.
  • Users can also use the remote control of the TV to achieve reverse control.
  • the television can obtain the location information of the operation.
  • the TV can determine the position (such as coordinates) in the original interface of the mobile phone that corresponds to the position (such as coordinates) of the operation, so as to determine which UI control on the mobile phone the user wants to operate.
  • the TV can send the corresponding operation instructions to the mobile phone, so that the mobile phone can respond accordingly, so as to realize the reverse control.
  • the mobile phone can re-project the updated interface content to the TV, so that the TV can update the corresponding projection interface.
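The coordinate mapping behind reverse control can be sketched as follows: the operated position on the projection interface is matched against the destination layout, then translated to the control's original area on the phone so a corresponding operation instruction can be sent. All identifiers and areas here are illustrative:

```python
# Illustrative sketch of reverse control: map a position on the projection
# interface back through the layout to the control's original position on
# the source phone.
def map_back(point, layout, original_areas):
    """point: (x, y) on the TV; layout / original_areas:
    {control_id: (x, y, w, h)} on the TV and the phone respectively.
    Return (control_id, original_center) or None if no control is hit."""
    px, py = point
    for cid, (x, y, w, h) in layout.items():
        if x <= px < x + w and y <= py < y + h:
            ox, oy, ow, oh = original_areas[cid]
            # Target the centre of the control in the phone's coordinates
            return cid, (ox + ow // 2, oy + oh // 2)
    return None
```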
  • the focus position is the product preview control 1801 of the screen projection interface x.
  • the television receives the user's operation of the confirmation button of the remote control.
  • the television can determine that the user wants to operate the product preview control on the mobile phone 1 according to the current focus position and layout.
  • the television set can send the corresponding operation instruction to the mobile phone 1 .
  • the mobile phone 1 can respond accordingly according to the operation instruction, such as playing a product preview video.
  • the mobile phone 1 can also record the played video and send it to the TV.
  • the TV can play the preview video of the product in full screen.
  • the mobile phone may not project the updated interface to the TV.
  • the user can continue to operate on the phone.
  • the focus position is the buy now button 2001 of the screen-casting interface x.
  • the television receives the user's operation of the confirmation button of the remote control. According to the current focus position and the stored correspondence, the television can determine that the user wants to operate the buy now control on the mobile phone 1 .
  • the television set can send the corresponding operation instruction to the mobile phone 1. As shown in (b) of FIG. 20, the mobile phone 1, after receiving the operation instruction, can display a purchase interface 2002.
  • the user can continue to operate on the mobile phone 1 .
  • the TV set can also set the screen projection interface x corresponding to the mobile phone 1 to gray, and can also display prompt information 2003, such as the words "continue to operate on the mobile phone", to prompt the user to continue the operation on the mobile terminal.
  • the user can switch back to the television to continue the operation.
  • the prompt message 2003 may also include the words "to exit, please press the 'return' key".
  • the above screen projection application can realize many-to-one screen projection from multiple screen projection sources to one screen projection destination.
  • For example, multiple mobile phones and tablet computers can project the content on their display screens (such as a PPT or a video being played) to the same large-screen device for presentation, realizing many-to-one screen projection.
  • This improves the efficiency of collaborative use of multiple devices and improves the user experience. It allows users to control the screen-casting interface using the input device of the screen-casting destination, and can also realize reverse control of the screen-casting source.
  • the screen projection destination can also adjust the layout of the presented screen projection interface according to the increase or decrease of the source device, so as to present the best visual effect to the user.
  • layer filtering is supported, so that the layers where some elements in the current interface (such as elements dragged by the user, or predetermined elements) are located are projected to the screen projection destination. In this way, it can be ensured that the private information of the screen projection source end is not projected to the screen projection destination end, and the privacy of the user is protected.
  • the content to be projected can be changed from a pure video stream to an instruction stream, which can improve the display effect of the projection interface at the screen projection destination and save transmission bandwidth.
  • for each meeting, participants need to carry various devices and cables and prepare in advance. This reduces meeting efficiency and increases the communication cost of cross-regional office work.
  • the many-to-one screen projection solution provided in this embodiment can be combined with a smooth call to realize cross-regional office work. This cross-regional office method can improve meeting efficiency and save communication costs for cross-regional office work.
  • Changlian Call realizes high-definition audio and video calls between multiple devices. Users can make video calls between mobile phones, large-screen devices, smart speakers with screens and other devices, can freely flow calls between these devices, and can choose the best device for answering, bringing consumers a smoother and freer call experience. At the same time, it provides users with a good audio and video call experience: it can realize 1080P high-definition video calls, and can maintain smoothness in the case of dark light and poor network quality (such as subway or high-speed rail scenes).
  • Region A includes a first terminal, such as large-screen device A.
  • Region B includes a third terminal, such as large-screen device B.
  • Large-screen device A communicates with large-screen device B smoothly.
  • the large-screen device A displays the site picture of the region B, and can also display the site picture of the local (that is, the region A).
  • the large-screen device B displays a picture of a conference site in region A, and can also display a picture of a conference site in the local area (ie region B).
  • the conference site picture of the other party's site displayed by the large-screen device is drawn by the large-screen device according to the video data collected in real time by the opposite-end large-screen device.
  • the local venue screen displayed by the large-screen device is drawn based on the video data collected in real time by itself.
  • the video data collected in real time can be transmitted between the large-screen devices through the far-field data channel established between them.
  • Participants in region A can project documents displayed on one or more second terminals, such as the mobile phone 1 and the mobile phone 2 (for example, document 1 and document 2, respectively), to the large-screen device A in region A using the many-to-one screen projection solution provided by the above embodiment. For example, the document 1 displayed on the mobile phone 1 and the document 2 displayed on the mobile phone 2 can be projected to the large-screen device A by dragging across devices or by wireless screen projection.
  • the mobile phone 1 can send the screen projection data A1 to the large-screen device A through the near-field data channel established with the large-screen device A, so that the large-screen device A can display the document 1, that is, so that the document 1 displayed on the mobile phone 1 is displayed on the large-screen device A.
  • the mobile phone 2 sends the screen projection data A2 to the large-screen device A through the near-field data channel established with the large-screen device A, so that the document 2 displayed on the mobile phone 2 can be displayed on the large-screen device A. That is, with reference to FIG. 21, as shown in FIG. 22, the large-screen device A can, according to the received screen projection data A1, the screen projection data A2, the video data from the large-screen device B, and the video data collected by the large-screen device A itself, display the site picture of region B, the site picture of region A, the document 1 projected by the mobile phone 1 and the document 2 projected by the mobile phone 2 on the screen of the large-screen device A.
  • The local conference site image, that is, the conference site image of region A, may not be displayed.
  • the large-screen device A and the large-screen device B will respectively capture the local site images in real time, and send the corresponding video data to the opposite-end large-screen device.
  • After the large-screen device A receives the screen projections of the mobile phone 1 and the mobile phone 2, that is, after receiving the above-mentioned screen projection data A1 and A2, the large-screen device A not only sends the video data collected in real time to the large-screen device B, but can also send the projection data A1 and A2 to the large-screen device B through the far-field data channel between them, so that the large-screen device B can also display the document 1 and the document 2 on its screen. In this way, the large-screen device B can display the site image of region A, the document 1 and the document 2 on the screen of the large-screen device B according to the projection data A1, the projection data A2 and the video data from the large-screen device A.
  • the large-screen device B can also display the local conference site picture, that is, the conference site picture of the region B, according to the video data collected by itself.
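The relay behavior described above, in which a large-screen device both renders near-field projection data locally and forwards it over the far-field channel to the peer site, might be sketched as follows (all names here are invented for illustration; this is not the embodiment's implementation):

```python
class ConferenceNode:
    """Minimal sketch of a large-screen device that merges near-field
    screen projection data with the far-field call channel."""
    def __init__(self, name, far_end=None):
        self.name = name
        self.far_end = far_end      # peer ConferenceNode, set after pairing
        self.documents = {}         # source id -> latest projected payload
        self.remote_video = None    # latest video frame from the peer site

    def on_near_field_data(self, source_id, payload):
        # Display locally, then relay over the far-field channel so the
        # peer site can render the same document.
        self.documents[source_id] = payload
        if self.far_end:
            self.far_end.on_relayed_data(source_id, payload)

    def on_relayed_data(self, source_id, payload):
        # Projection data arriving from the peer site; display only.
        self.documents[source_id] = payload

    def on_video_frame(self, frame):
        self.remote_video = frame
```

After pairing two nodes, a document projected to one site becomes visible at both, matching the behavior of large-screen devices A and B above.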
  • Similarly, participants in region B can also project documents displayed on one or more second terminals, such as the mobile phone 3 and the mobile phone 4 (for example, document 3 and document 4, respectively), to the large-screen device B in region B using the many-to-one screen projection solution provided by the above embodiment.
  • the large-screen device A and the large-screen device B can respectively display the corresponding conference site screen and the documents of the two regions.
  • Take as an example that the screen projection data used by the mobile phone 3 to realize screen projection is called screen projection data B1, and the screen projection data used by the mobile phone 4 to realize screen projection is called screen projection data B2.
  • the large-screen device A can, according to the projection data A1 from the mobile phone 1, the projection data A2 from the mobile phone 2, the video data from the large-screen device B, the projection data B1 and the projection data B2, and the video data collected by the large-screen device A itself, display on the screen of the large-screen device A the site picture of region B, the site picture of region A, the document 1 projected by the mobile phone 1, the document 2 projected by the mobile phone 2, the document 3 projected by the mobile phone 3 and the document 4 projected by the mobile phone 4.
  • Similarly, the large-screen device B can, according to the projection data B1 from the mobile phone 3, the projection data B2 from the mobile phone 4, the video data from the large-screen device A, the projection data A1 and the projection data A2, display on the screen of the large-screen device B the site picture of region A, the document 1 projected by the mobile phone 1, the document 2 projected by the mobile phone 2, the document 3 projected by the mobile phone 3 and the document 4 projected by the mobile phone 4.
  • In this embodiment, the area of the screen of the large-screen device used to display the video call screen, that is, the above-mentioned conference site pictures, is called the video call area, and the area used to display the screen projection interfaces, that is, the above-mentioned documents, is called the document presentation area, as shown in FIG. 23.
  • the layout of the video call area and the document presentation area on the screen of the large-screen device may be predefined.
  • the predefined layout is not limited to the horizontal layout shown in FIG. 23 , but can also be arranged vertically, in a picture-in-picture manner, and the like.
  • When the large-screen device currently displays only the video call screen, if screen projection data from a mobile phone is received, the large-screen device can divide the screen into two areas, the video call area and the document display area, according to the predefined layout, which are used to display the video call screen and the corresponding screen projection interface respectively.
  • Take as an example that the predefined layout is a horizontal layout and the mobile phone 1 projects the document 1 to the large-screen device A.
  • the large-screen device A currently displays the video call screen, including the conference site screen in region B and the conference site screen in region A.
  • the user can trigger cross-device screen projection by dragging.
  • The large-screen device A can receive a screen casting request. The large-screen device A can display a request notification 2401 asking the user whether to allow the mobile phone 1 to project its screen. If the user grants permission, for example, the permission button 2402 is selected, the large-screen device A can vertically divide the screen into the video call area and the document display area according to the predefined layout, and present an animation effect as the document 1 projected by the mobile phone is added. For example, the video call screen retracts to the left area of the screen, and the document 1 is displayed in the right area of the screen. After that, as shown in (c) of FIG. 24, the large-screen device A can display the video call screen and the document 1 at the same time.
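The division of the screen into a video call area and a document display area under a predefined horizontal layout could be computed as below. This is only a sketch: the function name is invented, and the 50/50 split ratio is an assumption, since the embodiment states only that the layout is predefined:

```python
def split_layout(screen_w, screen_h, has_documents, ratio=0.5):
    """Return (video_call_rect, document_rect) for a horizontal layout.

    Rects are (x, y, width, height). When no projection data has been
    received, the video call area occupies the whole screen.
    """
    if not has_documents:
        return (0, 0, screen_w, screen_h), None
    call_w = int(screen_w * ratio)   # share given to the call area
    return (0, 0, call_w, screen_h), (call_w, 0, screen_w - call_w, screen_h)
```

On receiving the first projection data, the destination would switch from the full-screen call rect to the two-area layout, which is the moment the retract animation described above would play.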
  • the user may also use the input device of the large-screen device to control the content presented on the screen.
  • the user may use the remote controller of the large-screen device to switch the layout.
  • the large-screen device being the large-screen device A as an example.
  • the large-screen device A can display a full-screen button in the window where each picture is presented.
  • a full-screen button 2501 is displayed in the window corresponding to the conference site screen of region B
  • a full-screen button 2503 is displayed in the window corresponding to the conference site screen of region A
  • a full-screen button 2502 is displayed in the window corresponding to the screen of document 1.
  • When a full-screen button is operated, the large-screen device A can display the picture of the corresponding window in full screen and hide the pictures of the other windows.
  • For example, the user can use the remote control of the large-screen device A to switch the focus position of the remote control operation on the screen, for example, to switch the focus position to the full-screen button 2502. After that, the large-screen device A receives the user's operation of the confirmation button of the remote control. In response to this operation, as shown in (b) of FIG. 25, the large-screen device A displays the document 1 in full screen; the site screen of region B and the site screen of region A can be hidden.
  • When the large-screen device A displays a picture in full screen, such as the above-mentioned picture of the document 1, the large-screen device A can also display a zoom-out button 2504, as shown in (b) of FIG. 25. After receiving the user's operation of the zoom-out button 2504, the large-screen device A can again present all the pictures on the screen at the same time, as shown in (a) of FIG. 25.
  • the large-screen device may not display full-screen buttons corresponding to different pictures.
  • When a large-screen device, such as the large-screen device A, displays multiple pictures, the window of one of the pictures can be set as the focus window by default.
  • the user can use the direction keys of the remote control of the large-screen device A to switch the focus window.
  • the large-screen device A receives the user's operation of the confirmation button on the remote control, and the large-screen device A presents the screen of the focus window in full screen.
  • When the large-screen device A receives the user's operation of the confirmation button or the return button on the remote control, it exits full screen and again presents all the pictures on the screen at the same time.
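The focus-window and full-screen interactions just described can be modeled with a small sketch. The class, window names, and key identifiers below are invented for illustration; confirmation toggles full screen for the focus window, and return exits it:

```python
class WindowManager:
    """Sketch of focus-window navigation and full-screen toggling
    on the screen projection destination."""
    def __init__(self, windows):
        self.windows = windows        # ordered window names on screen
        self.focus = windows[0]       # one window is the focus by default
        self.fullscreen = None        # name of the window shown full screen

    def press(self, key):
        if key in ("left", "right"):
            # Direction keys move the focus between windows.
            i = self.windows.index(self.focus)
            i = (i + (1 if key == "right" else -1)) % len(self.windows)
            self.focus = self.windows[i]
        elif key == "ok":
            # Confirmation toggles full screen of the focus window.
            self.fullscreen = None if self.fullscreen else self.focus
        elif key == "back" and self.fullscreen:
            self.fullscreen = None

    def visible(self):
        return [self.fullscreen] if self.fullscreen else list(self.windows)
```

Pressing right twice moves the focus from the first site picture to the document window; confirmation then shows only that window, and return restores all pictures.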
  • The above example is described by taking displaying only the pictures in the document display area as an example; the user can also perform the above corresponding operations to display only the pictures in the video call area, which will not be repeated in this embodiment.
  • the content projected by the projection sources can be presented in the document display area as follows:
  • Solution 1: support many-to-one coexistence sharing in the document display area.
  • For example, the screen projection sources of the large-screen device A, namely the mobile phone 1 and the mobile phone 2, and the screen projection sources of the large-screen device B, namely the mobile phone 3 and the mobile phone 4, all perform screen projection. In this case, the large-screen device A and the large-screen device B can display the document 1 projected by the mobile phone 1, the document 2 projected by the mobile phone 2, the document 3 projected by the mobile phone 3 and the document 4 projected by the mobile phone 4 at the same time.
  • document 1, document 2, document 3 and document 4 are displayed in the document display area in the form of a square grid.
  • the document display area is divided into four document display sub-areas, namely document display sub-area 1 , document display sub-area 2 , document display sub-area 3 and document display sub-area 4 .
  • the large-screen device A and the large-screen device B display the documents in the corresponding document display sub-areas in the order in which the corresponding screen projection data is received, respectively.
  • For example, the order in which the screen projection data is received is: first the screen projection data of the mobile phone 1, then the screen projection data of the mobile phone 2, then the screen projection data of the mobile phone 3, and finally the screen projection data of the mobile phone 4.
  • the large-screen device A and the large-screen device B sequentially display document 1, document 2, document 3, and document 4 in the corresponding document display sub-area 1, document display sub-area 2, document display sub-area 3, and document display sub-area 4.
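The order-of-arrival assignment of documents to the four display sub-areas in Solution 1 could be sketched as follows (the function and source names are hypothetical):

```python
def assign_subareas(arrival_order, max_slots=4):
    """Map each projection source to a document display sub-area number,
    in the order its screen projection data arrived (coexistence sharing)."""
    return {src: slot
            for slot, src in enumerate(arrival_order[:max_slots], start=1)}
```

Both large-screen devices apply the same arrival order, so the same source ends up in the same sub-area on both screens.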
  • Solution 2: support preemptive sharing in the document display area. That is, there is only one document display area on a large-screen device, and the document projected later covers the document projected earlier.
  • mobile phone 1 is first connected to large-screen device A, and projects document 1, that is, large-screen device A and large-screen device B receive the screen projection data of mobile phone 1 first, then the large-screen device A and the large-screen device B display document 1 in their document display area.
  • After that, the mobile phone 2 is connected to the large-screen device A and projects document 2, that is, the large-screen device A and the large-screen device B receive the screen projection data of the mobile phone 2; then the large-screen device A and the large-screen device B no longer display document 1 in their document display areas and instead display document 2.
  • After that, the mobile phone 3 is connected to the large-screen device B and projects document 3, that is, the large-screen device B and the large-screen device A receive the screen projection data of the mobile phone 3; then the large-screen device A and the large-screen device B no longer display document 2 in their document display areas and instead display document 3.
  • After that, the mobile phone 4 is connected to the large-screen device B and projects document 4, that is, the large-screen device B and the large-screen device A receive the screen projection data of the mobile phone 4; then the large-screen device A and the large-screen device B no longer display document 3 in their document display areas and instead display document 4.
  • Solution 3: the above solution 1 and solution 2 can also be combined.
  • For example, a large-screen device supports up to four screen projection sources presenting content on the screen at the same time. When there are no more than four screen projection sources, the content of each projection source can be displayed on the large-screen device according to the result shown in (a) in FIG. 26.
  • When there are more than four projection sources, preemptive sharing can be used to present the projected content. For example, in conjunction with (a) in FIG. 26, the large-screen device currently presents the content projected by the mobile phone 1, the mobile phone 2, the mobile phone 3 and the mobile phone 4. If the mobile phone 5 needs to perform screen projection, the content projected by the mobile phone 5, such as document 5, can cover the document 1 projected by the mobile phone 1 and be presented on the large-screen device. After that, if the mobile phone 6 needs to perform screen projection, the content projected by the mobile phone 6, such as document 6, can cover the document 2 projected by the mobile phone 2 and be presented on the large-screen device, and so on.
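A minimal sketch of the combined Solution 3, assuming a capacity of four slots and oldest-first, round-robin preemption consistent with the example above (class and names invented):

```python
class DocumentArea:
    """Combined sharing: up to `capacity` sources coexist; additional
    sources preempt slots in round-robin order, oldest first."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.slots = []          # [(source, document), ...] in slot order
        self.next_victim = 0     # slot to be preempted next when full

    def share(self, source, document):
        if len(self.slots) < self.capacity:
            # Coexistence sharing while slots remain free.
            self.slots.append((source, document))
        else:
            # Preemptive sharing: overwrite the oldest slot.
            self.slots[self.next_victim] = (source, document)
            self.next_victim = (self.next_victim + 1) % self.capacity
        return [doc for _, doc in self.slots]
```

Replaying the example, phone 5's document 5 covers document 1, and phone 6's document 6 then covers document 2.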
  • FIG. 27 is a schematic diagram of the composition of a screen projection device according to an embodiment of the present application.
  • the apparatus can be applied to a first terminal, and the first terminal is connected with a plurality of second terminals.
  • the apparatus may include: a receiving unit 2701 and a display unit 2702 .
  • the receiving unit 2701 is configured to receive data from each of the plurality of second terminals.
  • the display unit 2702 is configured to display a plurality of first interfaces on the first terminal according to the data received from the plurality of second terminals, and the plurality of first interfaces are in one-to-one correspondence with the plurality of second terminals; the content of the first interface is a mirror image of the content of the second interface displayed by the corresponding second terminal, or the content of the first interface is the same as part of the content of the second interface displayed by the corresponding second terminal.
  • the apparatus may further include: a creating unit 2703 .
  • the creating unit 2703 is configured to create multiple drawing components, the multiple drawing components are in one-to-one correspondence with the multiple second terminals, and the drawing components are views or canvases.
  • the display unit 2702 displaying a plurality of first interfaces on the first terminal according to the data received from the plurality of second terminals may include: the display unit 2702 draws, according to the data received from the plurality of second terminals, the first interface corresponding to each second terminal on the corresponding drawing component, so as to display the plurality of first interfaces on the first terminal.
  • the apparatus may further include: a configuration unit 2704 and a decoding unit 2705 .
  • the configuration unit 2704 is configured to configure a plurality of decoding parameters, and the plurality of decoding parameters are in one-to-one correspondence with the plurality of second terminals.
  • the decoding unit 2705 is configured to decode the data received from the corresponding second terminal according to the plurality of decoding parameters.
  • the apparatus may further include: an obtaining unit 2706 .
  • the obtaining unit 2706 is configured to obtain connection information of multiple second terminals, and the connection information is used for establishing a connection between the first terminal and the corresponding second terminal; wherein, multiple drawing components correspond to multiple second terminals one-to-one, including: multiple Each drawing component is in one-to-one correspondence with connection information of multiple second terminals; and multiple decoding parameters are in one-to-one correspondence with multiple second terminals, including: multiple decoding parameters are in one-to-one correspondence with connection information of multiple second terminals.
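The one-to-one association of drawing components and decoding parameters with each source's connection information might look like the following sketch. Decoding is stubbed out and all names are invented; the point is only that both the view and the decoder are keyed by the connection info of the source:

```python
class ProjectionSink:
    """Sketch: the screen projection destination keys one drawing
    component and one decoder configuration to each source connection."""
    def __init__(self):
        self.views = {}      # connection info -> drawing component
        self.decoders = {}   # connection info -> decoding parameters

    def on_connect(self, conn_info, decoder_params):
        # One drawing component (view/canvas) and one decoder per source.
        self.views[conn_info] = f"view-for-{conn_info}"
        self.decoders[conn_info] = decoder_params

    def on_data(self, conn_info, encoded):
        # Decode with the per-connection parameters (stubbed here) and
        # return the target drawing component with the decoded frame.
        params = self.decoders[conn_info]
        frame = f"decoded({encoded}, {params})"
        return self.views[conn_info], frame
```

Keying by connection info means that data arriving on a given channel is always decoded with that source's parameters and drawn on that source's component, which is the one-to-one correspondence the units above describe.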
  • the apparatus may further include: an input unit 2707 .
  • the input unit 2707 is configured to receive a user's first operation on the window of the first interface.
  • the display unit 2702 is further configured to reduce, enlarge or close the window, or switch the focus window in response to the first operation.
  • the input unit 2707 is further configured to receive a second operation of the user on the first interface corresponding to the second terminal.
  • the apparatus may further include: a sending unit 2708, configured to send the data of the second operation to the second terminal, so that the second terminal can display the third interface according to the second operation.
  • the receiving unit 2701 is further configured to receive updated data from the second terminal.
  • the display unit 2702 is further configured to update the first interface corresponding to the second terminal to a fourth interface according to the updated data; the content of the fourth interface is a mirror image of the content of the third interface, or the content of the fourth interface is the same as part of the content of the third interface.
  • the first terminal also establishes a connection with the third terminal; the sending unit 2708 is further configured to send data received from multiple second terminals to the third terminal, so that the third terminal displays multiple first interfaces.
  • the receiving unit 2701 is further configured to receive video data from a third terminal.
  • the display unit 2702 is further configured to display a video call picture on the first terminal according to video data of the third terminal while the first terminal displays a plurality of first interfaces.
  • the apparatus may further include: a collection unit for collecting video data.
  • the sending unit 2708 is further configured to send video data to the third terminal, for the third terminal to display a video call screen while displaying a plurality of first interfaces on the third terminal.
  • FIG. 28 is a schematic diagram of the composition of another screen projection device according to an embodiment of the present application.
  • the apparatus can be applied to a second terminal, and the second terminal is connected to the first terminal.
  • the apparatus may include: a display unit 2801 , an input unit 2802 and a sending unit 2803 .
  • the display unit 2801 is used to display the second interface.
  • the input unit 2802 is used for receiving user operations.
  • the sending unit 2803 is used to send the data of the second interface to the first terminal in response to the user operation, so that the first terminal can display the first interface corresponding to the second terminal, and the first terminal also displays the first interfaces corresponding to other second terminals; wherein the content of the first interface is a mirror image of the content of the second interface displayed by the corresponding second terminal, or the content of the first interface is the same as part of the content of the second interface displayed by the corresponding second terminal.
  • the apparatus may further include: an acquiring unit 2804, configured to acquire data of the second interface.
  • When the content of the first interface is a mirror image of the content of the second interface, the data of the second interface is the screen recording data of the second interface; when the content of the first interface is the same as part of the content of the second interface, the data of the second interface is the screen recording data of the layer where the predetermined element in the second interface is located.
  • the display unit 2801 is further configured to display a configuration interface, where the configuration interface includes layer filter setting options.
  • the input unit 2802 is further configured to receive a user's selection operation on a layer filter setting option.
  • the input unit 2802 receives a user operation, which may include: the input unit 2802 receives a user's drag operation on the second interface or elements in the second interface.
  • the apparatus may further include: a determining unit 2805, configured to determine that the user's drag intention is to drag across devices; an acquiring unit 2804, further configured to acquire data of the second interface.
  • In the case of receiving a user's drag operation on an element in the second interface: the element can be a video component, a floating window, a picture-in-picture or a freeform small window, and the data of the second interface is the screen recording data of the layer where the element is located; or, the element can be a user interface (UI) control in the second interface, and the data of the second interface is the instruction stream of the second interface and the identifier of the UI control, or the data of the second interface is the drawing instruction and the identifier of the UI control.
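The layer-filtering idea, recording and projecting only the layer(s) that contain the dragged or predetermined elements so that private layers never leave the source device, can be illustrated with a hypothetical data model (the dictionary shape and names are assumptions for this sketch):

```python
def filter_layers(layers, wanted_elements):
    """Keep only layers containing at least one element to be projected
    (e.g. the element the user dragged); other layers stay on-device."""
    return [layer for layer in layers
            if any(e in layer["elements"] for e in wanted_elements)]
```

Only the filtered layers would then be composited into the screen recording data sent to the screen projection destination.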
  • the above embodiments describe the process of screen projection from multiple terminals to one terminal.
  • a terminal such as a mobile phone
  • the terminal serving as the screen projection source end can, by creating multiple media streams, realize the projection display of the content of one or more applications of the terminal on other terminals serving as the screen projection destination end, to meet the needs of multi-task parallelism.
  • the first terminal 101 is used as the screen projection source end
  • the second terminal 102 is used as the screen projection destination end as an example.
  • the first terminal 101 may be a mobile device such as a mobile phone or a tablet
  • the second terminal 102 may be a large-screen device such as a PC or a TV.
  • FIG. 29 is a schematic diagram of the composition of another software architecture provided by an embodiment of the present application.
  • the software architectures of both the first terminal 101 and the second terminal 102 may include: an application layer and a framework layer.
  • the first terminal 101 may include: a service scheduling and policy selection module, a video collection module, an audio collection module, a privacy mode setting module, an audio and video encoding module, a multi-device connection management protocol adaptation module and a media stream transmission module.
  • Each module included in the first terminal 101 may be included in any layer of the software architecture of the first terminal 101 .
  • the above-mentioned modules included in the first terminal 101 are all included in the framework layer of the first terminal 101 , which is not specifically limited in this embodiment.
  • the first terminal 101 may also include an application program, which may be included in the above-mentioned application layer.
  • the second terminal 102 may include: a video rendering module, an audio rendering module, a video cropping module, an audio and video decoding module, a multi-device connection management protocol adaptation module and a media stream transmission module .
  • Each module included in the second terminal 102 may be included in any layer of the software architecture of the second terminal 102 .
  • each module included in the second terminal 102 is included in the framework layer of the second terminal 102, which is not specifically limited in this embodiment.
  • the second terminal 102 may also include an application program, which may be included in the above-mentioned application layer.
  • the first terminal 101 and the second terminal 102 may establish a connection in a wireless or wired manner.
  • the first terminal 101 and the second terminal 102 can discover each other through a discovery process, and establish a connection through a connection process, or form a network.
  • a transmission channel may be provided between the first terminal 101 and the second terminal 102 for data transmission between the two, so as to realize the content of one or more applications in the first terminal 101 to the display screen of the second terminal 102 display on.
  • the composition of the software architecture illustrated in this embodiment does not constitute a specific limitation on the composition of the terminal software architecture.
  • the terminals may include more or fewer modules than those shown in the figure, or combine some modules, or split some modules, or have a different module arrangement.
  • the above-mentioned first terminal 101 may not include a privacy mode setting module.
  • the above-mentioned first terminal 101 does not include an audio collection module, and the second terminal 102 does not include an audio rendering module.
  • the above-mentioned first terminal 101 does not include a video acquisition module
  • the second terminal 102 does not include a video rendering module and a video cropping module.
  • the above-mentioned second terminal 102 does not include a video cropping module.
  • the first terminal 101 serving as the screen projection source can project the content of one or more of its applications onto the display screen of the second terminal 102 serving as the screen projection destination by creating multiple media streams.
  • Take the case where the first terminal 101 serving as the screen projection source is a mobile phone and the second terminal 102 serving as the screen projection destination is a TV as an example.
  • the video capture module and the audio capture module of the mobile phone can perform audio extraction and video extraction according to the media policy customized by service scheduling and policies, to obtain audio data and video data.
  • the video acquisition module and the audio acquisition module of the mobile phone can transmit the collected audio data and video data to the audio and video coding module of the mobile phone.
  • the audio and video encoding module of the mobile phone can encode the audio data and the video data respectively, packetize them, and store them in the cache queue.
  • the multi-device connection management protocol adaptation module of the mobile phone can start network monitoring and connection management.
  • the mobile phone can establish a connection with the TV to establish a connection channel between the mobile phone and the TV.
  • the media stream transmission module of the mobile phone can take out the buffered audio data and video data from the buffer queue, and transmit them to the TV through the connection channel between the mobile phone and the TV, such as to the media stream transmission module of the TV.
  • after the media stream transmission module of the TV receives the data, the data is de-packetized and decoded by the audio and video decoding module of the TV to obtain audio data and video data.
  • the audio and video decoding module of the TV transmits the audio data to the audio rendering module of the TV, and the audio rendering module outputs the corresponding audio.
  • the audio and video decoding module of the TV transmits the video data to the video rendering module of the TV, and the video rendering module outputs the corresponding video, that is, the corresponding interface content is displayed.
  • the process of audio and video extraction, encoding, packetizing and caching performed by the mobile phone can be called creating a media stream.
  • the mobile phone can complete the projection of the content of an application on the mobile phone to the TV by creating a media stream (for example, called the first media stream).
  • the mobile phone can also create one or more other media streams (such as the second media stream, the third media stream, etc.) to realize the projection of the content of applications on the mobile phone to the TV or to other screen projection destinations.
  • the other created media streams, such as the second media stream, the third media stream, etc., may be media streams created for the content of the same application or media streams created for the content of other applications.
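The extraction, encoding, packetizing, caching and transmission steps described above can be sketched as a per-stream pipeline. This is an illustrative model only, not the actual module implementation; the frame, packet and queue shapes are assumptions made for the sketch.

```python
from collections import deque

class MediaStream:
    """Illustrative model of one media stream on the projection source:
    captured frames are encoded, packetized, and buffered in a cache
    queue until the transmission module drains them over the channel."""

    def __init__(self, stream_id):
        self.stream_id = stream_id
        self.cache_queue = deque()  # buffer between encoder and sender

    def encode(self, frame):
        # Stand-in for the audio and video encoding module.
        return f"enc({frame})"

    def packetize(self, encoded):
        # Stand-in for splitting encoded data into transport packets.
        return {"stream": self.stream_id, "payload": encoded}

    def capture(self, frame):
        # Extraction -> encoding -> packetizing -> caching.
        self.cache_queue.append(self.packetize(self.encode(frame)))

    def drain(self):
        # The media stream transmission module takes the buffered packets
        # out of the cache queue and sends them through the channel.
        packets = list(self.cache_queue)
        self.cache_queue.clear()
        return packets

# The source can create several independent streams (e.g. one per app).
stream1 = MediaStream("first")
stream1.capture("video-frame-0")
sent = stream1.drain()
```

Creating a second or third media stream would simply mean constructing additional `MediaStream` instances, each with its own cache queue.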
  • Scenario 1: Mobile phone A does not support parallel multitasking.
  • the user wants to view the content of APP1 and the content of APP2 of mobile phone A at the same time.
  • APP1 may be the first application in this embodiment of the present application.
  • APP2 may be the second application in this embodiment of the present application.
  • APP1 is a video application
  • APP2 is a fitness application.
  • mobile phone A (mobile phone A may be the above-mentioned first terminal) can be used as the screen projection source, and the contents of the two applications are projected to one or more other terminals serving as the screen projection destination, so as to satisfy the user's requirement of viewing the content of the video application and the fitness application at the same time.
  • Take the screen projection destination including one terminal, such as a TV (the TV may be the above-mentioned second terminal), as an example.
  • the user can trigger the mobile phone A to create two media streams by dragging, so as to project the content of the video application and the content of the fitness application on the mobile phone A to the TV.
  • Mobile phone A establishes a connection with the TV.
  • the description of establishing a connection between the mobile phone A and the TV is similar to the description of the corresponding content in the above-mentioned embodiment S401 shown in FIG. 4 , and details are not repeated here.
  • when mobile phone A is connected to the TV, mobile phone A can act as the screen projection source to project the content of an application to the TV serving as the screen projection destination.
  • the specific description that the mobile phone A projects the content of the application to the TV is similar to the description that the mobile phone 1 or the mobile phone 2 projects the content to the TV in the foregoing embodiment, and will not be repeated here.
  • Take the case where the user triggers mobile phone A by dragging to start projecting the content of the application to the TV as an example for description.
  • the content of the application may include the interface content of the application displayed by the mobile phone A.
  • mobile phone A currently displays an interface of a video application.
  • the user may perform a drag and drop operation on the interface of the video application displayed on the mobile phone A or an element in the interface.
  • Mobile phone A can receive the drag operation.
  • the dragging operation may be the first operation in this embodiment of the present application. It can be understood that dragging can be divided into intra-device dragging and cross-device dragging (or inter-device dragging).
  • In-device drag may refer to a drag where the intent of the drag is to drag the object being dragged from one location on the device to another location on the device.
  • a cross-device drag may refer to a drag whose intent is to drag the dragged object from one location on the device to another device.
  • mobile phone A may determine whether the user's drag intention is to drag across devices. If it is determined that the dragging intention of the user is to drag across devices, the projection of the content of the video application, such as the interface content of the video application, to the TV is started. As an example, mobile phone A may perform video extraction on the interface of the currently displayed video application to obtain corresponding video data, and send the video data to the TV serving as the screen projection destination.
  • the video data can be used to project and display the interface of the video application or the elements in the interface on the destination end of the projection screen.
  • the video data may be data of the interface of the first application in this embodiment of the present application.
  • the object dragged by the user may be the interface of the video application, or may be an element in the interface of the video application, such as a video element, a picture-in-picture, or a floating window.
  • when the object dragged by the user is the interface of the video application displayed by mobile phone A, referring to FIG. 29, mobile phone A performs video extraction to obtain the corresponding video data as follows.
  • Mobile phone A creates a virtual display (VirtualDisplay).
  • the video capture module of mobile phone A sends a request to create a VirtualDisplay to the display manager of mobile phone A.
  • after the display manager of mobile phone A completes the creation of the VirtualDisplay, it can return the created VirtualDisplay to the video capture module of mobile phone A.
  • the mobile phone A can start the video application into the VirtualDisplay, or in other words, move the interface drawing of the video application to the VirtualDisplay.
  • mobile phone A can also bind VirtualDisplay to the video capture module of mobile phone A for screen recording, or video extraction. In this way, the video acquisition module of the mobile phone A can obtain corresponding video data.
  • the mobile phone A may only project the element to the screen projection destination.
  • the process in which mobile phone A performs video extraction to obtain video data may be: after determining that the user's drag intention is to drag across devices, mobile phone A creates a VirtualDisplay. After that, mobile phone A can move the drawing of the element dragged by the user in the interface of the video application to the VirtualDisplay.
  • Mobile phone A can also bind VirtualDisplay to the video capture module of mobile phone A for screen recording, or video extraction. In this way, the video acquisition module of the mobile phone A can obtain corresponding video data.
  • the specific implementation of mobile phone A moving the drawing of the element dragged by the user in the application interface to the VirtualDisplay may be as follows: after receiving the user's drag operation on the element in the interface of the video application, mobile phone A can obtain the layer name of the layer where the element is currently located in the interface of the video application.
  • Mobile phone A can synthesize the interface of the video application into the VirtualDisplay layer by layer. In the process of layer-by-layer synthesis, mobile phone A can determine whether the layer name of the layer currently to be synthesized is the same as the layer name of the layer where the dragged element is located. If it is the same, mobile phone A synthesizes the layer into the VirtualDisplay. If it is not, mobile phone A does not synthesize the layer into the VirtualDisplay.
  • the mobile phone A can also only project specific elements in the interface, such as video elements, to the destination end of the projection screen, so as to protect the privacy of the user.
  • mobile phone A may provide a setting interface for the user to enable or disable this function, such as a so-called privacy mode.
  • when the user chooses to turn on the privacy mode, mobile phone A only synthesizes the layer where the specific element in the interface is located into the VirtualDisplay to obtain the video data.
  • when the user chooses to turn off the privacy mode, mobile phone A can synthesize all layers of the interface into the VirtualDisplay to obtain the video data.
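The layer-by-layer synthesis decision above can be modeled as a simple name filter. A minimal sketch, assuming layers are represented as dictionaries with a `name` field; the real compositing is performed by the system's surface composer, not application code.

```python
def layers_to_composite(layers, allowed_names, privacy_mode):
    """Return the layers that should be synthesized into the virtual
    display. With privacy mode on, only layers whose name matches the
    dragged/specific element are kept; with privacy mode off, all
    layers of the interface are composited."""
    if not privacy_mode:
        return list(layers)
    return [layer for layer in layers if layer["name"] in allowed_names]

# Hypothetical interface with three layers; layer names are invented.
interface = [
    {"name": "StatusBar"},
    {"name": "VideoElement1201"},
    {"name": "CommentList"},
]

# Privacy mode on: only the video element's layer reaches the VirtualDisplay.
private = layers_to_composite(interface, {"VideoElement1201"}, privacy_mode=True)
# Privacy mode off: the whole interface is composited.
full = layers_to_composite(interface, {"VideoElement1201"}, privacy_mode=False)
```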
  • the video data can be encoded and sent to the TV serving as the screen projection destination.
  • the acquired video data can be transmitted to the audio and video encoding module of mobile phone A.
  • the audio and video encoding module of mobile phone A can encode the video data, packetize it, and store it in the cache queue.
  • the mobile phone A can send the video data in the buffer queue to the TV.
  • the media stream transmission module of the mobile phone A can take out the buffered video data from the buffer queue and transmit it to the TV through the connection channel between the mobile phone A and the TV, such as to the media stream transmission module of the TV.
  • the TV can display an interface or elements in the interface corresponding to the video application on the TV according to the video data.
  • after the data is de-packetized and decoded by the audio and video decoding module of the TV, the corresponding video data can be obtained.
  • the audio and video decoding module of the TV transmits the video data to the video rendering module of the TV, and the video rendering module displays the corresponding interface content.
  • the interface of the video application in the mobile phone A or the elements in the interface can be projected and displayed on the TV, or the "transfer" of the video application from the mobile phone A to the TV can be realized.
  • the user can continue to view the content of the video application on the TV.
  • the object dragged by the user is an element in the interface of the video application, such as a video element.
  • the user performs a drag operation on the video element 1201, such as an operation of long pressing and moving a finger to the right.
  • the phone can draw and display an animation of the video element 1201 as the user's finger moves.
  • mobile phone A creates a virtual display, such as virtual display 1 (the virtual display 1 may be the first virtual display in this embodiment of the application), and draws the layer where the video element 1201 (the video element 1201 may be the first element in this embodiment of the application) is located in the current interface on the virtual display 1, so that mobile phone A can perform video extraction to obtain video data, such as video data a (the video data a may be the data of the interface of the first application in this embodiment of the application).
  • mobile phone A can encode and packetize the video data a and store it in the cache queue.
  • the mobile phone A can send the video data a in the buffer queue to the TV.
  • after the TV de-packetizes and decodes the video data a, it performs rendering, so as to display the video X played in the video element 1201 on the TV.
  • the "transfer" of the video application from the mobile phone A to the TV is realized, and the user can continue to watch the video X on the TV.
  • the TV may display the dragged object on the TV after the user releases the drag of the object on the mobile phone A.
  • the mobile phone A sends the video data a in the cache queue to the TV.
  • mobile phone A sends the video data a to the TV after receiving the user's operation of releasing the drag of the video element 501.
  • a visual effect of dragging the object from mobile phone A to the TV is provided to the user. During the dragging process of the object, if part of the area of the object overflows the display screen, the object can be displayed on both mobile phone A and the TV.
  • mobile phone A can perform video extraction to obtain video data a, encode and packetize the video data a, and send it to the TV.
  • the mobile phone A can also send the rectangle information of the video element 1201 and the coordinate information of a certain corner (eg, upper left corner) of the video element 1201 to the TV during the dragging process.
  • according to the received rectangle information of the video element 1201, the coordinate information of the upper left corner of the video element 1201 during the dragging process, and the resolution of mobile phone A, the TV determines that an area of the video element 1201 overflows the display screen of mobile phone A, and then determines, according to the same rectangle information, coordinate information and resolution, the information corresponding to the area of the video element 1201 that can be displayed on the display screen of the TV.
  • the TV de-packetizes and decodes the video data a, and performs interface rendering according to the information of the determined area and the de-packetized and decoded video data a, so as to realize the drawing of the video X played in the video element 1201 on the TV.
  • an interface 1 is displayed on the display screen of the TV, and the content of the interface 1 is the same as the content of the part of the video X carried in the video element 1201 of mobile phone A that overflows the display screen of the mobile phone.
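The overflow computation described above can be sketched as simple rectangle arithmetic over the element's rectangle information, its current top-left coordinate, and the source resolution. The coordinate conventions and the assumption that the drag crosses the right edge are illustrative choices, not taken from the patent.

```python
def overflow_region(elem_w, elem_h, top_left_x, top_left_y, screen_w, screen_h):
    """Return the part of the dragged element lying beyond the right
    edge of the source screen (the part the destination should draw),
    as a rectangle in element-local coordinates, or None if the element
    is still fully on the source display."""
    right = top_left_x + elem_w
    if right <= screen_w:
        return None  # nothing has overflowed the source display yet
    overflow_w = right - screen_w          # width that crossed the edge
    local_x = elem_w - overflow_w          # where the overflowed part starts
    return {"x": local_x, "y": 0, "w": overflow_w, "h": elem_h}

# An 800x450 element dragged so its top-left sits at x=700 on a
# 1080-pixel-wide source screen: 420 pixels have crossed the edge.
region = overflow_region(800, 450, 700, 200, 1080, 2340)
```

As the drag proceeds and the top-left coordinate is re-sent in real time, recomputing this region lets the destination update interface 1 accordingly.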
  • the mobile phone A can acquire the video data a and the coordinate information of the upper left corner of the video element 1201 in real time during the dragging process, and send it to the TV.
  • the TV can update the interface 1 in real time according to the received data.
  • the TV can display the interface 1 in full screen on the display screen of the TV according to the video data a received in real time.
  • the content in the interface 1 is the same as the entire content of the video X carried in the video element 1201 .
  • the interface 1 may be the first interface in this embodiment of the application.
  • the above-mentioned process of extracting, encoding, packetizing, and buffering the content of the application may be referred to as creating a media stream. That is, in combination with the above example, when the content of the video application includes the interface content, mobile phone A can create a virtual display (such as virtual display 1), and use the virtual display 1 to realize the creation of one media stream (such as the first media stream). Afterwards, mobile phone A can realize the projection of the interface content of the video application to the TV by sending the data corresponding to the created first media stream, such as the above-mentioned video data a, or the first video data, to the TV.
  • mobile phone A can realize the projection of the content of other applications on mobile phone A to the TV by creating another one or more media streams.
  • mobile phone A can create another media stream for the fitness application, such as the second media stream, so as to realize the projection of the content of the fitness application, such as the interface content, to the TV.
  • the process of creating a media stream for a fitness application to project the content of the fitness application to the TV is similar to the above-mentioned process of creating a media stream for a video application to project the content of the video application to the TV, and will not be described in detail here.
  • the user after projecting the content of the video application in the mobile phone A to the TV, as shown in FIG. 14 , the user opens the fitness application of the mobile phone A (the operation of opening the fitness application may be the second operation in the embodiment of the application) to view fitness video.
  • Mobile phone A receives the user's drag operation on the video element (the video element may be the second element in the embodiment of the application) that carries the fitness video (the drag operation may be the third operation in the embodiment of the application) .
  • the mobile phone A makes the video element move on the display screen of the mobile phone A following the movement of the user's finger, giving the user a visual effect that the video element is dragged by the user's finger.
  • the mobile phone A can determine whether the user's drag intention is to drag across devices.
  • mobile phone A may create another virtual display, such as one called virtual display 2 (the virtual display 2 may be the second virtual display in this embodiment of the application), and draw the layer where the video element is located in the current interface on the virtual display 2, so that mobile phone A performs video extraction to obtain video data, such as video data b (the video data b may be the data of the interface of the second application).
  • mobile phone A can encode the video data b, packetize it, and store it in the cache queue. After that, mobile phone A can send the video data b in the buffer queue to the TV.
  • the mobile phone A may also send the rectangle information of the video element and the coordinate information of a certain corner (eg, upper left corner) of the video element to the TV during the dragging process.
  • the TV can receive the video data b, the rectangle information of the video element, and the coordinate information of the upper left corner of the video element during the dragging process.
  • the TV can determine, according to the rectangle information of the video element, the coordinate information of the upper left corner of the video element during the dragging process, and the resolution of mobile phone A, the information corresponding to the area of the video element that can be displayed on the display screen of the TV.
  • after the TV de-packetizes and decodes the video data b, it renders the interface according to the information of the determined area and the de-packetized and decoded video data b, so as to realize the drawing of interface 2; the content in the interface 2 is the same as the content of the fitness video of the fitness application in mobile phone A.
  • the TV can simultaneously display the content of the video application of the mobile phone A and the content of the fitness application on the TV display screen.
  • the TV currently displays the content of the video application in full screen (such as the above interface 1).
  • the TV can display the above-mentioned interface 2 in the form of a small window (or picture-in-picture, or floating window) on the display screen of the TV.
  • the mobile phone A can obtain the video data b and the coordinate information of the upper left corner of the video element during the dragging process in real time, and send it to the TV.
  • the TV can update the interface 2 in real time according to the received data. After the user releases the drag, as shown in (d) of FIG. 13, the TV can continue to display the interface 2 in the form of a small window on the display screen of the TV according to the video data b received in real time.
  • the content of the interface 2 is the same as the entire content of the fitness video of the fitness application. It can be obtained from the above description that mobile phone A creates a virtual display 2 and uses the virtual display 2 to implement the creation of another media stream, such as the second media stream. By sending the data corresponding to the created second media stream, such as the above-mentioned video data b, or the second video data, to the TV, mobile phone A realizes the projection of the content of the fitness application to the TV.
  • the interface 2 may be the second interface in this embodiment of the application.
  • the interface including the content of the interface 2 and the content of the interface 1 may be the third interface in this embodiment of the application.
  • in this way, the content of the video application and the fitness application of mobile phone A, such as the interface content, is projected onto the TV serving as the screen projection destination, which satisfies the user's demand for viewing the content of the video application and the fitness application at the same time.
  • the content of the above application may also include audio.
  • for example, when the user uses an application of mobile phone A, such as a video application, to watch a video, or uses the music application of mobile phone A to listen to music, mobile phone A can not only project the interface content of the currently displayed application to the screen-casting destination, but can also project the audio to the screen-casting destination.
  • the mobile phone A not only needs to send the above-mentioned video data (such as video data a or video data b) to the TV, but also needs to send audio data to the TV.
  • the video data is used for the TV to display the corresponding interface on the display screen of the TV
  • the audio data is used for the TV to play the corresponding sound.
  • audio data can be obtained by creating an audio recording (AudioRecord) object. That is to say, when the user triggers mobile phone A to start projecting the content of the application, if the content of the application includes interface content and audio, mobile phone A can create a virtual display and an AudioRecord object, and use the virtual display and the AudioRecord object to realize the creation of one media stream. After that, the corresponding video data and audio data are sent to the TV through the created media stream, so as to realize the projection of the application content, including the interface content and the audio, to the TV.
  • the mobile phone A may create multiple AudioRecord objects in advance, which are used for subsequent audio extraction of different media streams. For example, it can be used for subsequent audio extraction of different applications, that is, based on the created AudioRecord object, the audio data of the application that needs to be projected is redirected to the corresponding media stream, and other audio data is still output from the projection source.
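The redirection rule above, where the audio of a projected application is routed into the media stream created for it while all other audio stays on the source's local output, can be modeled as a routing table. This is an illustrative sketch with invented names; it is not the Android AudioRecord API.

```python
class AudioRouter:
    """Route each application's audio either to the media stream
    created for it (when that application is being projected) or to
    the local output of the projection source."""

    def __init__(self):
        self.redirects = {}  # app name -> media stream id

    def redirect(self, app, stream_id):
        # Called when a pre-created AudioRecord object is bound to a
        # media stream for this application's audio extraction.
        self.redirects[app] = stream_id

    def route(self, app):
        # Applications without a redirect keep playing locally.
        return self.redirects.get(app, "local-speaker")

router = AudioRouter()
router.redirect("video_app", "stream-1")    # first media stream
router.redirect("fitness_app", "stream-2")  # second media stream

projected = router.route("video_app")
local = router.route("music_app")  # not projected; stays on the phone
```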
  • the content of the video application includes interface content and audio.
  • Mobile phone A pre-creates two AudioRecord objects and creates a cache. After the user triggers the content of the video application to start being projected, mobile phone A can realize the projection of the content of the video application to the TV by creating the first media stream. The process of projecting the interface content of the video application to the TV is as described in the foregoing embodiment, and details are not repeated here.
  • the mobile phone A can also call the AudioRecord object to perform audio extraction to obtain audio data, such as audio data a, which is used to realize the projection of the audio of the video application to the TV.
  • the specific process of acquiring the audio data a may include: mobile phone A, for example, the audio capture module of mobile phone A, can call one of the two AudioRecord objects created in advance, such as one called AudioRecord object 1 (the AudioRecord object 1 may be the first AudioRecord object in this embodiment of the application). After the AudioRecord object 1 is called, the audio capture module of mobile phone A can record the audio in the video played by the video application to obtain audio data, such as audio data a (the audio data a may be the audio data of the first application in this embodiment of the application). After acquiring the audio data a, the audio capture module of mobile phone A can transmit the acquired audio data a to the audio and video encoding module of mobile phone A. The audio and video encoding module of mobile phone A can encode the audio data a, packetize it, and store it in the cache.
  • the media stream transmission module of the mobile phone A can obtain the audio data a from the buffer, and send it to the TV through the connection channel between the mobile phone A and the TV.
  • the television can output corresponding audio according to the audio data a.
  • after the data is de-packetized and decoded by the audio and video decoding module of the TV, the corresponding audio data a can be obtained.
  • the audio and video decoding module of the TV transmits the audio data a to the audio rendering module of the TV, and the audio rendering module outputs the corresponding audio. In this way, the audio of the video application in mobile phone A is projected to the TV. At this point, other audio of mobile phone A is still output through mobile phone A.
  • mobile phone A can create a second media stream to realize the projection of the content of the fitness application to the TV.
  • the process of projecting the interface content of the fitness application to the TV is as described in the foregoing embodiment, and details are not repeated here.
  • mobile phone A can also call another AudioRecord object pre-created by mobile phone A, such as one called AudioRecord object 2 (the AudioRecord object 2 may be the second AudioRecord object in this embodiment of the application), so as to realize the projection of the audio of the fitness application to the TV; the specific implementation process is similar to the projection of the audio of the video application to the TV, and is not repeated here.
  • the audio of the video application and fitness application of mobile phone A is output through the TV, and other audio is output through mobile phone A.
  • the TV can select one channel of audio to output. For example, taking the case where the TV displays the interface content projected by different applications in the form of a large window (that is, a full-screen display window) and a small window as an example, the TV can be configured not to output the audio of the small window, but to output the audio of the large window. For example, in conjunction with the example shown in (d) of FIG. 13, the TV plays the sound of the video X and does not play the sound of the fitness video.
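The selection rule above, output audio only for the large (full-screen) window and mute the small windows, can be written as a small selection function. Sketch only; the window records are assumed shapes for illustration.

```python
def audio_to_play(windows):
    """Given the projection windows currently shown on the destination,
    return the application whose audio should be output: the large
    (full-screen) window's audio plays, small windows stay silent."""
    for window in windows:
        if window["size"] == "large":
            return window["app"]
    return None  # no full-screen window, so no projected audio is selected

windows = [
    {"app": "video_app", "size": "large"},    # interface 1, full screen
    {"app": "fitness_app", "size": "small"},  # interface 2, floating window
]
playing = audio_to_play(windows)
```

Swapping the large and small windows (as the remote-control confirm key does) would change which application's audio this function selects.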
  • media policies may be configured for creating the above-described media streams.
  • the media policy may be pre-configured, or a configuration interface (eg, the configuration interface may be the interface shown in FIG. 8 ) may be provided for the user to set.
  • the media policies corresponding to different media streams may be the same or different.
  • the media policy corresponding to one media stream may include: whether to distribute audio (or project audio), whether to distribute video (or project interface content), the parameters corresponding to the virtual display when distributing video (such as name, width, height, bit rate, encoding format, dots per inch (DPI), etc.), the specifications of the audio collected when distributing audio, and the like.
  • mobile phone A can determine whether to project audio and whether to project video according to the corresponding media policy, and collect video data and audio data of the specified specifications according to the corresponding parameters.
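One possible shape for such a per-stream media policy, covering the fields listed above, is sketched below. All field names and default values are illustrative assumptions, not definitions from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class MediaPolicy:
    """Per-media-stream policy: what to distribute and with which specs."""
    distribute_audio: bool = True   # whether to project audio
    distribute_video: bool = True   # whether to project interface content
    # Parameters for the virtual display used when distributing video.
    display: dict = field(default_factory=lambda: {
        "name": "screen-cast", "width": 1920, "height": 1080,
        "bitrate_kbps": 8000, "codec": "h264", "dpi": 320,
    })
    # Specification of the audio collected when distributing audio.
    audio_spec: dict = field(default_factory=lambda: {
        "sample_rate": 48000, "channels": 2,
    })

# Policies for different media streams may be the same or different,
# e.g. a video-only policy for a stream whose window should stay muted:
muted = MediaPolicy(distribute_audio=False)
```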
  • the TV can set a window of one of the interfaces as the focus window by default, for example, the TV defaults a small window as the focus window.
  • the television displays a prompt sign 1301 for prompting the user that the small window, that is, the window of the interface 2, is the focus window.
  • the user can use the remote control of the TV to select and switch the focus window, switch the layout of the large and small windows, and also close the large and small windows.
  • the window used for displaying interface 1 in full screen may be referred to as the large window.
  • when the TV receives the user's operation of the left button or the right button of the remote control, it switches the focus window.
  • when the focus window is the small window and the TV receives the user's operation of the confirmation button on the remote control, as shown in (e) of Figure 13, the TV can display the small window, namely interface 2, in full screen, and display the large window, namely interface 1, in the form of a small window.
  • when the TV receives the user's operation of the return key on the remote control, the TV can stop displaying the small window, or close the small window, and the TV can also notify mobile phone A to stop projecting the content of the application corresponding to the small window.
  • mobile phone A can switch the application corresponding to the small window to the home screen to continue running. If the TV continues to receive the user's operation of the return key on the remote control, the TV can stop displaying the large window, and the TV can also notify mobile phone A to stop projecting the content of the application corresponding to the large window. In addition, mobile phone A can stop the application corresponding to the small window from running on the home screen, and start the application corresponding to the large window to run on the home screen.
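The remote-control behavior described above (arrow keys switch focus, confirm swaps the large and small windows, return closes the small window first and then the large one) can be sketched as a tiny state machine. Illustrative only; the real destination would also notify the source to stop the corresponding media stream on each close.

```python
class ProjectionWindows:
    """Minimal model of the destination's large/small window handling."""

    def __init__(self, large_app, small_app):
        self.large = large_app
        self.small = small_app
        self.focus = "small"  # the small window is the focus by default

    def on_arrow(self):
        # Left/right arrow keys toggle which window has focus.
        self.focus = "large" if self.focus == "small" else "small"

    def on_confirm(self):
        # Confirm on the focused small window swaps the window layout.
        if self.focus == "small" and self.small is not None:
            self.large, self.small = self.small, self.large

    def on_return(self):
        # Return closes the small window first, then the large window.
        if self.small is not None:
            self.small = None
        else:
            self.large = None

tv = ProjectionWindows(large_app="video_app", small_app="fitness_app")
tv.on_confirm()  # fitness application becomes full screen
tv.on_return()   # close the (now) small window showing the video app
```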
  • the projection content corresponding to different media streams is displayed on the screen projection destination in the form of a large window and a small window, which is only an example.
  • the projection destination can also use other arrangements, such as vertical arrangement and horizontal arrangement, to display windows corresponding to different media streams.
  • the specific form in which the projection destination displays the windows corresponding to the different media streams is not limited in this embodiment.
  • the screen projection destination can also dynamically adjust the size and arrangement of the windows corresponding to each media stream displayed by the projection destination according to the number of projected media streams.
  • the number of projected media streams can be dynamically increased or decreased. When the number of projected media streams increases or decreases, the screen projection destination can adjust the size and arrangement of windows corresponding to each media stream according to the current number of projected media streams.
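The dynamic adjustment described above can be pictured as recomputing window geometry whenever a media stream is added or removed. The following sketch uses an equal-split horizontal tiling as an illustrative assumption; the embodiment deliberately leaves the actual layout policy open, and all names here are hypothetical:

```python
def layout_windows(stream_ids, screen_w, screen_h):
    """Tile one window per projected media stream in a horizontal row.

    Returns {stream_id: (x, y, width, height)}. The equal-split policy is
    an illustrative assumption, not the patent's required layout.
    """
    n = len(stream_ids)
    if n == 0:
        return {}
    w = screen_w // n
    return {sid: (i * w, 0, w, screen_h) for i, sid in enumerate(stream_ids)}

# When a stream is added or removed, the destination simply recomputes:
layout = layout_windows(["stream-a"], 1920, 1080)              # one full-width window
layout = layout_windows(["stream-a", "stream-b"], 1920, 1080)  # two half-width windows
```

In the same spirit, vertical or grid arrangements would just be different bodies of `layout_windows`.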
  • the above scenario 1 is described by taking the screen projection source end projecting the contents of multiple applications to the same screen projection destination as an example.
  • the screen projection source terminal may also project multiple applications thereof to different screen projection destinations.
  • the following description is given in conjunction with scenario 2.
  • Mobile phone B does not support parallel multitasking.
  • the user wants to view the content of APP3 and the content of APP4 of mobile phone B at the same time.
  • APP3 is a fitness application
  • APP4 is an educational application.
  • APP3 may be the first application in the embodiment of the present application
  • APP4 may be the second application in the embodiment of the present application.
  • mobile phone B can be the above-mentioned first terminal.
  • the contents of the two applications can be projected to one or more other terminals serving as the screen projection destination, so as to satisfy the user's demand for viewing the content of the fitness application and the educational application at the same time.
  • take the screen projection destination including two terminals, such as a TV and a tablet, as an example (the TV may be the second terminal in the embodiment of the application, and the tablet may be the third terminal in the embodiment of the application).
  • Phone B can create two media streams to project the content of the fitness application to the TV and the content of the education application to the tablet.
  • the specific implementation is similar to the corresponding description in the above scenario 1, and will not be described in detail here. The difference is that the data corresponding to one of the media streams created by mobile phone B is transmitted to the TV, which is used to realize the projection of the content of the fitness application on the TV.
  • the data corresponding to another media stream is transmitted to the tablet, which is used to realize the projection of the content of the educational application on the tablet.
  • the description continues by taking the user triggering the mobile phone B to start projecting the content of the application by dragging as an example. That is, in the cross-device dragging scenario, the user can trigger mobile phone B to create two media streams by dragging, so as to project the content of the fitness application on mobile phone B to the TV and the content of the education application to the tablet.
  • the content of fitness applications and the content of educational applications include interface content and audio.
  • mobile phone B is connected to both the tablet and the TV.
  • Phone B pre-creates two AudioRecord objects.
  • the user opens the fitness application of mobile phone B to view the fitness video.
  • the mobile phone B receives the user's drag operation on the video element carrying the fitness video (the video element may be the first element in the embodiment of the application).
  • the mobile phone B can determine whether the user's drag intention is to drag across devices.
  • after determining that the user's drag intention is to drag across devices, mobile phone B may create a virtual display, such as virtual display A (virtual display A may be the first virtual display in this embodiment of the application), and call one of the two pre-created AudioRecord objects, such as AudioRecord object A (AudioRecord object A may be the first AudioRecord object in this embodiment of the application).
  • based on virtual display A and AudioRecord object A, mobile phone B can realize the creation of one media stream to obtain corresponding video data and audio data, such as video data a' and audio data a' respectively.
  • the mobile phone B can send the video data a' and the audio data a' to the tablet or TV connected to the mobile phone B to realize the projection of the fitness application content to the screen projection destination.
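The pairing described above — one virtual display plus one pre-created AudioRecord object bound to each media stream — can be modeled roughly as follows. The class and field names are hypothetical stand-ins for the Android `VirtualDisplay`/`AudioRecord` handles the text refers to, and the pooling of pre-created objects is an illustrative simplification:

```python
from dataclasses import dataclass

@dataclass
class MediaStream:
    stream_id: str        # e.g. the virtual display name, later used to demultiplex
    virtual_display: str  # stand-in for a VirtualDisplay handle
    audio_record: str     # stand-in for an AudioRecord handle

def create_media_stream(stream_id, display_pool, record_pool):
    """Bind one virtual display and one pre-created AudioRecord object to a
    new media stream, as mobile phone B does for each dragged application."""
    return MediaStream(stream_id, display_pool.pop(0), record_pool.pop(0))

displays = ["virtual display A", "virtual display B"]
records = ["AudioRecord A", "AudioRecord B"]
fitness = create_media_stream("fitness", displays, records)     # first media stream
education = create_media_stream("education", displays, records) # second media stream
```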
  • mobile phone B may use one of the tablet and the TV as the destination terminal for this screen projection. For example, after mobile phone B determines that the user's drag intention is to drag across devices, mobile phone B may display a device list, where the device list includes the device identifier of the tablet and the device identifier of the TV. The user can select a device identifier in the device list, so that mobile phone B can determine the projection destination for this projection. If mobile phone B receives the user's selection operation on the device identifier of the TV, indicating that the user wants to project the content of the fitness application to the TV, then according to the user's selection operation, mobile phone B can send the above-mentioned video data a' and audio data a' to the TV.
  • mobile phone B may determine the screen projection destination of this screen projection according to the drag direction of the drag operation performed by the user and the direction of the terminal connected to mobile phone B relative to mobile phone B.
  • mobile phone B can obtain the direction of each terminal connected to mobile phone B relative to mobile phone B, and determine the terminal located in the drag direction as the projection destination of this projection. For example, suppose the tablet is located in the direction pointing to the upper edge of the mobile phone, the TV is located in the direction pointing to the right edge of the mobile phone, and the drag direction of the user's drag operation is to the right.
  • mobile phone B can obtain the directions of the TV and the tablet connected to mobile phone B relative to mobile phone B. According to these directions and the drag direction, mobile phone B can determine that the TV is located in the drag direction, indicating that the user wants to project the content of the fitness application to the TV; mobile phone B can then send the video data a' and the audio data a' to the television. The directions of the other terminals relative to mobile phone B can be obtained by mobile phone B using positioning technologies such as Bluetooth, ultra-wideband (UWB), and ultrasound.
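A direction-based destination choice of the kind just described can be sketched as picking the connected device whose bearing best matches the drag direction. The bearing convention (degrees, 0 = toward the top edge, 90 = toward the right edge) and all device names are illustrative assumptions:

```python
def pick_destination_by_drag(drag_dir, device_bearings):
    """Return the connected device whose bearing (degrees, 0 = up, 90 = right,
    as obtained via Bluetooth/UWB/ultrasound positioning) is closest to the
    drag direction."""
    def angular_dist(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    return min(device_bearings, key=lambda dev: angular_dist(device_bearings[dev], drag_dir))

# Tablet toward the top edge, TV toward the right edge, user drags right:
bearings = {"tablet": 0, "tv": 90}
chosen = pick_destination_by_drag(90, bearings)  # the TV is in the drag direction
```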
  • after the TV receives the video data a' and the audio data a' from mobile phone B, it can unpack and decode them and then render the audio and video, so as to display the fitness video on the TV, as shown in 3001 in Figure 30, and play the corresponding audio, realizing the projection of the content of the fitness application of mobile phone B to the TV.
  • after projecting the content of the fitness application of mobile phone B to the TV, the user opens the educational application of mobile phone B to view an educational video.
  • Mobile phone B receives the user's drag operation on the video element bearing the educational video.
  • mobile phone B may create a virtual display after determining that the user's drag intention is to drag across devices, such as virtual display B (virtual display B may be the second virtual display in this embodiment of the application), and call the other of the two pre-created AudioRecord objects, such as AudioRecord object B (AudioRecord object B may be the second AudioRecord object in this embodiment of the application).
  • based on virtual display B and AudioRecord object B, mobile phone B can realize the creation of another media stream to obtain corresponding video data and audio data, such as video data b' and audio data b' respectively. Afterwards, mobile phone B can send the video data b' and the audio data b' to the tablet or TV connected to mobile phone B to realize the projection of the educational application content to the screen projection destination.
  • mobile phone B can determine the projection destination of this projection according to the user's selection operation, or according to the drag direction of the drag operation performed by the user and the directions of the terminals connected to mobile phone B relative to mobile phone B. For example, if mobile phone B receives the user's operation of selecting the tablet, or determines that the tablet is located in the drag direction, indicating that the user wants to project the content of the educational application to the tablet, mobile phone B can send the above-mentioned video data b' and audio data b' to the tablet.
  • after the tablet receives the video data b' and audio data b' from mobile phone B, it can unpack and decode them and then render the audio and video, so as to display the educational video on the tablet, as shown in 3002 in Figure 30, and play the corresponding audio, realizing the projection of the content of the educational application of mobile phone B to the tablet.
  • in this way, the contents of the fitness application and the educational application of mobile phone B, such as interface content and audio, are respectively projected to the TV and the tablet serving as the screen projection destinations, which satisfies the user's demand for viewing the content of the fitness application and the educational application at the same time.
  • the mode in which the screen projection source end in scenario 1 creates multiple media streams and sends them to the same screen projection destination end to realize the projection of application content may be called the aggregation mode.
  • the mode in which the screen projection source end in scenario 2 creates multiple media streams and sends them to multiple different screen projection destination ends to realize the projection of application content may be called the distribution mode.
  • the screen projection source terminal also supports projecting one media stream created by it to multiple screen projection destinations, and this mode may be called a broadcast mode.
  • the screen projection source terminal can simultaneously support the above three video distribution modes, that is, the screen projection source terminal has the ability to implement the above three video distribution modes.
  • the screen projection source supports configuration of the three video distribution modes; for example, a settings interface may be provided for the user to set the mode, or the system may configure it by default.
  • the configured video distribution mode can also be understood as the above-mentioned media strategy. That is to say, the screen projection source can obtain the relevant configuration of the video distribution mode from the media policy.
  • the screen projection source end has the ability to realize the above three video distribution modes. If the user sets the video distribution mode of the screen projection source end to the above aggregation mode, then after the multiple media streams are created at the screen projection source end, according to this setting, the screen projection source end can project the multiple media streams to the same screen projection destination to meet the user's multitasking needs. Take the screen projection source creating two media streams as an example.
  • the screen projection source can obtain the video distribution mode as the aggregation mode according to the customized media policy in the service scheduling and policy selection module. According to the user's trigger, the screen projection source can collect the audio and video data of the first channel and the audio and video data of the second channel.
  • the screen projection source end separately encodes the audio and video data of the first channel of audio and video data and the audio and video data of the second channel to realize the creation of two channels of media streams.
  • after adaptation by the multi-device connection management protocol, the projection source can transmit the two media streams to the same projection destination.
  • the source device may assign different identifiers to the different channels of audio and video data (for example, the identifier may be an identifier corresponding to the virtual display, such as the name of the virtual display, or an index allocated by the source device for the different media streams), so that the projection destination can distinguish them.
  • the destination end of the screen projection can distinguish the first channel of audio and video data from the second channel according to the different identifiers of the received audio and video data.
  • after decoding, the two channels of audio and video data are respectively rendered, so as to realize the projection of the application content corresponding to the two media streams at the screen projection destination.
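The identifier-based separation described above amounts to demultiplexing received packets by the stream identifier the source attached. A minimal sketch (packet structure and identifier values are illustrative assumptions):

```python
def demultiplex(packets):
    """Group received audio/video packets by the stream identifier the source
    attached (e.g. a virtual-display name or a per-stream index), so that each
    stream can be decoded and rendered in its own window."""
    streams = {}
    for packet in packets:
        streams.setdefault(packet["stream_id"], []).append(packet["payload"])
    return streams

received = [
    {"stream_id": "virtualdisplay-A", "payload": "frame a1"},
    {"stream_id": "virtualdisplay-B", "payload": "frame b1"},
    {"stream_id": "virtualdisplay-A", "payload": "frame a2"},
]
by_stream = demultiplex(received)  # two streams, ready for per-stream decoding
```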
  • the projection source has the ability to implement the above three video distribution modes.
  • the system defaults the video distribution mode of the projection source to the above distribution mode.
  • the projection source end can project the multi-channel media stream to multiple different projection destinations to meet the multitasking needs of users.
  • the screen projection source creates two media streams.
  • the screen projection source can obtain the video distribution mode as the distribution mode according to the customized media policy in the service scheduling and policy selection module. According to the user's trigger, the screen projection source can collect the audio and video data of the first channel and the audio and video data of the second channel.
  • the screen projection source end separately encodes the audio and video data of the first channel of audio and video data and the audio and video data of the second channel to realize the creation of two channels of media streams.
  • the projection source can transmit to different projection destinations after being adapted by the multi-device connection management protocol.
  • for example, the first channel of audio and video data is transmitted to screen projection destination 1, and the second channel of audio and video data is transmitted to screen projection destination 2.
  • after receiving the corresponding audio and video data, screen projection destination 1 and screen projection destination 2 each perform audio and video decoding on the received data and then perform audio and video rendering, so as to realize the projection of the application content corresponding to the two media streams on screen projection destination 1 and screen projection destination 2.
  • the screencasting source has the ability to realize the above three video distribution modes.
  • if the system sets the video distribution mode of the screencasting source to the above broadcast mode by default, the screencasting source can create one media stream and, according to this setting, project this media stream to multiple different projection destinations.
  • the screen projection source can obtain the video distribution mode as the broadcast mode according to the customized media strategy in the service scheduling and strategy selection module.
  • the screen projection source can collect single-channel audio and video data according to the user's trigger.
  • the source end of the screen projection performs audio and video encoding on the audio and video data to realize the creation of a media stream.
  • the projection source can transmit to different projection destinations after being adapted by the multi-device connection management protocol.
  • the audio and video data of this channel can be transmitted to the projection destination 1 and the projection destination 2.
  • projection destination 1 and projection destination 2 respectively decode the received audio and video data and then perform audio and video rendering, so as to realize the projection of the application content corresponding to this media stream on projection destination 1 and projection destination 2.
  • when the screen projection source end has the ability to implement the above three video distribution modes, the screen projection source end can also determine the video distribution mode according to the number of devices connected to it. For example, if one device is connected to the screen projection source, the screen projection source can determine that the video distribution mode is the aggregation mode. For the created multiple media streams, the screen projection source end can project them all to that device, so as to realize the projection of the content of different applications of the screen projection source end on the same screen projection destination end, as in the above scenario 1. For another example, if multiple devices are connected to the screen projection source, the screen projection source may determine that the video distribution mode is the distribution mode.
  • the screen projection source can project the multi-channel media streams to different devices, so as to realize the projection of different application contents in the screen projection source to different screen projection destinations, as shown in the above scenario 2.
  • the screencasting source can also determine the video distribution mode according to the difference in drag direction when the user performs drag operations for different applications. For example, if the user drags different applications in different directions, the projection source can determine that the video distribution mode is the distribution mode; therefore, for the created multiple media streams, the projection source can project the media streams to different devices.
  • if the user drags different applications in the same direction, the projection source can determine that the video distribution mode is the aggregation mode; therefore, for the created multiple media streams, the projection source can project the media streams to the same device.
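The mode-selection policies just described (device count, or per-application drag directions) can be condensed into one illustrative decision function; the mode names and rule ordering here are a sketch of the text's examples, not a normative algorithm:

```python
def pick_video_distribution_mode(num_connected, drag_directions=None):
    """Choose among the three video distribution modes following the examples
    in the text: if per-application drag directions are known, identical
    directions suggest aggregation and differing directions suggest
    distribution; otherwise, one connected device suggests aggregation and
    several suggest distribution."""
    if drag_directions:
        return "aggregation" if len(set(drag_directions)) == 1 else "distribution"
    return "aggregation" if num_connected == 1 else "distribution"

mode = pick_video_distribution_mode(2, drag_directions=["right", "up"])
```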
  • a terminal serving as a screencasting source can create multiple media streams to realize the projection of the contents of multiple applications of the terminal to one or more screencasting destinations. It satisfies the requirement of multitasking and parallelism, which can improve the use efficiency of the terminal and improve the user experience.
  • the content at the source end of the projection screen is screen-recorded, encoded, and buffered in the local cache, so as to realize the display of the content of the projection source end at the projection destination end, supporting both mirror projection and heterogeneous projection.
  • third-party applications can integrate the corresponding screen projection capabilities (for example, through a provided dll library or aar package) and call the API interface of the Distributed Multimedia Protocol (DMP) to realize screen projection, so that projection of online video can also be realized.
  • Mirror projection means that the audio and video rendered by the destination end of the projection screen are exactly the same as the source end of the projection screen.
  • for example, if a picture, audio, or video is opened on the source end of the projection screen, the destination end also displays the picture and plays the audio or video;
  • heterogeneous projection means projecting a specified application or window to the projection destination, which can achieve both sharing and privacy protection.
  • the display of multiple projection contents can be realized on the device.
  • the user datagram protocol (UDP) and the forward error correction (FEC) protocol can be used to transmit the media streams from the source end to the screen projection destination end, which can effectively alleviate packet loss and avoid congestion.
  • the invalidate reference frame (IFR) technology can be used to ensure fast recovery after packet loss and avoid blurry screens and long-term freezes.
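As a minimal illustration of the kind of forward error correction that can ride on UDP, the sketch below adds one XOR parity packet per group of equal-length packets, which lets a single lost packet be rebuilt without retransmission. This is a generic textbook scheme, not the specific FEC protocol the patent uses:

```python
def xor_parity(packets):
    """Compute one parity packet as the byte-wise XOR of a group of
    equal-length packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover_lost(received, parity):
    """Recover the single missing packet of a group from the surviving
    packets plus the parity packet (XOR is its own inverse)."""
    return xor_parity(received + [parity])

group = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(group)
# If pkt2 is lost in transit, it can be rebuilt from the survivors:
recovered = recover_lost([b"pkt1", b"pkt3"], parity)
```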
  • An embodiment of the present application further provides a screen projection device, and the device can be applied to an electronic device, such as the first terminal or the second terminal in the foregoing embodiment.
  • the device may include: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to cause the screen projection device to implement the functions or steps performed by the first terminal (such as a television set) or the second terminal (such as a mobile phone) in the above method embodiments.
  • an embodiment of the present application provides an electronic device (such as the above-mentioned first terminal or second terminal). The electronic device includes a display screen, one or more processors, and a memory; the display screen, the processor, and the memory are coupled; the memory is used for storing computer program code, and the computer program code includes computer instructions. When the computer instructions are executed by the electronic device, the electronic device is caused to implement the functions or steps performed by the first terminal (such as the TV, mobile phone A, or mobile phone B) or the second terminal (such as the mobile phone, TV, or tablet) in the above method embodiments.
  • the electronic device includes but is not limited to the above-mentioned display screen, memory and one or more processors.
  • the structure of the electronic device may refer to the structure of the mobile phone shown in FIG. 2 .
  • the chip system includes at least one processor 3401 and at least one interface circuit 3402 .
  • the processor 3401 may be the processor in the above-mentioned terminal.
  • the processor 3401 and the interface circuit 3402 may be interconnected by wires.
  • the processor 3401 may receive and execute computer instructions from the memory of the terminal (such as the first terminal or the second terminal described above) through the interface circuit 3402.
  • the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
  • Embodiments of the present application further provide a computer-readable storage medium, which is used to store computer instructions run by the above-mentioned terminal (eg, the first terminal or the second terminal).
  • Embodiments of the present application further provide a computer program product, including computer instructions run by the above-mentioned terminal (eg, the first terminal or the second terminal).
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation, there may be other division manners. For example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may be one physical unit or multiple physical units, that is, they may be located in one place, or may be distributed to multiple different places . Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence, or the parts that contribute to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.


Abstract

This application discloses a screen projection method and device, relating to the field of electronic devices, and realizes the presentation of the display interfaces of multiple devices on the same device, i.e., many-to-one projection. The specific solution is: a first terminal receives data from each of multiple second terminals; according to the data received from the multiple second terminals, the first terminal displays multiple first interfaces on the first terminal, the multiple first interfaces corresponding one-to-one to the multiple second terminals; wherein the content of a first interface is a mirror of the content of the second interface displayed by the corresponding second terminal, or the content of the first interface is the same as part of the content of the second interface displayed by the corresponding second terminal.

Description

A screen projection method and device
This application claims priority to the Chinese patent application with application No. 202011425441.8, entitled "A screen projection method and device", filed with the State Intellectual Property Office on December 8, 2020, and to the Chinese patent application with application No. 202110182037.0, entitled "A screen projection method and device", filed with the State Intellectual Property Office on February 9, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of electronic devices, and in particular to a screen projection method and device.
Background
To improve work efficiency, users may connect different devices and use them together. For example, the display interface of one device can be projected onto the display screen of another device for the user to view. At present, presenting the display interface of one device on another device is mainly achieved using one-to-one mirror projection technology; that is, only one-to-one projection can be realized.
However, in scenarios such as meetings and launch-event presentations, it may be necessary to present the display interfaces of multiple devices on the same device (for example, a large-screen device) for the user to view.
Summary
Embodiments of this application provide a screen projection method and device, which realize the presentation of the display interfaces of multiple devices on the same device, i.e., many-to-one projection. In addition, by creating multiple media streams and distributing them to one or more projection destinations according to a policy, the projection source realizes the projected display of the content of multiple applications of one device on other devices.
To achieve the above objectives, this application adopts the following technical solutions:
In a first aspect, an embodiment of this application provides a screen projection method, which may be applied to a first terminal connected to multiple second terminals. The method may include: the first terminal receives data from each of the multiple second terminals; according to the data received from the multiple second terminals, the first terminal displays multiple first interfaces on the first terminal, the multiple first interfaces corresponding one-to-one to the multiple second terminals; wherein the content of a first interface is a mirror of the content of the second interface displayed by the corresponding second terminal, or the content of the first interface is the same as part of the content of the second interface displayed by the corresponding second terminal.
With the above technical solution, the first terminal serving as the projection destination can display, on its display screen, multiple first interfaces according to the data sent by multiple second terminals serving as projection sources, the multiple first interfaces corresponding one-to-one to the multiple second terminals. The content of a first interface is a mirror of the content of the second interface displayed by the corresponding second terminal, or is the same as part of that content. Many-to-one projection from multiple projection sources to one projection destination is thus realized. In this way, in scenarios such as meetings and launch-event presentations, multiple mobile phones and tablets can project the content on their display screens (such as slides or playing videos) onto the same large-screen device, realizing many-to-one projection. This improves the efficiency of collaborative use of multiple devices and improves the user experience.
In a possible implementation, the method may further include: the first terminal may create multiple drawing components, the multiple drawing components corresponding one-to-one to the multiple second terminals. As an example, a drawing component may be a view or a canvas. The first terminal displaying the multiple first interfaces on the first terminal according to the data received from the multiple second terminals may include: the first terminal draws, on the multiple drawing components and according to the data received from the multiple second terminals, the first interface of the corresponding second terminal, so as to display the multiple first interfaces on the first terminal. By creating a view or canvas corresponding to each second terminal, used for drawing the projection interface of that second terminal, preparation is made for many-to-one projection.
In another possible implementation, before the first terminal displays the multiple first interfaces on the first terminal according to the data received from the multiple second terminals, the method may further include: the first terminal configures multiple decoding parameters, the multiple decoding parameters corresponding one-to-one to the multiple second terminals; the first terminal decodes the data received from the corresponding second terminal according to the multiple decoding parameters. By configuring corresponding decoding parameters for different second terminals, used for decoding the corresponding data, multi-channel decoding is realized.
In another possible implementation, before the first terminal receives data from each of the multiple second terminals, the method may further include: the first terminal obtains connection information of the multiple second terminals, the connection information being used for the first terminal to establish a connection with the corresponding second terminal; wherein the one-to-one correspondence between the multiple drawing components and the multiple second terminals includes a one-to-one correspondence between the multiple drawing components and the connection information of the multiple second terminals; and the one-to-one correspondence between the multiple decoding parameters and the multiple second terminals includes a one-to-one correspondence between the multiple decoding parameters and the connection information of the multiple second terminals.
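The per-connection bookkeeping described in these implementations (one drawing component and one set of decoding parameters keyed by each source's connection information) can be pictured as a simple session map. All names, the connection-info format, and the assumed codec below are illustrative, not the actual apparatus:

```python
class ProjectionDestination:
    """Track, per connected projection source, the drawing component and
    decoder parameters used for its stream (a sketch of the bookkeeping,
    not the real first-terminal implementation)."""
    def __init__(self):
        self.sessions = {}

    def on_source_connected(self, connection_info):
        # One drawing component (view/canvas) and one decoder per source.
        self.sessions[connection_info] = {
            "drawing_component": f"view-for-{connection_info}",
            "decoder_params": {"codec": "h264"},  # assumed codec for illustration
        }

    def decode(self, connection_info, data):
        params = self.sessions[connection_info]["decoder_params"]
        return f"decoded {data} with {params['codec']}"

dest = ProjectionDestination()
dest.on_source_connected("phone-A:5555")  # hypothetical connection info
dest.on_source_connected("phone-B:5555")
```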
In another possible implementation, after the first terminal displays the multiple first interfaces on the first terminal according to the data received from the multiple second terminals, the method may further include: the first terminal receives a first operation of the user on a window of a first interface; in response to the first operation, the first terminal shrinks, enlarges, or closes the window, or switches the focus window. Using the input device of the projection destination, the user can control the first interfaces: for example, a focus can be set and switched between the projection interfaces of different source devices according to user operations, and different projection sources can be controlled independently (for example, by shrinking, enlarging, or closing a projection interface). The projection destination can also adjust the layout of the presented projection interfaces as source devices are added or removed, so as to present the best visual effect to the user.
In another possible implementation, after the first terminal displays the multiple first interfaces on the first terminal according to the data received from the multiple second terminals, the method may further include: the first terminal receives a second operation of the user on the first interface corresponding to a second terminal; the first terminal sends data of the second operation to the second terminal, for the second terminal to display a third interface according to the second operation. After receiving a user operation, performed with the input device of the projection destination, on a first interface (i.e., a projection interface), the first terminal sends the data of the corresponding operation to the projection source corresponding to that first interface so that the projection source can respond accordingly; in this way, the user can reversely control the projection source using the input device of the projection destination.
In another possible implementation, after the first terminal sends the data of the second operation to the second terminal, the method may further include: the first terminal receives updated data from the second terminal; according to the updated data, the first terminal updates the first interface corresponding to the second terminal to a fourth interface, the content of the fourth interface being a mirror of the content of the third interface, or the same as part of the content of the third interface. After the interface of the projection source changes, the data of the updated interface can be sent to the first terminal so that the first terminal can update the corresponding interface it displays.
In another possible implementation, the first terminal also establishes a connection with a third terminal; the method may further include: the first terminal sends the data received from the multiple second terminals to the third terminal, for the third terminal to display the multiple first interfaces. As an example, the third terminal may be a terminal conducting a MeeTime (畅连) call with the first terminal; by sending the data from the projection sources to the third terminal, the third terminal in the call with the first terminal can also display the interfaces of the projection sources, enabling cross-region working. This cross-region way of working can improve meeting efficiency and save the communication costs of cross-region work.
In another possible implementation, the method may further include: the first terminal receives video data from the third terminal; while displaying the multiple first interfaces, the first terminal displays a video call picture on the first terminal according to the video data of the third terminal. In another possible implementation, the method may further include: the first terminal collects video data and sends it to the third terminal, for the third terminal to display the video call picture while displaying the multiple first interfaces. The terminals in the two regions can not only show the video call picture but also display the locally projected content and the content projected by the peer end, further improving meeting efficiency and saving the communication costs of cross-region work.
In a second aspect, an embodiment of this application provides a screen projection method, which may be applied to a second terminal connected to a first terminal. The method may include: the second terminal displays a second interface; the second terminal receives a user operation; in response to the user operation, the second terminal sends data of the second interface to the first terminal, for the first terminal to display a first interface corresponding to the second terminal, the first terminal also displaying first interfaces corresponding to other second terminals; wherein the content of the first interface is a mirror of the content of the second interface displayed by the corresponding second terminal, or is the same as part of the content of the second interface displayed by the corresponding second terminal.
With the above technical solution, multiple second terminals serving as projection sources can, triggered by the user, send the data of their current interfaces to the first terminal serving as the projection destination, so that the first terminal can display multiple first interfaces on its display screen according to the data sent by the multiple second terminals, the multiple first interfaces corresponding one-to-one to the multiple second terminals. The content of a first interface is a mirror of the content of the second interface displayed by the corresponding second terminal, or is the same as part of that content. Many-to-one projection from multiple projection sources to one projection destination is realized. In this way, in scenarios such as meetings and launch-event presentations, multiple mobile phones and tablets can project the content on their display screens (such as slides or playing videos) onto the same large-screen device, realizing many-to-one projection. This improves the efficiency of collaborative use of multiple devices and improves the user experience.
In a possible implementation, the above user operation may be an operation of starting projection; before the second terminal sends the data of the second interface to the first terminal, the method may further include: the second terminal obtains the data of the second interface; wherein, in the case that the content of the first interface is a mirror of the content of the second interface, the data of the second interface is screen-recording data of the second interface; in the case that the content of the first interface is the same as part of the content of the second interface, the data of the second interface is screen-recording data of the layer where a predetermined element of the second interface is located. In a wireless projection scenario, multiple second terminals can project their currently displayed interfaces, or part of the content thereof, onto the first terminal for display, realizing many-to-one projection.
In another possible implementation, in the case that the content of the first interface is the same as part of the content of the second interface, before the second terminal obtains the data of the second interface, the method may further include: the second terminal displays a configuration interface, the configuration interface including a layer filtering setting option; the second terminal receives the user's selection of the layer filtering setting option. After receiving the user's selection of the layer filtering setting option, the second terminal serving as the projection source can project the layer where some elements of the current interface are located (such as an element dragged by the user, or a predetermined element) to the projection destination, realizing layer filtering. In this way, private information on the projection source is not projected to the projection destination, protecting the user's privacy.
In another possible implementation, the second terminal receiving the user operation may include: the second terminal receives a drag operation of the user on the second interface or on an element in the second interface; before the second terminal sends the data of the second interface to the first terminal, the method may further include: the second terminal determines that the user's drag intention is to drag across devices; the second terminal obtains the data of the second interface. In a cross-device drag scenario, the user can trigger projection by dragging the interface of the second terminal or an element in that interface.
In another possible implementation, in the case that a drag operation of the user on an element in the second interface is received, the element may be a video component, a floating window, a picture-in-picture, or a freeform window, and the data of the second interface is screen-recording data of the layer where the element is located; or the element is a user interface (UI) control in the second interface, and the data of the second interface is an instruction stream of the second interface and an identifier of the UI control, or the data of the second interface is drawing instructions and an identifier of the UI control. In the scenario of projecting UI controls in an interface, the instruction stream corresponding to the content to be projected can be sent to the projection destination to realize projection; this can improve the display effect of the projection interface at the projection destination and also save transmission bandwidth.
In a third aspect, an embodiment of this application provides a screen projection apparatus, which may be applied to a first terminal connected to multiple second terminals. The apparatus may include: a receiving unit, configured to receive data from each of the multiple second terminals; and a display unit, configured to display multiple first interfaces on the first terminal according to the data received from the multiple second terminals, the multiple first interfaces corresponding one-to-one to the multiple second terminals; wherein the content of a first interface is a mirror of the content of the second interface displayed by the corresponding second terminal, or is the same as part of the content of the second interface displayed by the corresponding second terminal.
In a possible implementation, the apparatus may further include: a creating unit, configured to create multiple drawing components, the multiple drawing components corresponding one-to-one to the multiple second terminals, a drawing component being a view or a canvas; the display unit displaying the multiple first interfaces on the first terminal according to the data received from the multiple second terminals may include: drawing, on the multiple drawing components and according to the data received from the multiple second terminals, the first interface of the corresponding second terminal, so as to display the multiple first interfaces on the first terminal.
In another possible implementation, the apparatus may further include: a configuration unit, configured to configure multiple decoding parameters, the multiple decoding parameters corresponding one-to-one to the multiple second terminals; and a decoding unit, configured to decode the data received from the corresponding second terminal according to the multiple decoding parameters.
In another possible implementation, the apparatus may further include: an obtaining unit, configured to obtain connection information of the multiple second terminals, the connection information being used for the first terminal to establish a connection with the corresponding second terminal; wherein the one-to-one correspondence between the multiple drawing components and the multiple second terminals includes a one-to-one correspondence between the multiple drawing components and the connection information of the multiple second terminals; and the one-to-one correspondence between the multiple decoding parameters and the multiple second terminals includes a one-to-one correspondence between the multiple decoding parameters and the connection information of the multiple second terminals.
In another possible implementation, the apparatus may further include: an input unit, configured to receive a first operation of the user on a window of a first interface; the display unit being further configured to, in response to the first operation, shrink, enlarge, or close the window, or switch the focus window.
In another possible implementation, the input unit is further configured to receive a second operation of the user on the first interface corresponding to a second terminal; the apparatus may further include: a sending unit, configured to send data of the second operation to the second terminal, for the second terminal to display a third interface according to the second operation.
In another possible implementation, the receiving unit is further configured to receive updated data from the second terminal; the display unit is further configured to update, according to the updated data, the first interface corresponding to the second terminal to a fourth interface, the content of the fourth interface being a mirror of the content of the third interface, or the same as part of the content of the third interface.
In another possible implementation, the first terminal also establishes a connection with a third terminal; the sending unit is further configured to send the data received from the multiple second terminals to the third terminal, for the third terminal to display the multiple first interfaces.
In another possible implementation, the receiving unit is further configured to receive video data from the third terminal; the display unit is further configured to display, while the first terminal displays the multiple first interfaces, a video call picture on the first terminal according to the video data of the third terminal.
In another possible implementation, the apparatus may further include: a collecting unit, configured to collect video data; the sending unit being further configured to send the video data to the third terminal, for the third terminal to display the video call picture while displaying the multiple first interfaces.
In a fourth aspect, an embodiment of this application provides a screen projection apparatus, which may be applied to a second terminal connected to a first terminal. The apparatus may include: a display unit, configured to display a second interface; an input unit, configured to receive a user operation; and a sending unit, configured to, in response to the user operation, send data of the second interface to the first terminal, for the first terminal to display a first interface corresponding to the second terminal, the first terminal also displaying first interfaces corresponding to other second terminals; wherein the content of the first interface is a mirror of the content of the second interface displayed by the corresponding second terminal, or is the same as part of the content of the second interface displayed by the corresponding second terminal.
In a possible implementation, the user operation is an operation of starting projection; the apparatus may further include: an obtaining unit, configured to obtain the data of the second interface; wherein, in the case that the content of the first interface is a mirror of the content of the second interface, the data of the second interface is screen-recording data of the second interface; in the case that the content of the first interface is the same as part of the content of the second interface, the data of the second interface is screen-recording data of the layer where a predetermined element of the second interface is located.
In another possible implementation, the display unit is further configured to display a configuration interface, the configuration interface including a layer filtering setting option; the input unit is further configured to receive the user's selection of the layer filtering setting option.
In another possible implementation, the input unit receiving the user operation may include: the input unit receives a drag operation of the user on the second interface or on an element in the second interface; the apparatus may further include: a determining unit, configured to determine that the user's drag intention is to drag across devices; the obtaining unit being further configured to obtain the data of the second interface.
In another possible implementation, in the case that a drag operation of the user on an element in the second interface is received, the element may be a video component, a floating window, a picture-in-picture, or a freeform window, and the data of the second interface is screen-recording data of the layer where the element is located; or the element may be a user interface (UI) control in the second interface, and the data of the second interface is an instruction stream of the second interface and an identifier of the UI control, or the data of the second interface is drawing instructions and an identifier of the UI control.
第五方面，本申请实施例提供一种投屏方法，应用于第一终端，该方法可以包括：第一终端显示第一应用的界面；第一终端接收第一操作；响应于第一操作，第一终端向第二终端发送第一应用的界面的数据，用于第二终端显示第一界面，第一界面的内容是第一应用的界面内容的镜像，或第一界面的内容与第一应用的界面的部分内容相同；第一终端接收第二操作；响应于第二操作，第一终端显示第二应用的界面；第一终端接收第三操作；在第一终端向第二终端投射第一应用的界面的情况下，响应于第三操作，第一终端向第三终端发送第二应用的界面的数据，用于第三终端显示第二界面，第二界面的内容是第二应用的界面内容的镜像，或第二界面的内容与第二应用的界面的部分内容相同。
采用上述技术方案,作为投屏源端的第一终端可通过创建多路媒体流,实现该第一终端的多个应用的内容到一个或多个投屏目的端的投射,满足了多任务并行的需求,这样可提高终端的使用效率,提升用户的使用体验。
在一种可能的实现方式中,该方法还可以包括:第一终端创建第一虚拟显示;第 一终端将第一应用的界面或第一应用的界面中第一元素绘制到第一虚拟显示,以获取第一应用的界面的数据;第一终端创建第二虚拟显示;第一终端将第二应用的界面或第二应用的界面中第二元素绘制到第二虚拟显示,以获取第二应用的界面的数据。这样,通过创建虚拟显示,并基于虚拟显示对投屏源端的内容进行屏幕录制,以实现投屏源端内容在投屏目的端的显示,支持镜像投屏和异源投屏。
在另一种可能的实现方式中,该方法还可以包括:第一终端向第二终端发送第一应用的音频数据,用于第二终端输出对应音频;第一终端向第三终端发送第二应用的音频数据,用于第三终端输出对应音频。这样,支持投屏源端音频数据到投屏目标端的投射显示。
在另一种可能的实现方式中,该方法还可以包括:第一终端创建第一录音(AudioRecord)对象,基于第一AudioRecord对象录制获得第一应用的音频数据;第一终端创建第二AudioRecord对象,基于第二AudioRecord对象录制获得第二应用的音频数据。这样,通过创建AudioRecord对象,并基于AudioRecord对象对投屏源端的音频进行录制,以实现投屏源端音频数据在投屏目的端的输出。
在另一种可能的实现方式中,第二终端与第三终端相同。
第六方面,本申请实施例提供一种投屏方法,应用于第二终端,该方法可以包括:第二终端接收来自第一终端的第一应用的界面的数据;第二终端显示第一界面,第一界面的内容是第一应用的界面内容的镜像,或第一界面的内容与第一应用的界面的部分内容相同;第二终端接收来自第一终端的第二应用的界面的数据;第二终端显示第三界面,第三界面包括第一界面的内容和第二界面的内容,第二界面的内容是第二应用的界面内容的镜像,或第二界面的内容与第二应用的界面的部分内容相同。
采用上述技术方案,作为投屏目的端的第二终端可接收来自作为投屏源端的第一终端的多路媒体流,实现第一终端的多个应用的内容到第二终端的投射,满足了多任务并行的需求,这样可提高终端的使用效率,提升用户的使用体验。
第七方面，本申请实施例提供一种投屏装置，应用于第一终端，该装置可以包括：显示单元，用于显示第一应用的界面；输入单元，用于接收第一操作；发送单元，用于响应于第一操作，向第二终端发送第一应用的界面的数据，用于第二终端显示第一界面，第一界面的内容是第一应用的界面内容的镜像，或第一界面的内容与第一应用的界面的部分内容相同；输入单元，还用于接收第二操作；显示单元，还用于响应于第二操作，显示第二应用的界面；输入单元，还用于接收第三操作；发送单元，还用于在第一终端向第二终端投射第一应用的界面的情况下，响应于第三操作，向第三终端发送第二应用的界面的数据，用于第三终端显示第二界面，第二界面的内容是第二应用的界面内容的镜像，或第二界面的内容与第二应用的界面的部分内容相同。
在一种可能的实现方式中,该装置还可以包括:创建单元,用于创建第一虚拟显示;绘制单元,用于将第一应用的界面或第一应用的界面中第一元素绘制到第一虚拟显示,以获取第一应用的界面的数据;创建单元,还用于创建第二虚拟显示;绘制单元,还用于将第二应用的界面或第二应用的界面中第二元素绘制到第二虚拟显示,以获取第二应用的界面的数据。
在另一种可能的实现方式中,发送单元,还用于向第二终端发送第一应用的音频 数据,用于第二终端输出对应音频;向第三终端发送第二应用的音频数据,用于第三终端输出对应音频。
在另一种可能的实现方式中,创建单元,还用于创建第一AudioRecord对象;该装置还可以包括:录制单元,用于基于第一AudioRecord对象录制获得第一应用的音频数据;创建单元,还用于创建第二AudioRecord对象;录制单元,还用于基于第二AudioRecord对象录制获得第二应用的音频数据。
在另一种可能的实现方式中,第二终端与第三终端相同。
第八方面,本申请实施例提供一种投屏装置,应用于第二终端,该装置可以包括:接收单元,用于接收来自第一终端的第一应用的界面的数据;显示单元,用于显示第一界面,第一界面的内容是第一应用的界面内容的镜像,或第一界面的内容与第一应用的界面的部分内容相同;接收单元,还用于接收来自第一终端的第二应用的界面的数据;显示单元,还用于显示第三界面,第三界面包括第一界面的内容和第二界面的内容,第二界面的内容是第二应用的界面内容的镜像,或第二界面的内容与第二应用的界面的部分内容相同。
第九方面,本申请实施例提供一种投屏装置,该装置可以包括:处理器;用于存储处理器可执行指令的存储器;其中,处理器被配置为执行指令时使得投屏装置实现如第一方面或第一方面可能的实现方式中任一项所述的方法,或者使得投屏装置实现如第二方面或第二方面可能的实现方式中任一项所述的方法,或者使得投屏装置实现如第五方面或第五方面可能的实现方式中任一项所述的方法,或者使得投屏装置实现如第六方面所述的方法。
第十方面,本申请实施例提供一种计算机可读存储介质,其上存储有计算机程序指令,计算机程序指令被电子设备执行时使得电子设备实现如第一方面或第一方面可能的实现方式中任一项所述的方法,或者实现如第二方面或第二方面可能的实现方式中任一项所述的方法,或者使得投屏装置实现如第五方面或第五方面可能的实现方式中任一项所述的方法,或者使得投屏装置实现如第六方面所述的方法。
第十一方面，本申请实施例提供一种投屏系统，该系统可以包括第一终端和多个第二终端；多个第二终端中的每个第二终端，用于显示第二界面；在接收到用户操作后，向第一终端发送第二界面的数据；第一终端，用于从多个第二终端中每个第二终端接收数据；根据从多个第二终端接收的数据，在第一终端上显示多个第一界面，多个第一界面与多个第二终端一一对应；其中，第一界面的内容是对应第二终端显示的第二界面内容的镜像，或第一界面的内容与对应第二终端显示的第二界面的部分内容相同。
第十二方面,本申请实施例提供一种电子设备(如上述第一终端或第二终端),该电子设备包括显示屏,一个或多个处理器和存储器;显示屏,处理器和存储器耦合;存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当计算机指令被电子设备执行时,使得该电子设备执行如第一方面或第一方面可能的实现方式中任一项所述的方法,或者执行如第二方面或第二方面可能的实现方式中任一项所述的方法,或者使得投屏装置实现如第五方面或第五方面可能的实现方式中任一项所述的方法,或者使得投屏装置实现如第六方面所述的方法。
第十三方面,本申请实施例提供一种计算机程序产品,包括计算机可读代码,或者承载有计算机可读代码的非易失性计算机可读存储介质,当所述计算机可读代码在电子设备中运行时,电子设备中的处理器执行第一方面或第一方面的可能的实现方式中任一项所述的方法,或者执行如第二方面或第二方面可能的实现方式中任一项所述的方法,或者使得投屏装置实现如第五方面或第五方面可能的实现方式中任一项所述的方法,或者使得投屏装置实现如第六方面所述的方法。
可以理解地，上述提供的第三方面及其任一种可能的实现方式所述的投屏装置，第四方面及其任一种可能的实现方式所述的投屏装置，第七方面所述的投屏装置，第八方面所述的投屏装置，第九方面所述的投屏装置，第十方面所述的计算机可读存储介质，第十一方面所述的投屏系统，第十二方面所述的电子设备，及第十三方面所述的计算机程序产品所能达到的有益效果，可参考如第一方面或第二方面或第五方面或第六方面及其任一种可能的实现方式中的有益效果，此处不再赘述。
附图说明
图1A为本申请实施例提供的一种场景示意图;
图1B为本申请实施例提供的一种***架构的简化示意图;
图2为本申请实施例提供的一种手机的结构示意图;
图3为本申请实施例提供的一种软件架构的组成示意图;
图4为本申请实施例提供的一种投屏方法的流程示意图;
图5为本申请实施例提供的一种显示界面示意图;
图6为本申请实施例提供的另一种投屏方法的流程示意图;
图7为本申请实施例提供的另一种显示界面示意图;
图8为本申请实施例提供的又一种显示界面示意图;
图9为本申请实施例提供的又一种投屏方法的流程示意图;
图10为本申请实施例提供的又一种显示界面示意图;
图11为本申请实施例提供的又一种显示界面示意图;
图12为本申请实施例提供的又一种显示界面示意图;
图13为本申请实施例提供的又一种显示界面示意图;
图14为本申请实施例提供的又一种显示界面示意图;
图15为本申请实施例提供的又一种投屏方法的流程示意图;
图16为本申请实施例提供的又一种显示界面示意图;
图17为本申请实施例提供的又一种显示界面示意图;
图18为本申请实施例提供的又一种显示界面示意图;
图19为本申请实施例提供的又一种显示界面示意图;
图20为本申请实施例提供的又一种显示界面示意图;
图21为本申请实施例提供的又一种显示界面示意图;
图22为本申请实施例提供的又一种显示界面示意图;
图23为本申请实施例提供的又一种显示界面示意图;
图24为本申请实施例提供的又一种显示界面示意图;
图25为本申请实施例提供的又一种显示界面示意图;
图26为本申请实施例提供的又一种显示界面示意图;
图27为本申请实施例提供的一种投屏装置的组成示意图;
图28为本申请实施例提供的另一种投屏装置的组成示意图;
图29为本申请实施例提供的另一种软件架构的组成示意图;
图30为本申请实施例提供的又一种显示界面示意图;
图31为本申请实施例提供的一种数据传输示意图;
图32为本申请实施例提供的另一种数据传输示意图;
图33为本申请实施例提供的又一种数据传输示意图;
图34为本申请实施例提供的一种芯片系统的组成示意图。
具体实施方式
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本申请实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
近年来,消费类电子产品呈现出了爆发式增长,为了能够给用户提供更佳的使用体验,如图1A所示,华为提出了“1+8+N”第五代移动通信技术(fifth-generation,5G)全场景战略。其中,“1”指智能手机,“8”指8个大行星,“N”指N颗卫星。“8个大行星”分别是电视(television,TV)、音响、眼镜、手表、车机、耳机、个人电脑(personal computer,PC)、平板(PAD)。围绕在8个大行星的周边是合作伙伴开发的N颗卫星,分别是移动办公、智能家居、运动健康、影音娱乐及智慧出行各大板块的延伸业务及生态。
目前，在上述场景下，为了提高办公效率，用户可将多个终端连接起来一起配合使用。例如，在两个终端连接后，利用多屏协同可实现这两个终端间的协同办公。多屏协同可利用镜像投屏方式，将一个终端显示的界面投射到另一个终端的显示屏上显示。在本实施例中，可以将投射其显示界面的终端称为投屏源端，或称为源(source)端，接收投屏源端的投射并显示投屏源端显示界面的终端称为投屏目的端，或称为接收(sink)端。将投屏目的端上显示的投屏源端投射的界面称为投屏界面，将投屏目的端用于显示投屏界面的窗口称为投屏窗口。
当前利用镜像投屏方式仅能实现一个终端的显示界面到另一个终端的显示，即仅能实现一对一的投屏。但是，在如开会、发布会演示等场景下，可能需要多个终端的显示界面在同一个终端(如，大屏设备)上的呈现，即存在多对一的投屏需求。在相关技术中，可借助无线投屏器(如AWIND奇机无线投影网关)实现多个终端的界面到一个终端显示屏上的投射。但是，这种实现多对一投屏的技术需要借助对应的无线投屏器。
本申请实施例提供一种投屏方法,可应用于投屏场景下。采用本实施例提供的方法,无需借助其他设备,可实现多个终端的显示界面到同一个终端显示屏上的显示,满足了开会、发布会演示等场景中的多对一投屏需求,提高了多终端协同使用的效率,提高了用户的使用体验。
另外,当前实现多屏协同采用的技术主要是DLNA、Miracast及AirPlay。
其中，DLNA的宗旨是“随时随地享受音乐、照片和视频”。DLNA定义了两种设备，分别为数字媒体服务器(digital media server,DMS)和数字媒体播放器(digital media player,DMP)。DMS提供了媒体流的获取、录制、存储以及作为内容源头的能力，如向多种DMP提供内容以及与网络中其他设备共享内容。DMS可以看作多媒体网盘。DMP可查找并播放由DMS提供的任何媒体文件。通常电脑和电视均支持DLNA，需要用户手动开启。DLNA没有连接状态，默认连接到同一局域网内即可连接成功。目前，DLNA仅支持多媒体文件(如图片，音频，视频)投送，投送之后DMS显示控制界面，不同步播放。另外，DLNA只是将手机的图片、音频或视频投送到大屏上显示或播放，对于在线视频，需要第三方应用支持，并且需要电视(盒)或大屏支持DLNA。由于DLNA本质上是推送一个资源的统一资源定位符(uniform resource locator,URL)，因此当多个设备作为DMS投送内容到同一作为DMP的目标设备时，采用抢占式，即哪个设备最后投送内容，目标设备就播放它的媒体文件。
Miracast是由无线保真(wireless fidelity,Wi-Fi)联盟于2012年所制定,以Wi-Fi直连为基础的无线显示标准。Miracast是一种镜像投屏,即投屏源端和投屏目的端的界面完全相同,适合远程共享。支持此标准的设备可通过无线方式分享视频画面。例如,手机可通过Miracast将影片或照片直接在电视等大屏上播放而无需受到连接线缆长度的影响。但,Miracast需要配件支持。且并不是所有的设备都支持Miracast。如,PC从Windows 8.1开始才支持Miracast,安装低版本Windows***的PC并不支持。另外,镜像投屏需要发送大量实时编码的数据流,对网络质量有较高的要求,在丢包严重的WiFi环境下,会出现卡顿、花屏等现象。
AirPlay是苹果开发的一种无线技术，可以通过Wi-Fi将iOS设备上的图片、音频或视频通过无线的方式传输到支持AirPlay的设备上。AirPlay具备DLNA所没有的镜像功能，可将如手机，平板等iOS设备上的画面无线传输到电视上。也就是说，iOS设备上显示什么，电视屏幕上就显示什么，且不仅限于图片和视频。但，AirPlay只适用于苹果认证过的设备或授权的合作伙伴的设备。另外，AirPlay不开源，与苹果设备的交互也有局限性。
随着用户使用的如手机等终端越来越多样化，用户对多任务并行的需求越来越迫切。而目前，由于手机等终端的操作系统并不是真正的多任务系统，因此无法做到多个应用(App)同时运行，也即无法满足多任务并行的需求。鉴于此，考虑可借助其他手机、PAD、电视、PC等各种带屏设备，将手机上多个应用对应内容投射到其他设备上显示，或者说通过将手机上的多个应用“转移”到其他带屏设备上，以实现多任务并行的需求。但，上述用于实现多屏协同的技术仅能实现设备的一个应用对应内容到另一个设备的投射显示，即仅能实现一对一投屏，或者说仅能实现设备的一个应用到其他设备上的“转移”，并无法实现真正的多任务并行。
本申请实施例提供的投屏方法,终端还可通过创建多路媒体流,实现该终端的一个或多个应用的内容到其他终端上的投射显示,以满足多任务并行的需求,提高终端的使用效率,提升用户的使用体验。
下面将结合附图对本申请实施例的实施方式进行详细描述。
图1B示出的是可以应用本申请实施例的系统架构的简化示意图。如图1B所示，该系统架构可以包括：第一终端101和至少一个第二终端102。
其中,针对每个第二终端102,其与第一终端101可通过有线或无线的方式建立连接。基于建立的连接,第一终端101和第二终端102可配合一起使用。在本实施例中,第一终端101和第二终端102采用无线方式建立连接时采用的无线通信协议可以为Wi-Fi协议、蓝牙(Bluetooth)协议、ZigBee协议、近距离无线通信(Near Field Communication,NFC)协议等,还可以是各种蜂窝网协议,在此不做具体限制。不同第二终端102与第一终端101建立连接时采用的无线通信协议可以相同,也可以不同。
在本申请一些实施例中,在第一终端101与多个第二终端102连接后,第一终端101和多个第二终端102中的投屏源端可将其显示屏上显示的界面或界面中的部分元素投射到投屏目的端显示屏上显示。如,以第一终端101作为投屏目的端,多个第二终端102均作为投屏源端为例。多个第二终端102中的每个第二终端102可将其显示屏上显示的界面或界面中的部分元素均投射到第一终端101的显示屏上显示。如,第一终端101可将多个第二终端102的界面聚合后显示在第一终端101的显示屏上供用户查看。用户还可使用第一终端101的输入设备,在第一终端101显示屏上显示的各第二终端102对应的投屏界面上进行操作,以实现对对应第二终端102中显示的实际界面的操作。
在本申请其他一些实施例中,在第一终端101与第二终端102连接后,第一终端101和第二终端102中的投屏源端可通过创建多路媒体流,将其一个或多个应用的内容投射到投屏目的端显示屏上显示。如,以第一终端101作为投屏源端,至少一个第二终端102作为投屏目的端为例。第一终端101可通过创建多路媒体流,将第一终端101中一个或多个应用的内容投射到至少一个第二终端102的显示屏上显示,以满足多任务并行的需求。如第一终端101可通过创建多路媒体流,将第一终端101中多个应用的内容投射到一个或多个第二终端102的显示屏上显示。又如,第一终端101可通过创建多路媒体流,将第一终端101中一个应用的内容投射到多个第二终端102的显示屏上显示。
需要说明的是，本申请实施例中的终端，如上述第一终端101，又如上述第二终端102，可以为手机，平板电脑，手持计算机，个人电脑(personal computer,PC)，蜂窝电话，个人数字助理(personal digital assistant,PDA)，可穿戴式设备(如智能手表)，车载电脑，游戏机，以及增强现实(augmented reality,AR)\虚拟现实(virtual reality,VR)设备等，本实施例对终端的具体形式不做特殊限制。另外，本实施例提供的技术方案除了可以应用于上述终端(或者说移动终端)外，还可以应用于其他电子设备，如智能家居设备(如电视机)等。其中，第一终端101和第二终端102的设备形态可以相同，也可以不同。当上述系统架构包括多个第二终端102时，多个第二终端102的设备形态可以相同，也可以不同，本实施例在此不做限制。作为一种示例，第一终端101可以是PC，电视机等大屏设备，第二终端102可以为手机，平板电脑等移动设备。作为又一种示例，第一终端101可以是手机，平板等移动设备，第二终端102可以为PC，电视等大屏设备。其中，图1B中以第一终端101为电视机，多个第二终端102均为手机为例示出，但本实施例不限于此。
在本实施例中，以终端为手机为例。请参考图2，为本申请实施例提供的一种手机的结构示意图。以下实施例中的方法可以在具有上述硬件结构的手机中实现。
如图2所示,手机可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193以及显示屏194等。可选的,手机还可以包括移动通信模块150,用户标识模块(subscriber identification module,SIM)卡接口195等。
其中,传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本实施例示意的结构并不构成对手机的具体限定。在另一些实施例中,手机可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以是手机的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器，用于存储指令和数据。在一些实施例中，处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据，可从所述存储器中直接调用。避免了重复存取，减少了处理器110的等待时间，因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,SIM接口,和/或USB接口等。
充电管理模块140用于从充电器接收充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为手机供电。电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141也可接收电池142的输入为手机供电。
手机的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。手机中的每个天线可用于覆盖单个 或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
当手机包括移动通信模块150时,移动通信模块150可以提供应用在手机上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在手机上的包括无线局域网(wireless local area networks,WLAN)(如Wi-Fi网络),蓝牙(bluetooth,BT),全球导航卫星***(global navigation satellite system,GNSS),调频(frequency modulation,FM),NFC,红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,手机的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得手机可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯***(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位***(global positioning system,GPS),全球导航卫星***(global navigation satellite system,GLONASS),北斗卫星导航***(beidou navigation satellite system,BDS),准天顶卫星***(quasi-zenith satellite system,QZSS)和/或星基增强***(satellite based augmentation systems,SBAS)。
手机通过GPU，显示屏194，以及应用处理器等实现显示功能。GPU为图像处理的微处理器，连接显示屏194和应用处理器。处理器110可包括一个或多个GPU，其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,手机可以包括1个或N个显示屏194,N为大于1的正整数。
手机可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。在一些实施例中,手机可以包括1个或N个摄像头193,N为大于1的正整数。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展手机的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码，所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令，从而执行手机的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中，存储程序区可存储操作系统，至少一个功能所需的应用程序(比如声音播放功能，图像播放功能等)等。存储数据区可存储手机使用过程中所创建的数据(比如音频数据，电话本等)等。此外，内部存储器121可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件，闪存器件，通用闪存存储器(universal flash storage,UFS)等。
手机可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。当有触摸操作作用于显示屏194,手机根据压力传感器180A检测所述触摸操作强度。手机也可以根据压力传感器180A的检测信号计算触摸的位置。
陀螺仪传感器180B可以用于确定手机的运动姿态。气压传感器180C用于测量气压。磁传感器180D包括霍尔传感器。手机可以利用磁传感器180D检测翻盖皮套的开合。加速度传感器180E可检测手机在各个方向上(一般为三轴)加速度的大小。距离传感器180F,用于测量距离。手机可以利用接近光传感器180G检测用户手持手机贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。环境光传感器180L用于感知环境光亮度。指纹传感器180H用于采集指纹。手机可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。温度传感器180J用于检测温度。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用 于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于手机的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
当手机包括SIM卡接口195时,SIM卡接口195用于连接SIM卡。SIM卡可以通过***SIM卡接口195,或从SIM卡接口195拔出,实现和手机的接触和分离。手机可以支持1个或N个SIM卡接口,N为大于1的正整数。手机通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,手机采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在手机中,不能和手机分离。
结合图1B,本申请实施例示例性说明第一终端101和第二终端102的软件架构。其中,以第一终端101作为投屏目的端,第二终端102作为投屏源端为例。请参考图3,为本申请实施例提供的一种软件架构的组成示意图。
作为一种示例,第一终端101和第二终端102的软件架构均可以包括:应用层和框架层(framework,FWK)。
其中,如图3所示,第一终端101可以包括:网络管理模块,解码模块和窗口管理模块。第一终端101包括的各模块可以包含于第一终端101的软件架构的任意一层。如第一终端101的网络管理模块,解码模块和窗口管理模块均包含于第一终端101的框架层,本实施例在此不做具体限制。第一终端101还可以包括应用程序,可以包含于上述应用层。应用程序可以包括投屏应用,该投屏应用可辅助作为投屏目的端的第一终端101实现多对一投屏功能。
第二终端102可以包括:网络管理模块,编码模块和设置模块。第二终端102包括的各模块可以包含于第二终端102的软件架构的任意一层。如第二终端102的网络管理模块和编码模块包含于第二终端102的框架层。第二终端102的设置模块包含于第二终端102的应用层,本实施例在此不做具体限制。第二终端102还可以包括应用程序,可以包含于上述应用层。应用程序可以包括投屏应用,该投屏应用可辅助作为投屏源端的第二终端102实现多对一投屏功能。
在本实施例中,第一终端101的网络管理模块可负责第一终端101与第二终端102之间传输通道的建立。其中,第一终端101的网络管理模块可支持第一终端101与多个第二终端102之间传输通道的建立,即支持1对N的建连。第一终端101的解码模块可负责对来自作为投屏源端的第二终端102的数据(如称为投屏数据,还可以称为录屏数据)进行解码。该解码模块支持多路解码。如,对于来自不同第二终端102的数据,第一终端101的解码模块可使用不同的解码参数对相应数据进行解码。第一终端101的窗口管理模块可负责根据解码后的多路数据,在第一终端101上呈现多个投屏窗口。该多个投屏窗口与多个第二终端102一一对应。投屏窗口中的内容与对应第二终端102所呈现界面的全部或部分内容相同。第一终端101的窗口管理模块还可负 责动态增加、减少第一终端101上的投屏窗口,根据用户操作对第一终端101上呈现的投屏窗口进行缩小,放大,切换焦点窗口等。
第二终端102的网络管理模块可负责第二终端102与第一终端101之间传输通道的建立。第二终端102的编码模块可负责对当前显示界面或界面中的部分元素对应数据(如称为投屏数据)进行编码。第二终端102的设置模块可负责根据用户的设置对音视频参数进行设置,音视频参数可以包括分辨率,横竖屏,同源/异源,图层过滤等。其中,同源/异源可以是指第二终端102投屏后是否在第二终端102继续显示当前界面,如同源则指第二终端102投屏后在第二终端102继续显示当前界面,异源则指第二终端102投屏后不在第二终端102继续显示当前界面。
以下结合图1B和图3,以第一终端101为电视机,多个第二终端102均为手机(如多个第二终端102包括:手机1和手机2)为例,结合附图对本申请实施例提供的投屏方法进行详细介绍。
图4为本申请实施例提供的一种投屏方法的流程示意图。如图4所示,该方法可以包括以下S401-S406。
S401、手机1与电视机建立连接,手机2与电视机建立连接。
在用户想将多个终端(如称为第二终端,如上述手机1和手机2)的显示界面投射到同一个终端(如称为第一终端,如上述电视机)上显示,实现多对一的投屏时,可将这多个第二终端分别与该第一终端建立连接。
其中,第一终端与第二终端建立连接的方式可以有多种。在一些实施例中,第一终端与第二终端可以采用有线的方式建立连接。例如,手机1与电视机可通过数据线建立有线连接。又例如,手机2与电视机可通过数据线建立有线连接。
在其他一些实施例中，第一终端与第二终端可以采用无线的方式建立连接。其中，终端之间采用无线方式建立连接有两点要求，一个是终端之间互相知晓对端的连接信息，另一个是各终端具有传输能力。连接信息可以是终端的设备标识，如互联网协议(internet protocol,IP)地址，端口号或终端登录的账号等。终端登录的账号可以是运营商为用户提供的账号，如华为账号等。终端登录的账号还可以为应用账号，如微信账号、优酷账号等。终端具有传输能力可以是近场通信能力，也可以是长距离通信能力。也就是说，终端间，如手机1(或手机2)与电视机建立连接采用的无线通信协议可以是如Wi-Fi协议或蓝牙协议或NFC协议等近场通信协议，也可以是蜂窝网协议。
需要说明的是,不同第二终端与第一终端建立连接的方式可以相同,也可以不同,如电视机与手机1建立连接的方式,和与手机2建立连接的方式可以相同,也可以不同,本实施例在此不做具体限制。
示例性的,结合图1B和图3,在本实施例中,以多个第二终端均与第一终端采用无线方式建立连接为例。用户想实现多个第二终端到第一终端的多对一投屏,即多个第二终端,如手机1和手机2为投屏源端,第一终端,如电视机为投屏目的端。用户可手动开启作为投屏目的端的电视机的投屏服务功能(也可以称为多对一投屏功能)。电视机的投屏服务功能也可以自动开启,如在电视机开机启动时自动开启。在电视机的投屏服务功能开启后,电视机可获取各投屏源端(如手机1和手机2)的连接信息, 如IP地址。
其中,电视机可通过以下方式获取作为投屏源端的各第二终端的连接信息。
方式1、各第二终端的连接信息可以是用户手动输入的。示例性的,在电视机的投屏服务功能开启后,电视机可显示配置界面1,供用户输入各第二终端的连接信息,如IP地址。在用户输入各第二终端的连接信息后,电视机可获得该各第二终端的连接信息。其中,配置界面1中,供用户输入连接信息的控件(如输入框)数量可以是固定(如2个,3个或者更多,本实施例不做具体限制)的。用户可在该控件中输入第二终端的连接信息。用户输入的连接信息数量可以等于或小于控件的数量。可以理解的是,用户输入的连接信息的数量与电视机可连接的投屏源端的数量相同。
例如,以配置界面1中供用户输入连接信息的输入框包括两个为例。如图5所示,在电视机的投屏服务功能开启后,电视机可显示配置界面501,该配置界面501中包括供用户输入连接信息的输入框502和输入框503。用户可在输入框502和输入框503中分别输入作为投屏源端的第二终端的连接信息,如IP地址。如用户在输入框502中输入手机1的IP地址:192.168.43.164,在输入框503中输入手机2的IP地址:192.168.43.155。之后,电视机可从配置界面501中获得各第二终端的连接信息。例如,在用户完成输入后,可对配置界面501中的聚合按钮504进行操作,如点击操作。电视机接收到该操作后,便可从配置界面501中获取各第二终端的连接信息,如IP地址:192.168.43.164和IP地址:192.168.43.155。如,可由电视机的窗口管理模块从配置界面501中获得IP地址:192.168.43.164和IP地址:192.168.43.155。
方式2、作为投屏源端的各第二终端的连接信息可以是电视机监听到的。示例性的,手机1、手机2和电视机均打开了蓝牙功能。在电视机的投屏服务功能开启后,电视机可开始执行设备发现过程。如电视机开启蓝牙监听。在作为投屏源端的第二终端,如手机1和手机2的蓝牙功能开启的情况下,其可发送蓝牙广播。电视机可接收到第二终端发送的蓝牙广播。在电视机进行设备发现的过程中也可与发现的设备(如上述第二终端)互相交换连接信息,如IP地址。如,电视机可向第二终端,如手机1和手机2分别发送通知消息,以通知其上报自身的IP地址。之后,电视机(如电视机的网络管理模块)可接收来自第二终端,如手机1和手机2的IP地址。
可以理解的是，电视机开启蓝牙监听后，处于监听范围内的所有终端发送的蓝牙广播电视机均可监听到。在一些实施例中，电视机可向所有监听到的终端发送上述通知消息，以便其上报自身的连接信息。如电视机监听到手机1和手机2的蓝牙广播，其向手机1和手机2均发送上述通知消息。在其他一些实施例中，电视机在监听到终端的蓝牙广播后，可显示发现设备列表。该发现设备列表中包括电视机监听到的所有终端的标识，如包括手机1的标识和手机2的标识。该发现设备列表供用户选择想要与电视机连接的终端。电视机可仅向用户选择的终端发送上述通知消息。如，用户选择了发现设备列表中手机1的标识和手机2的标识，则电视机可向手机1和手机2发送上述通知消息。
电视机在获取到各第二终端的连接信息后,可根据获得的各连接信息,与对应的第二终端建立连接。其中,电视机与各第二终端建立连接时采用的无线通信协议可以相同,也可以不同,本实施例在此不做具体限制。如,电视机可根据手机1的IP地址 192.168.43.164,与手机1采用Wi-Fi协议建立连接,根据手机2的IP地址192.168.43.155,与手机2采用Wi-Fi协议建立连接。又如,电视机可根据手机1的IP地址192.168.43.164,与手机1采用Wi-Fi协议建立连接,根据手机2的IP地址192.168.43.155,与手机2采用蓝牙协议建立连接。
作为一种示例,结合图3,电视机与第二终端(如手机1或手机2)建立连接的过程可以是:电视机的网络管理模块根据IP地址,向第二终端发起网络连接,如发送建立连接请求。第二终端的网络管理模块响应该建立连接请求,完成与电视机连接的建立。需要说明的是,在电视机通过上述方式1获得各第二终端的连接信息的场景中,各第二终端的连接信息具体的是由电视机的窗口管理模块获取的。在该场景中,电视机的窗口管理模块可将获得的各第二终端的连接信息发送给电视机的网络管理模块,用于电视机的网络管理模块发起网络连接。
S402、电视机创建与手机1和手机2分别对应的视图,配置与手机1和手机2分别对应的解码参数。
可以理解的是,在第二终端与第一终端连接的情况下,作为投屏源端的终端可将其显示屏上显示的界面投射到作为投屏目的端的终端显示屏上显示。结合S401中的描述,在本实施例中,多个第二终端均作为投屏源端,第一终端作为投屏目的端,即多个第二终端均可将其显示屏上显示的界面投射到第一终端的显示屏上显示,实现多对一投屏。为了实现多对一的投屏目的,在本实施例中,作为投屏目的端的第一终端可进行如下准备工作:
针对多个第二终端中的每个第二终端,第一终端在获取到该第二终端的连接信息后,或在与该第二终端连接成功后,可创建对应的视图(view),用于渲染该第二终端投射的界面。其中,上述视图可以为本申请实施例中的绘制组件。
示例性的,结合图3,以各第二终端的连接信息是用户手动输入的为例,如图6所示,第一终端在显示配置界面1后,用户可通过配置界面1输入各第二终端的连接信息,如IP地址。第一终端,如第一终端的窗口管理模块可从配置界面1中获得该各第二终端的IP地址(如,图6中的步骤1)。在获取到各第二终端的IP地址,或与各第二终端连接成功后,第一终端可在本地保存一个数组,如称为数组1。该数组1中包括作为投屏源端的各第二终端的IP地址。第一终端可根据该数组1,为作为投屏源端的各第二终端分别创建一个对应的view,用于渲染各第二终端投射的界面。如,由第一终端的窗口管理模块创建一个view数组,该view数组可以包括:与数组1中的IP地址一一对应view(如,图6中的步骤2)。
第一终端为多个第二终端中的每个第二终端配置解码参数,用于对来自各第二终端的投屏数据进行解码。
可以理解的是,投屏源端将当前显示的界面投射到投屏目的端的具体实现可以是,投屏源端获取当前显示界面对应的数据,如称为投屏数据,并发送给投屏目的端,以便投屏目的端在其显示屏上显示对应内容。一般的,在投屏源端传输投屏数据之前,对投屏数据可进行编码,并将编码后的投屏数据传输给投屏目的端。对应的,对于投屏目的端而言,其在接收到来自投屏源端的投屏数据后,可对其进行解码。
在本实施例中,对于作为投屏源端的多个第二终端,第一终端可采用相同的解码 参数对来自不同第二终端的投屏数据进行解码,也可以采用不同的解码参数对来自不同第二终端的投屏数据进行解码。在采用不同的解码参数对来自不同第二终端的投屏数据进行解码的场景中,继续结合图6,在第一终端的窗口管理模块成功创建每个IP地址对应的view后,第一终端的窗口管理模块可在第一终端的解码模块中配置与对应IP地址关联的解码参数(如,图6中的步骤3)。如,第一终端的窗口管理模块可在view创建成功后,通过回调函数在解码模块中配置与对应IP地址关联的解码参数。这样,第一终端可为各第二终端配置不同的解码参数,用于对来自各第二终端的投屏数据进行解码。其中,上述解码参数可以是第一终端与第二终端协商出来的,也可以是预先配置在第一终端的,本实施例在此不做具体限制。
作为一种示例,上述解码参数可以包括:视频流的分配模式,视频流的规格,视频编码格式,视频编码的码率,虚拟显示(Virtual Display)的标志,是否投射音频数据等。其中,视频流的分配模式可以包括广播模式,分发模式,汇聚模式等。广播模式可以指仅启动单路视频流以低时延分布到多个投屏目的端。分发模式可以是指启动多路视频流分别分布到多个不同的投屏目的端。汇聚模式可以是指启动多路视频流分布到同一个投屏目的端。视频流的规格可以是指视频编码器的分辨率,如720P,1080P,2K等。视频的编码格式可以是H.264(高级视频编码(Advanced Video Coding,AVC)),H.265(高效率视频编码(high efficiency video coding,HEVC))等。
另外,第一终端为多个第二终端中的每个第二终端保存一个连接实例,用于接收来自该第二终端的投屏数据。
如S401中的描述,第一终端是基于获取到的(如用户输入)IP地址与各第二终端建立连接的。示例性的,继续结合图6,第一终端的窗口管理模块可将获得的各第二终端的IP地址传输给第一终端的网络管理模块,由网络管理模块根据获得的IP地址与各第二终端建立连接(如,图6中的步骤4)。在第一终端与各第二终端的连接建立成功后,第一终端,如第一终端的网络管理模块可以在本地维护一个数组,如称为数组2,该数组2中包括与数组1中的IP地址一一对应的连接实例(或者称为实例),用于接收来自对应第二终端的投屏数据。
例如,结合S401中的示例,手机1和手机2作为投屏源端,电视机作为投屏目的端。以手机1的IP地址和手机2的IP地址由用户手动输入为例,电视机在显示配置界面1(如图5中所示的配置界面501)后,用户可在配置界面1中输入手机1的IP地址和手机2的IP地址。电视机的窗口管理模块可从配置界面1中获得手机1的IP地址和手机2的IP地址。在获取到手机1和手机2的IP地址后,电视机可在本地保存一个数组1。该数组1中包括手机1的IP地址和手机2的IP地址。电视机的窗口管理模块可根据该数组1,创建一个view数组。该view数组包括:与数组1中手机1的IP地址对应view,如view 1,用于渲染手机1投射的界面,与数组1中手机2的IP地址对应view,如view 2,用于渲染手机2投射的界面。在电视机的窗口管理模块成功创建与手机1的IP地址对应的view 1后,通过回调函数在解码模块中配置与该手机1的IP地址关联的解码参数,如称为解码参数1。在成功创建与手机2的IP地址对应的view 2后,通过回调函数在解码模块中配置与该手机2的IP地址关联的解码参数,如称为解码参数2。这样,电视机可为手机1和手机2配置不同的解码参数,用于进 行投屏数据的解码。另外,在电视机与手机1和手机2的连接建立成功后,电视机的网络管理模块还可以在本地维护一个数组2。该数组2中包括:与数组1中手机1的IP地址对应的连接实例,如称为连接实例1,用于接收来自手机1的投屏数据,与数组1中手机2的IP地址对应的连接实例,如称为连接实例2,用于接收来自手机2的投屏数据。
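上述“数组1、view数组、解码参数与数组2”按IP地址一一对应的关系，可用如下Python草图示意(其中类名、字段以及以占位字符串表示的view与连接实例均为说明用的假设，并非本申请限定的实现)：

```python
from dataclasses import dataclass, field

@dataclass
class DecodeParams:
    # 解码参数的部分示例字段：视频流分配模式、编码器分辨率规格、视频编码格式
    mode: str = "汇聚模式"
    spec: str = "1080P"
    codec: str = "H.264"

@dataclass
class SourceRegistry:
    """投屏目的端按源端IP地址维护的view、解码参数与连接实例(以占位字符串示意)。"""
    views: dict = field(default_factory=dict)
    decoders: dict = field(default_factory=dict)
    conns: dict = field(default_factory=dict)

    def register(self, ip: str, params: DecodeParams) -> None:
        self.views[ip] = f"view<{ip}>"      # 与该IP一一对应的绘制组件
        self.decoders[ip] = params          # 与该IP关联的解码参数
        self.conns[ip] = f"conn<{ip}>"      # 用于接收该源端投屏数据的连接实例

    def decoder_for(self, conn: str) -> DecodeParams:
        # 网络管理模块根据接收到数据的连接实例反查源端IP，再取对应解码参数
        ip = next(ip for ip, c in self.conns.items() if c == conn)
        return self.decoders[ip]

reg = SourceRegistry()
reg.register("192.168.43.164", DecodeParams(codec="H.264"))  # 手机1
reg.register("192.168.43.155", DecodeParams(codec="H.265"))  # 手机2
```

例如，网络管理模块在某连接实例上收到投屏数据后，即可通过decoder_for找到应使用的解码参数，再交由解码模块解码。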
S403、手机1获取投屏数据1并发送给电视机。
S404、手机2获取投屏数据2并发送给电视机。
如前述实施例的描述,在第一终端和第二终端连接的情况下,第二终端可作为投屏源端将其显示屏上显示的界面投射到作为投屏目的端的第一终端显示屏上显示。
其中,在无线投屏场景下,第二终端开始投屏的条件,除了包括与第一终端成功建立连接外,还包括接收到对应的用户操作。
例如,该用户操作可以是用户选择开始投屏的操作,如可以是用户对开始投屏按钮的点击操作。该选择开始投屏的操作可以是第二终端在与第一终端建立连接之前接收到的,也可以是在与第一终端建立连接后接收到的。如果该选择开始投屏的操作是第二终端在与第一终端建立连接之前接收到的,则在第二终端与第一终端成功建立连接后,第二终端便可开始进行投屏。如果该选择开始投屏的操作是第二终端在与第一终端建立连接之后接收到的,则在第二终端与第一终端成功建立连接,且第二终端接收到该选择开始投屏的操作后,开始进行投屏。
又例如,该用户操作可以是第二终端与第一终端建立连接的过程中,用户确认投屏的操作。如,在第二终端与第一终端建立连接的过程中,第二终端可显示确认界面,以询问用户是否确认将第二终端显示界面投射到第一终端上显示。该确认投屏的操作可以是用户在该确认界面中对确认投屏按钮的点击操作。之后,在第二终端与第一终端成功建立连接后,第二终端便可开始进行投屏。
在本实施例中,作为一种示例,第二终端将其显示屏上显示的界面投射到第一终端显示屏上的具体实现可以是:第二终端通过获取第二终端当前显示界面(该界面可以为本申请实施例中的第二界面)对应的数据,如投屏数据,并发送给第一终端,用于第一终端在其显示屏上显示对应内容,从而实现第二终端的显示界面在第一终端显示屏上的投射显示。
例如,结合图7和图8,以手机1和手机2作为投屏源端,电视机作为投屏目的端,上述用户操作为无线投屏场景下,用户选择开始投屏的操作,且该操作是在手机1和手机2与电视机建立连接之前执行的为例。
用户想要将手机1和手机2各自显示的界面均投射到电视机上时,用户可分别触发手机1和手机2开始投屏。如,图7中的(a)所示,手机1当前显示界面701,图7中的(b)所示,手机2当前显示界面702。用户可分别触发手机1和手机2显示包括开始投屏按钮的界面,如称为配置界面2,以便可触发手机1和手机2开始投屏。例如,如图8所示,用户可触发手机1显示配置界面801,该配置界面801中包括开始投屏按钮802。用户可对该开始投屏按钮802进行点击操作。手机1接收用户对开始投屏按钮802的点击操作。之后,手机1可获取当前显示界面701对应的数据。如,手机1可通过手机1的显示管理模块(或称为显示管理器,DisplayManager,其可以 是手机1框架层的模块)获取手机1当前显示界面701对应数据,如称为投屏数据1。用户还可以触发手机2显示配置界面2(如,类似于图8中的配置界面801)。手机2接收用户对配置界面2中开始投屏按钮的点击操作后,可获取当前显示界面702对应的数据。如,手机2可通过手机2的显示管理模块(或称为显示管理器,其可以是手机2框架层的模块)获取手机2当前显示界面对应数据,如称为投屏数据2。另,如前述实施例的描述,作为投屏目的端的电视机根据手机1和手机2的IP地址,可与手机1和手机2分别建立连接。在电视机与手机1的连接建立成功后,手机1可将获得的上述投屏数据1发送给电视机,用于实现手机1的显示界面701在电视机显示屏上的投射显示。在电视机与手机2的连接建立成功后,手机2可将获得的上述投屏数据2发送给电视机,用于实现手机2的显示界面在电视机显示屏上的投射显示。
在一些实施例中,可采用分布式多媒体协议(Distributed Multi-media Protocol,DMP)来实现第二终端显示界面到第一终端显示屏上的投射显示。例如,在用户触发第二终端开始投屏后,第二终端可使用第二终端的显示管理模块创建虚拟显示(VirtualDisplay)。之后,第二终端可将第二终端显示屏上显示的界面的绘制移到该VirtualDisplay中。这样,第二终端便可获得对应投屏数据。之后,第二终端可将获得的投屏数据发送给第一终端。如,结合图3,第二终端在获得投屏数据后,可由第二终端的编码模块将投屏数据进行编码后发送第二终端的网络管理模块。第二终端的网络管理模块可通过与第一终端建立的连接,向第一终端发送编码后的投屏数据。
在其他一些实施例中,也可以采用无线投影(Miracast)实现第二终端显示界面在第一终端显示屏上的投射显示,即第二终端可获取第二终端显示界面的所有图层,然后将获得的所有图层整合成视频流(或者说称为投屏数据)。之后,可由第二终端的编码模块对其编码后发送给第二终端的网络管理模块,以便网络管理模块采用实时流传输协议(real time streaming protocol,RTSP)协议,通过与第一终端建立的连接发送给第一终端。
以上实施例是以将第二终端显示屏上显示界面的全部内容投射到第一终端的显示屏上显示为例进行说明的。在其他一些实施例中,也可以将第二终端显示屏上显示界面的部分内容,如界面的部分元素投射到第一终端的显示屏上显示。其中,需要投射到第一终端的元素可以是界面中的预定元素,如视频元素等。在第二终端进行投屏时,可仅将该预定元素所在图层投射到第一终端,而不投射其他图层。这样可以保护第二终端上的隐私信息不被显示到第一终端。
其中，第二终端是否仅投射该预定元素所在图层，可以是系统预先定义的。如，当第二终端显示屏上显示的界面中包括预定元素时，第二终端仅将该预定元素所在图层投射到第一终端；当第二终端显示屏上显示的界面中不包括预定元素时，则第二终端将当前界面全部内容投射到第一终端。第二终端是否仅投射预定元素所在图层，也可以是用户设置的。如，继续结合图8，配置界面801中还包括启用图层过滤的选项803(该选项803可以为本申请实施例中的图层过滤设置选项)。当用户在该配置界面801中选中该启用图层过滤的选项803时，第二终端启动图层过滤功能，即第二终端仅将预定元素所在图层投射到第一终端；当用户在该配置界面801中未选中该启用图层过滤的选项803时，第二终端将当前界面全部内容投射到第一终端。
作为一种示例,以采用DMP实现第二终端显示界面到第一终端显示屏上的投射显示,预定元素为视频元素为例,第二终端仅投射预定元素所在图层的具体实现可以包括:在第二终端创建了VirtualDisplay后,第二终端,如第二终端的显示合成(surface Flinger)模块(如可以是第二终端应用层的模块)可逐图层将第二终端显示屏上显示的界面合成到VirtualDisplay中。在逐图层进行合成的过程中,第二终端的surface Flinger模块可判断当前需合成的图层中是否包括视频元素。例如,第二终端可根据图层的图层名称的前缀确定该图层中是否包括视频元素。如视频元素所在图层的图层名称的前缀一般是Surfaceview,因此,第二终端可在确定当前需合成的图层的图层名称的前缀为Surfaceview时,确定该图层中包括视频元素,在确定当前需合成的图层的图层名称的前缀不为Surfaceview时,确定该图层中不包括视频元素。第二终端的surface Flinger模块仅将包括视频元素的图层合成到VirtualDisplay中,不包括视频元素的图层不合成到VirtualDisplay中,以获得对应的投屏数据。其中,该投屏数据中仅包括视频元素所在图层对应的数据,以实现仅将视频元素投射到第一终端的目的。
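上述按图层名称前缀(如Surfaceview)判断并过滤图层的逻辑，可用如下Python草图示意(图层名称列表为示意性假设)：

```python
def filter_layers(layer_names, prefix="Surfaceview"):
    """仅保留图层名称以指定前缀开头的图层(即包含视频元素的图层)，
    模拟逐图层合成时仅将预定元素所在图层合成到VirtualDisplay的过滤过程。"""
    return [name for name in layer_names if name.startswith(prefix)]

# 示意性的图层名称：状态栏、视频元素所在图层、导航栏
layers = ["StatusBar#0", "Surfaceview - 视频组件#0", "NavigationBar#0"]
```

这样，投屏数据中仅包含通过过滤的图层对应的数据，其余图层(如状态栏、导航栏中的隐私信息)不会被投射。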
可以理解的是，在本实施例中，当第二终端当前正在播放声音，如用户使用第二终端观看视频，听音乐时，在第二终端开启投屏后，第二终端不仅可以将当前显示的界面投射到第一终端，还可将音频也投射到第一终端。在该场景下，上述投屏数据(如投屏数据1或投屏数据2)可以包括视频数据和音频数据。其中，视频数据用于第一终端在第一终端的显示屏上显示对应投屏界面，音频数据用于第一终端播放对应声音。视频数据的具体获取过程如上述实施例中采用DMP或无线投影方式实现投屏所描述的过程。音频数据的获取过程可以是：第二终端可预先创建一个音频录音(AudioRecord)对象，并创建一个缓存(buffer)。在用户触发第二终端开始投屏后，第二终端可调用该AudioRecord对象。在该AudioRecord对象被调用后，可对第二终端中的音频数据进行录制，如投射的界面中包括视频组件，则可对视频组件中播放的视频中的音频进行录制，以获得音频数据，该音频数据会被存储到创建的buffer中。之后，第二终端可从buffer中获得音频数据，并发送给第一终端。需要说明的是，在该场景中，可以将视频数据和音频数据均投屏到第一终端，也可以只将视频数据投屏到第一终端，而不将音频数据投屏到第一终端。具体是否投射音频数据可以是系统预先定义的，也可以是用户设置的。如，继续结合图8，配置界面801中还包括启用音频的选项804。当用户在该配置界面801中选中该启用音频的选项804时，第二终端将视频数据和音频数据均投屏到第一终端；当用户在该配置界面801中未选中该启用音频的选项804时，第二终端仅将视频数据投射到第一终端。
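上述“创建AudioRecord对象并经buffer中转音频数据”的流程，可用如下简化的先入先出缓存模型示意(仅为流程草图，并非Android中AudioRecord的真实接口)：

```python
from collections import deque

class AudioCaptureBuffer:
    """模拟投屏源端：录制回调写入buffer，网络模块再从buffer取出待发送的音频帧。"""
    def __init__(self, capacity=8):
        self.buf = deque(maxlen=capacity)   # 容量有限，写满后旧帧被新帧覆盖

    def on_record(self, frame: bytes) -> None:
        self.buf.append(frame)              # 录制侧写入一帧音频数据

    def next_to_send(self):
        # 发送侧按先入先出顺序取出一帧；buffer为空时返回None
        return self.buf.popleft() if self.buf else None

cap = AudioCaptureBuffer()
cap.on_record(b"frame-1")
cap.on_record(b"frame-2")
```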
S405、电视机根据配置的对应解码参数分别对投屏数据1和投屏数据2进行解码。
S406、电视机根据解码后的投屏数据1和投屏数据2,利用创建的对应视图绘制投屏界面1和投屏界面2,并在电视机上显示。
其中,投屏界面1和投屏界面2可以为本申请实施例中的第一界面。
第一终端在接收到来自多个第二终端的投屏数据后,可根据接收到的各投屏数据在第一终端显示屏上显示与多个第二终端一一对应的投屏界面。如,继续结合以上示例,电视机接收到投屏数据1后,可根据该投屏数据1在电视机上显示投屏界面,如称为投屏界面1,该投屏界面1中显示的内容与手机1显示屏上显示界面的全部或部 分内容相同,或者说该投屏界面1中的内容为手机1显示屏上显示界面全部或部分内容的镜像。类似的,电视机接收到投屏数据2后,可根据该投屏数据2在电视机上显示投屏界面,如称为投屏界面2,该投屏界面2中显示的内容与手机2显示屏上显示界面的全部或部分内容相同,或者说该投屏界面2中的内容为手机2显示屏上显示界面全部或部分内容的镜像。
示例性的,结合图3和图6,第一终端根据接收到的第二终端的投屏数据,在第一终端上对应显示投屏界面的具体实现可以是:第一终端的网络管理模块在接收到来自第二终端的投屏数据后,可将该投屏数据发送给第一终端的解码模块进行解码(如,图6中所示的步骤5)。第一终端的解码模块利用对应的解码参数对该投屏数据进行解码后,将其发送给第一终端的窗口管理模块;第一终端的窗口管理模块利用对应的view,根据接收到的投屏数据,可绘制并在第一终端的显示屏上显示对应投屏界面(如,图6中的步骤6)。
例如,结合图3,图6,图7及上述S402的描述,在手机1的网络管理模块通过与电视机建立的连接,向电视机发送编码后的投屏数据1后,电视机的网络管理模块可接收到编码后的该投屏数据1。具体的,电视机的网络管理模块通过在本地维护的数组2中的连接实例1可接收到编码后的该投屏数据1。电视机的网络管理模块根据接收到数据的连接实例1,可确定投屏源端的IP地址为手机1的IP地址。之后,电视机的网络管理模块可将编码后的该投屏数据1和手机1的IP地址发送给电视机的解码模块。电视机的解码模块可根据手机1的IP地址,获取到对应的解码参数,如获得解码参数1,并采用该解码参数1对投屏数据1进行解码。电视机的解码模块可将解码后的投屏数据1发送给电视机的窗口管理模块。电视机的窗口管理模块根据解码后的投屏数据1,利用创建的view数组中与手机1的IP地址对应的view 1,可实现投屏界面1的绘制,并如图7中的(c)所示,在电视机的显示屏上显示投屏界面1。其中投屏界面1中的内容与图7中的(a)中手机1显示的界面701中的内容相同。类似的,在手机2的网络管理模块通过与电视机建立的连接,向电视机发送编码后的投屏数据2后,电视机的网络管理模块通过在本地维护的数组2中的连接实例2可接收到编码后的该投屏数据2。电视机的网络管理模块根据接收到数据的连接实例2,可确定投屏源端的IP地址为手机2的IP地址。之后,电视机的网络管理模块可将编码后的该投屏数据2和手机2的IP地址发送给电视机的解码模块。电视机的解码模块可根据手机2的IP地址,获取到对应的解码参数,如获得解码参数2,并采用该解码参数2对投屏数据2进行解码。电视机的解码模块可将解码后的投屏数据2发送给电视机的窗口管理模块。电视机的窗口管理模块根据解码后的投屏数据2,利用创建的view数组中与手机2的IP地址对应的view 2,可实现投屏界面2的绘制,并如图7中的(c)所示,在电视机的显示屏上显示投屏界面2。其中投屏界面2中的内容与图7中的(b)中手机2显示的界面702中的内容相同。
另外,在本实施例中,第一终端用于显示投屏界面的窗口可以称为投屏窗口。例如,结合图7中的(c)所示,用于显示投屏界面1的窗口可以称为投屏窗口1,用于显示投屏界面2的窗口可以称为投屏窗口2。
其中，在第一终端的投屏服务功能开启的情况下，第一终端可在确定与第二终端(如上述手机1或手机2)连接后，显示对应的投屏窗口。第一终端可根据作为投屏源端的第二终端的数量和第一终端显示屏的尺寸，设置与各第二终端对应投屏窗口的大小和布局。如，作为投屏源端的第二终端的数量为两个。第一终端在与这两个第二终端连接后，可以在第一终端的显示屏上显示与这两个第二终端分别对应的投屏窗口。这两个投屏窗口在第一终端显示屏上可垂直排列，也可以水平排列。这两个投屏窗口的大小可以相同，也可以不同。例如，如图7中的(c)所示，与手机1对应的投屏窗口1和与手机2对应的投屏窗口2垂直排列，且投屏窗口1和投屏窗口2的大小相同。其中，这两个投屏窗口可以是同时显示在第一终端显示屏的，也可以按照对应第二终端开始投屏的顺序，或者说第一终端接收到对应第二终端的投屏数据的顺序先后显示在第一终端的显示屏。对于按照对应第二终端开始投屏的顺序先后显示对应投屏窗口的情况，先显示的投屏窗口的大小可以与第一终端显示屏的大小相同，后显示的投屏窗口可以小于第一终端的显示屏，并以悬浮窗的形式显示在先显示的投屏窗口之上。
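上述根据源端数量与显示屏尺寸等分设置各投屏窗口大小和布局的计算，可示意如下(垂直或水平等分仅为其中一种布局策略)：

```python
def layout_windows(screen_w, screen_h, count, vertical=True):
    """将显示屏等分给count个投屏窗口，返回各窗口的(x, y, w, h)。"""
    rects = []
    for i in range(count):
        if vertical:   # 垂直排列：各窗口等宽，高度等分
            h = screen_h // count
            rects.append((0, i * h, screen_w, h))
        else:          # 水平排列：各窗口等高，宽度等分
            w = screen_w // count
            rects.append((i * w, 0, w, screen_h))
    return rects
```

源端数量增加或减少时，重新以新的count调用该计算即可得到调整后的窗口布局。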
第一终端(如第一终端的窗口管理模块)在显示与多个第二终端一一对应的投屏界面的情况下,可根据用户的操作(该操作可以为本申请实施例中的第一操作),对对应的投屏窗口进行缩小,放大,切换焦点窗口,关闭等处理。其中,该操作可以是用户在第一终端的屏幕上的触摸操作,也可以是用户使用第一终端的输入设备(如PC的鼠标,键盘;又如,电视机的遥控器)输入的操作。
例如,继续结合图7,以电视机上显示了投屏界面1和投屏界面2,用于显示投屏界面1的窗口为投屏窗口1,用于显示投屏界面2的窗口为投屏窗口2为例。如图9所示,用户可使用电视机的遥控器对电视机当前显示的界面进行控制。
电视机接收到用户的控制操作(如,图9中的步骤1)后,可根据接收到的控制操作判断是否需要切换焦点窗口(如,图9中的步骤2)。其中,如果该控制操作是切换焦点窗口的操作,则确定需要切换焦点窗口。如,切换焦点窗口的操作可以是用户对遥控器的左按键或右按键的操作。也就是说,如果电视机接收到的控制操作为对遥控器左按键或右按键的操作,则电视机可确定需要切换焦点窗口,电视机可切换焦点(如,图9中的步骤3)。如电视机可在本地保存一个焦点窗口变量,该焦点窗口变量用于指示当前显示的多个投屏窗口中哪个窗口为焦点窗口。电视机切换焦点的操作可以包括,电视机将该焦点窗口变量由标识1更新为标识2。其中,标识1为切换焦点前作为焦点窗口的投屏窗口的标识,标识2为切换焦点后作为焦点窗口的投屏窗口的标识。例如,如图10所示,在电视机显示了投屏界面1和投屏界面2后,可默认其中一个投屏界面的投屏窗口为焦点窗口,如电视机默认用于显示投屏界面1的投屏窗口1为焦点窗口。如图10所示,电视机可显示提示标识1001,用于向用户提示当前投屏窗口1为焦点窗口。电视机还可将焦点窗口变量设置为投屏窗口1的标识,用于指示投屏窗口1为焦点窗口。之后,电视机接收到用户对遥控器右按键的操作,电视机可确定需要切换焦点窗口,则电视机将焦点窗口变量由标识1更新为投屏窗口2的标识2,用于指示投屏窗口2为当前的焦点窗口。另外,如图11所示,电视机可更新电视机显示屏上的提示标识1001的位置,即由投屏窗口1的位置滑动到投屏窗口2的位置,以向用户提示当前投屏窗口2为焦点窗口。
如果电视机确定不需要切换焦点窗口,则电视机可根据接收到的控制操作并结合当前焦点窗口的大小判断是否需要放大当前焦点窗口(如,图9中的步骤4)。其中,如果该控制操作是对焦点窗口的选择操作,如该选择操作可以是对遥控器确定按键的操作,且当前焦点窗口不是最大化窗口时,电视机可放大当前焦点窗口。对于其他的非焦点窗口,电视机可将其隐藏(如,图9中的步骤5)。可以理解的是,投屏界面的大小随投屏窗口大小的改变而改变。投屏界面也随着投屏窗口的隐藏而隐藏。例如,继续结合图10,当前焦点窗口为投屏窗口1。电视机接收到用户对遥控器确定按键的操作,且电视机确定当前的焦点窗口,即投屏窗口1不是最大化窗口,则电视机可将投屏窗口1最大化,并隐藏其他投屏窗口(即隐藏投屏窗口2)。作为一种示例,放大窗口时,电视机可根据电视机的显示屏尺寸确定当前焦点窗口放大后的尺寸,如放大后的尺寸与电视机显示屏的尺寸相同。
如果电视机确定不需要放大当前焦点窗口,则电视机可根据接收到的控制操作并结合当前焦点窗口的大小,判断是否需要缩小当前焦点窗口(如,图9中的步骤6)。其中,如果该控制操作是对遥控器确定按键的操作,且当前焦点窗口是最大化窗口时,电视机可缩小当前焦点窗口,并显示其他的非焦点窗口(如,图9中的步骤7)。例如,电视机当前显示最大化的投屏窗口1,投屏窗口2被隐藏。电视机接收到用户对遥控器确定按键的操作,且电视机确定当前的焦点窗口,即投屏窗口1是最大化窗口,则如图10所示,电视机可缩小投屏窗口1,并显示被隐藏的其他投屏窗口,即显示投屏窗口2。作为一种示例,缩小窗口时,电视机可根据电视机的显示屏尺寸和被隐藏的其他投屏窗口的数量确定当前焦点窗口缩小后的尺寸,如缩小后的尺寸和被隐藏的其他投屏窗口的尺寸相同,且所有投屏窗口的尺寸之和与电视机显示屏的尺寸相同。
如果电视机确定不需要缩小当前焦点窗口,则电视机可根据接收到的控制操作,更新当前焦点窗口中的投屏界面(如,图9中的步骤8)。可以理解的是,如果接收到的控制操作不是用来切换焦点窗口,不是用来放大和缩小投屏窗口的,该控制操作可能是用于操作投屏界面的操作(该操作可以为本申请实施例中的第二操作)。那么,电视机可将该控制操作发送给与当前焦点窗口对应的投屏源端,以便于投屏源端根据接收到的控制操作,执行对应的事件,并更新投屏源端显示的界面(投屏源端更新后的界面可以为本申请实施例中的第三界面)。之后,投屏源端可将更新后的界面投射到投屏目的端,如电视机,即投屏源端可获取新的投屏数据,并发送给电视机。电视机接收到更新后的投屏数据后,可根据该新的投屏数据更新当前焦点窗口中的投屏界面(电视机更新后的投屏界面可以为本申请实施例中的第四界面)。
如，当前焦点窗口为投屏窗口1。该投屏窗口1中的投屏界面1的内容为PPT。如果接收到的控制操作是对遥控器上按键或下按键的操作，电视机可将该对遥控器上按键或下按键的操作，发送给与投屏窗口1对应的手机1。手机1接收到该操作后，可根据该操作对PPT执行上翻页或下翻页的操作，并可获取新的投屏数据发送给电视机。电视机接收到该新的投屏数据后，可根据该新的投屏数据更新显示投屏窗口1中的投屏界面1。需要说明的是，手机1获取并发送新的投屏数据，电视机接收新的投屏数据并根据新的投屏数据显示投屏界面的具体实现与上述实施例中，S403-S406中对应过程的实现类似，此处不再详细赘述。当然，用于操作投屏界面的控制操作也可能是其他的操作，如对投屏界面中某可操作元素的操作。如果该控制操作是对投屏界面中某可操作元素的操作，则电视机不仅可将该操作发送给对应的投屏源端，还可将该操作在投屏界面中的操作位置发送给投屏源端。投屏源端根据该操作位置可确定出用户操作的是当前显示界面中的哪个元素，进而根据接收到的操作和确定出的被操作的元素执行对应的事件，并更新投屏源端显示的界面。
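图9中步骤2至步骤7所示“切换焦点、放大、缩小”的判断流程，可用如下状态机草图表达(按键名与窗口模型均为示意性假设)：

```python
class ScreenCastWindows:
    """模拟投屏目的端对多个投屏窗口的焦点切换与放大/缩小处理。"""
    def __init__(self, names):
        self.names = list(names)
        self.focus = 0           # 焦点窗口变量：当前焦点窗口的下标
        self.maximized = False   # 当前焦点窗口是否为最大化窗口

    def on_key(self, key):
        if key in ("LEFT", "RIGHT"):               # 步骤2/3：左右按键切换焦点
            step = -1 if key == "LEFT" else 1
            self.focus = (self.focus + step) % len(self.names)
        elif key == "OK" and not self.maximized:   # 步骤4/5：放大焦点窗口并隐藏其他窗口
            self.maximized = True
        elif key == "OK" and self.maximized:       # 步骤6/7：缩小焦点窗口并显示其他窗口
            self.maximized = False

    def visible(self):
        # 最大化时仅焦点窗口可见，否则全部投屏窗口可见
        return [self.names[self.focus]] if self.maximized else list(self.names)

w = ScreenCastWindows(["投屏窗口1", "投屏窗口2"])
```

其余未被该状态机消费的控制操作(如上/下翻页)则按上文所述转发给焦点窗口对应的投屏源端处理。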
另外,第一终端还可根据作为投屏源端的第二终端的数量,动态调整第一终端显示的与各第二终端对应投屏窗口的大小和排列布局。其中,作为投屏源端的第二终端的数量可以动态的增加或减少。如,第一终端与多个第二终端建立了连接,第一终端当前显示这多个终端分别对应的投屏窗口。当第一终端与其中一个第二终端断开连接,或者第一终端接收到了用户关闭某投屏窗口的操作(如当某投屏窗口为焦点窗口时,电视机接收到了用户对遥控器的返回键的操作),即作为投屏源端的第二终端的数量减少,则第一终端可停止显示断开连接的第二终端对应的投屏窗口,并根据剩余连接的第二终端的数量,调整各第二终端对应投屏窗口的大小和排列布局。当有新的第二终端与第一终端建立了连接,并开启了投屏,即作为投屏源端的第二终端的数量增加,则第一终端可增加显示与该新的第二终端对应的投屏窗口,并根据当前作为投屏源端的第二终端的数量,调整各第二终端对应投屏窗口的大小和排列布局。
其中,以上实施例中的示例,是以无线投屏场景下,实现多对一的投屏为例进行说明的。在其他一些实施例中,本实施例中的多对一的投屏方法也可以应用于跨设备拖拽的场景中。在跨设备拖拽的场景中,具体实现多对一投屏的过程与上述S401-S406中的实现类似,区别在于:
1、第一终端,如电视机创建视图和配置解码参数的时机可以是与对应第二终端,如手机1和手机2的连接建立成功后便执行的,也可以是第一终端确定对应第二终端将开始投屏后执行的。如,在跨设备拖拽的场景中,当第二终端确定用户触发跨设备拖拽时,可向第一终端发送对应的拖拽数据。该拖拽数据中可用于指示该拖拽数据是拖拽开始事件中相关数据的指示。该指示可以标识拖拽开始。第一终端根据该指示可确定第二终端将开始投屏。之后,电视机可创建该第二终端对应的视图,配置该第二终端对应的解码参数。
2、在跨设备拖拽的场景中,第二终端开始投屏的条件除了包括与第一终端成功建立连接外,还包括确定用户的拖拽意图是跨设备拖拽。其中,用户拖拽的对象可以是第二终端显示的界面,或界面中的元素(如视频元素、画中画或悬浮窗)。
示例性的，在第二终端的显示界面或显示界面中的元素被用户拖拽的过程中，或者说第二终端接收到用户的拖拽操作后，第二终端可判断用户拖拽的意图是否是跨设备拖拽，如果确定用户拖拽该元素的意图是跨设备拖拽，则可开始进行投屏。例如，第二终端可设置拖拽感知区域来确定用户的拖拽意图是否是跨设备拖拽。拖拽感知区域可以是第二终端显示屏上距离显示屏边缘预定距离的区域。该预定距离可以是预先定义的，也可以提供设置界面供用户设置。第二终端的拖拽感知区域可以是一个，也可以是多个。拖拽感知区域处设置有透明的视图(view)控件。在被拖拽的对象，如界面或界面中的元素被拖入拖拽感知区域后，设置在对应区域的视图控件可监测到元素的拖入。在存在视图控件监测到元素拖入时，第二终端便可确定用户的拖拽意图是跨设备拖拽。
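上述“拖拽感知区域”的判定，即坐标是否落在距显示屏边缘预定距离内，可示意如下(margin取值仅为示例)：

```python
def in_drag_sense_region(x, y, screen_w, screen_h, margin=100):
    """判断坐标(x, y)是否落在距离显示屏任一边缘不超过margin(像素，示意值)的感知区域内。"""
    return (x < margin or y < margin
            or x > screen_w - margin or y > screen_h - margin)
```

实际实现中该区域由透明view控件覆盖，view控件监测到拖入即等价于该判定为真。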
3、在跨设备拖拽的场景中，如果第二终端拖拽的是其显示的界面，则第二终端可将该显示界面(该显示界面可以为本申请实施例中的第二界面)投射到第一终端。具体实现与S403和S404中无线投屏场景下第二终端投射显示界面到第一终端的实现类似，具体描述可参考S403和S404中对应内容的描述，此处不再详细赘述。如果第二终端拖拽的是其显示界面(该显示界面可以为本申请实施例中的第二界面)中的元素，则第二终端可仅将该元素投射到第一终端。例如，在第二终端接收到用户拖拽当前界面中的元素后，第二终端可获取该元素在当前显示界面中的层名称(或者说图层名称,layer Name)。在第二终端开始投屏后，在逐图层进行合成的过程中，第二终端可判断当前需合成的图层的层名称是否与获取到的层名称相同。如果相同，则第二终端将该图层合成到VirtualDisplay中。如果不相同，则第二终端不将该图层合成到VirtualDisplay中，以实现仅将被用户拖拽的元素投射到第一终端的目的。
4、在跨设备拖拽的场景中,为了能够提高用户跨设备拖拽时的跟手体验,第二终端可以在接收到用户的拖拽释放操作后,在第一终端显示被拖拽的对象。可以理解的是,在用户拖拽的过程中,会存在被拖拽的对象的部分区域显示在第二终端的显示屏上,另一部分区域被隐藏(或者说溢出显示屏)的情况。为了给用户该对象从第二终端拖拽到第一终端的视觉效果,在该对象被拖拽的过程中,如果该对象的部分区域溢出显示屏,则可在第一终端和第二终端上同时显示该对象。具体的:被拖拽的对象,一部分区域显示在第二终端上,另一部分区域(溢出第二终端的区域)显示在第一终端上。
作为一种示例,在拖拽过程中,实现在第一终端和第二终端上同时显示被拖拽的对象的具体实现可以是:在开始投屏后,第二终端不仅需向第一终端发送投屏数据,还需向第一终端发送被拖拽对象的矩形(rect)信息,及在拖拽过程中该对象某个角(如左上角,左下角,右上角和右下角中的任意一个角)的坐标信息,也就是说,第二终端向第一终端发送的数据包括投屏数据,被拖拽对象的矩形信息,及在拖拽过程中该对象某个角的坐标信息。其中,该对象的矩形信息包括开始拖拽时该对象的左上角、右上角、左下角和右下角四个角的坐标信息。这样,第一终端根据对象的矩形信息,拖拽过程中该对象某个角的坐标信息及第二终端的分辨率,可判断该对象是否存在区域溢出第二终端的显示屏。如果该对象存在区域溢出第二终端的显示屏时,则第一终端可根据该对象的矩形信息,拖拽过程中该对象某个角的坐标信息及第二终端的分辨率确定能够在第一终端的显示屏上对应显示的该对象的区域的信息(该区域与对象溢出第二终端显示屏的区域内容相同)。其中,第二终端的分辨率可以是第一终端在与第二终端建立连接的过程中,或连接建立成功后第二终端发送给第一终端的。第一终端根据确定出的区域的信息和投屏数据,可将该对象对应区域的内容在第一终端的显示屏上显示。
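上述根据被拖拽对象的矩形信息、拖拽过程中某角的坐标信息及源端分辨率计算溢出区域的逻辑，可示意如下(此处仅以对象向右溢出、以左上角坐标计算为例，坐标约定为示意性假设)：

```python
def overflow_region(obj_w, obj_h, top_left_x, top_left_y, src_w, src_h):
    """返回对象溢出源端显示屏右边缘的区域(x, y, w, h)；未溢出时返回None。
    obj_w/obj_h可由开始拖拽时四角坐标算出；top_left_x/y为拖拽过程中左上角坐标。"""
    right = top_left_x + obj_w
    if right <= src_w:
        return None                       # 无溢出，目的端暂不显示该对象
    w = right - src_w                     # 溢出部分的宽度
    return (src_w, top_left_y, w, obj_h)  # 溢出区域与对象等高，起始于屏幕右边缘

# 示例：400x300的视频元素左上角被拖到(900, 200)，源端分辨率1080x2340
```

目的端据此区域信息与投屏数据，即可仅绘制对象溢出源端显示屏的那部分内容。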
例如,结合图12-图14,以手机1和手机2作为投屏源端,电视机作为投屏目的端为例,对跨设备拖拽场景下,多对一投屏的实现过程进行举例介绍。
电视机获取手机1的IP地址1,与手机1建立连接。电视机创建与IP地址1对应的视图,如称为视图a。电视机配置与IP地址1关联的解码参数,如称为解码参数a。 电视机保存与IP地址1对应的连接实例a,用于接收来自手机1的投屏数据。
如图12中的(a)所示,用户打开手机1的视频应用播放视频X。手机1接收到用户触发用于呈现该视频X的视频元素1201拖起的操作。如图12中的(b)所示,响应于该操作,手机1可将该视频元素1201拖起,还可进行背景虚化处理。之后,手机1接收到用户对拖起的视频元素1201的拖拽操作。手机1响应该拖拽操作,使视频元素1201在手机1显示屏上跟随用户手指的移动而移动,给用户以视频元素1201被用户手指拖动的视觉效果。其中,视频元素1201的拖拽方向可以是向上拖动,向左拖动,向右拖动,向下拖动。例如,如图12中的(c)所示,用户可使用手指对拖起后的视频元素1201执行拖拽操作,如长按并向右移动手指的操作。随着用户手指的移动,手机可绘制并显示视频元素1201随用户手指移动的动画。在视频元素1201被拖拽的过程中,手机1可判断用户的拖拽意图是否是跨设备操作。在手机1确定用户的拖拽意图是跨设备操作后,手机1可创建虚拟显示,并将当前界面中该视频元素1201所在图层绘制到该虚拟显示上,以获得投屏数据,如称为投屏数据a。手机1可将该投屏数据a进行编码后发送给电视机。手机1还可将该视频元素1201的矩形信息,及在拖拽过程中该视频元素1201某个角(如左上角)的坐标信息发送给电视机。
电视机可通过连接实例a接收到编码后的投屏数据a,视频元素1201的矩形信息,及在拖拽过程中视频元素1201左上角的坐标信息。电视机根据接收到的视频元素1201的矩形信息,在拖拽过程中视频元素1201左上角的坐标信息和手机1的分辨率,在确定视频元素1201存在区域溢出手机1的显示屏后,电视机可根据该视频元素1201的矩形信息,拖拽过程中视频元素1201左上角的坐标信息及手机1的分辨率确定能够在电视机的显示屏上对应显示的该视频元素1201的区域的信息。
另外,电视机根据接收到数据的连接实例a,可确定投屏源端的IP地址为手机1的IP地址1。电视机可根据IP地址1,采用与IP地址1对应的解码参数a,对接收到的投屏数据a进行解码。之后,电视机根据解码后的投屏数据a和确定出的能够在电视机的显示屏上对应显示的视频元素1201的区域的信息,利用创建的与IP地址1对应的视图a,可实现投屏界面1的绘制。如图13中的(a)所示,在电视机的显示屏上显示投屏界面1,该投屏界面1中的内容与手机1的视频元素1201中所承载视频X溢出手机显示屏的内容相同。在用户拖拽视频元素1201的过程中,手机1可实时获取投屏数据a和拖拽过程中视频元素1201左上角的坐标信息,并发送给电视机。这样,电视机可根据接收到的数据实时更新投屏界面1。在用户释放拖拽后,电视机可根据实时接收到的投屏数据a,在电视机的显示屏上全屏显示投屏界面1。如图13中的(b)所示,此时,投屏界面1中的内容与视频元素1201中所承载视频X的全部内容相同。
类似的,电视机获取手机2的IP地址2,与手机2建立连接。电视机创建与IP地址2对应的视图,如称为视图b。电视机配置与IP地址2关联的解码参数,如称为解码参数b。电视机保存与IP地址2对应的连接实例b,用于接收来自手机2的投屏数据。
如图14所示,用户打开手机2的健身应用查看健身视频。手机2接收到用户对承载该健身视频的视频元素的拖拽操作。手机2响应该拖拽操作,使该视频元素在手机2显示屏上跟随用户手指的移动而移动,给用户以视频元素被用户手指拖动的视觉效 果。在视频元素被拖拽的过程中,手机2可判断用户的拖拽意图是否是跨设备操作。在手机2确定用户的拖拽意图是跨设备操作后,手机2可创建虚拟显示,并将当前界面中该视频元素所在图层绘制到该虚拟显示上,以获得投屏数据,如称为投屏数据b。手机2可将该投屏数据b进行编码后发送给电视机。手机2还可将该视频元素的矩形信息,及在拖拽过程中该视频元素某个角(如左上角)的坐标信息发送给电视机。电视机可通过连接实例b接收到编码后的投屏数据b,视频元素的矩形信息,及在拖拽过程中视频元素左上角的坐标信息。电视机根据接收到的视频元素的矩形信息,在拖拽过程中视频元素左上角的坐标信息和手机2的分辨率,在确定视频元素存在区域溢出手机2的显示屏后,电视机可根据该视频元素的矩形信息,拖拽过程中视频元素左上角的坐标信息及手机2的分辨率确定能够在电视机的显示屏上对应显示的该视频元素的区域的信息。
另外,电视机根据接收到数据的连接实例b,可确定投屏源端的IP地址为手机2的IP地址2。电视机可根据IP地址2,采用与IP地址2对应的解码参数b,对接收到的投屏数据b进行解码。之后,电视机根据解码后的投屏数据b和确定出的能够在电视机的显示屏上对应显示的视频元素的区域的信息,利用创建的与IP地址2对应的视图b,可实现投屏界面2的绘制。电视机此时可同时将投屏界面1和投屏界面2显示在电视机显示屏上。如,电视机上当前全屏显示有投屏界面1。在一些实施例中,如图13中的(c)所示,电视机可在电视机的显示屏上以小窗口(或者说画中画,悬浮窗)的形式显示投屏界面2,该投屏界面2中的内容与手机2的健身视频溢出手机显示屏的内容相同。在用户拖拽视频元素的过程中,手机2可实时获取投屏数据b和拖拽过程中视频元素左上角的坐标信息,并发送给电视机。这样,电视机可根据接收到的数据实时更新投屏界面2。在用户释放拖拽后,电视机可根据实时接收到的投屏数据b,在电视机的显示屏上继续以小窗口的形式显示投屏界面2。如图13中的(d)所示,此时,投屏界面2中的内容与手机2显示的健身视频的全部内容相同。
另外,如上述实施例的描述,在电视机显示多个投屏界面的情况下,电视机可默认设置其中一个投屏界面的投屏窗口为焦点窗口,如电视机默认小窗口为焦点窗口。如图13中的(d)所示,电视机显示提示标识1301,用于向用户提示小窗口,即投屏界面2的投屏窗口为焦点窗口。用户可使用电视机的遥控器选择切换焦点窗口,还可进行大小窗布局的切换(其中用于全屏显示投屏界面1的窗口可以称为大窗口),还可关闭大小窗口。如电视机接收到用户对遥控器的左按键或右按键的操作,切换焦点窗口。当焦点窗口为小窗口时,如果电视机接收到用户对遥控器的确认按键的操作,如图13中的(e)所示,电视机可将小窗口,即投屏界面2全屏显示,将大窗口,即投屏界面1以小窗口的形式显示。如果电视机接收到用户对遥控器的返回键的操作,则电视机可停止显示小窗口,或者说关闭小窗口,电视机还可通知该小窗口对应的手机2停止投屏。如果电视机继续接收到用户对遥控器的返回键的操作,电视机可以停止显示大窗口,电视机还可通知该大窗口对应的手机1停止投屏。
以上示例是以跨设备拖拽场景中,用户拖拽的对象是第二终端显示的界面,或界面中如视频元素、画中画或悬浮窗等元素为例进行说明的。在其他一些实施例中,用户拖拽的对象还可以是第二终端显示的界面中的UI控件。被拖拽的UI控件可以是三方应用定义的,也可以是用户选择的,还可以是系统推荐的。在拖拽对象是界面中的UI控件的场景下,具体实现多对一投屏的过程与拖拽对象是界面或界面中元素的实现类似,区别在于:
1、第二终端不是获取投屏数据发送给第一终端,用于实现投屏。而是第二终端在启动投屏后,获取数据,如当前界面的指令流,并将指令流发送给第一终端。另外,第二终端还可将被拖拽的UI控件的标识(也就是说上述数据还可以包括被拖拽的UI控件的标识)发送给第一终端。这样,第一终端根据接收到的被拖拽的UI控件的标识,可从接收到的指令流中抽取出被拖拽的UI控件的画布(canvas)指令,以根据该canvas指令实现被拖拽的UI控件在第一终端上的显示。从而实现第二终端当前显示的界面(该界面可以为本申请实施例中的第二界面)中UI控件在第一终端上的投屏。第一终端上显示的UI控件可以为本申请实施例中的第一界面。其中,结合图3,第一终端和第二终端还可以包括指令管理模块。第二终端的指令管理模块可负责投屏源端界面内容的提取,即负责获取当前界面的指令流。第一终端的指令管理模块可负责对投屏源端内容的还原,如,根据指令流绘制对应的UI控件。或者,第二终端在启动投屏后,获取数据,如被拖拽的UI控件的2D绘制指令及标识,并发送给第一终端。第一终端根据接收到的2D绘制指令和标识,并依据对应的布局文件,绘制被拖拽的UI控件到第一终端的显示屏,即实现第二终端显示的界面中被用户拖拽的UI控件在第一终端上的显示。其中,UI控件的标识可以是应用开发者在布局文件中写入的特定字段标识,如dupID=xxx。布局文件中还包括绘制区域的其他配置(如与UI控件的标识对应的位置及样式等配置)。第一终端在布局时,根据接收到的2D绘制指令和标识,从布局文件中读取标识对应配置来实现在第一终端的显示屏上UI控件的绘制和布局。
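第一终端按标识从指令流中抽取被拖拽UI控件的canvas指令的过程,可用如下Python片段示意(指令流的数据结构为说明而假设,字段名如dup_id仅为举例,并非实际实现):

```python
def extract_canvas_ops(instruction_stream, dragged_ids):
    # instruction_stream 中每条指令形如 {"dup_id": 控件标识, "op": canvas 绘制指令};
    # 仅保留标识命中被拖拽控件集合 dragged_ids 的指令,
    # 用于在第一终端上还原被拖拽 UI 控件的绘制
    return [ins["op"] for ins in instruction_stream
            if ins["dup_id"] in dragged_ids]
```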
2、可以理解的是,在上述实施例中,用于实现第二终端在第一终端投屏的数据(如上述投屏数据),可以理解为视频数据,或者说包括视频数据,因此可以将第一终端和第二终端间用于传输投屏数据的通道称为视频通道,或者说视频传输通道。在跨设备拖拽UI控件的场景中,用于实现第二终端在第一终端投屏的数据为指令流。在一些实施例中,可以继续采用上述视频通道实现指令流的传输。在其他一些实施例中,也可以采用指令通道,或称为指令传输通道实现指令流的传输。也就是说,在本实施例中,可支持多路指令流投射到一个投屏目的端,如第一终端的屏幕上,实现多对一的投屏。
3、在跨设备拖拽UI控件的场景中采用指令流实现投屏的情况下,区别于S402中创建的视图,第一终端可创建与各第二终端对应的画布(canvas)(该画布可以为本申请实施例中的绘制组件),用于实现第二终端的UI控件在第一终端上的投射。例如,参考图15,第一终端实现多路指令流投射到一个屏幕上的过程可以包括:在第二终端与第一终端连接后,或第二终端与第一终端连接并开始投屏后,第一终端创建与第二终端对应的画布,用于承载(或绘制)该第二终端投射的UI控件(如,图15中的步骤1)。第一终端根据来自各第二终端的指令流和被拖拽UI控件的标识,在对应画布上分别绘制对应内容(如,图15中的步骤2)。第一终端将与各第二终端对应的画布合成为一个画布(如,图15中的步骤3)。第一终端将合成后的画布显示在第一终端的屏幕上(如,图15中的步骤4)。
可以理解的是,请参考图16,当只有一个第二终端作为投屏源端时,第一终端的屏幕上只显示与该第二终端对应的画布(如图16中的(a)中的画布1)的内容。当有两个第二终端作为投屏源端时,这两个第二终端对应画布可按照对应布局显示在第一终端的屏幕上。如,第一终端的屏幕划分为两个区域,其中一个区域用于显示与其中一个第二终端对应的画布(如图16中的(b)中的画布1)的内容,另一个区域用于显示与另一个第二终端对应画布(如图16中的(b)中的画布2)的内容。当有两个以上的第二终端作为投屏源端时,这多个第二终端对应画布可按照对应布局显示在第一终端的屏幕上,如,第一终端的屏幕可划分为对应数量的区域,分别用于显示各第二终端对应画布的内容。需要说明的是,多个画布在第一终端屏幕上的布局可以是预定的,也可以根据用户的设置进行设定,如多个画布以水平等分,垂直等分,画中画,三等分,四等分等方式布局在屏幕上,不限于图16中的(b)所示的水平等分的方式布局。
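以水平等分布局为例,第一终端根据投屏源端数量划分各画布显示区域的计算可用如下Python片段示意(仅为一种假设的简化布局算法,其他布局方式同理):

```python
def split_horizontal(n, screen_w, screen_h):
    # 将第一终端屏幕按水平等分方式划分为 n 个区域,返回每个区域的 (x, y, w, h);
    # n 即当前作为投屏源端的第二终端数量,可随源端的增减动态变化
    w = screen_w // n
    return [(i * w, 0, w, screen_h) for i in range(n)]
```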
例如,结合图17-图20,以手机1和手机2作为投屏源端,电视机作为投屏目的端,被拖拽的UI控件是用户选择的,投屏源端通过将当前界面的指令流和被拖拽的UI控件的标识发送给第一终端实现UI控件投屏为例,对跨设备拖拽UI控件场景下,多对一投屏的实现过程进行举例介绍。
电视机的投屏服务功能启动后,可启动网络监听,以监听连接请求。电视机还可广播自身的IP地址,用于其他设备发起连接请求。如手机1接收到电视机的IP地址。手机1可根据电视机的IP地址发起连接请求,以请求与电视机建立连接。在连接建立的过程中,电视机可获得手机1的IP地址1。在电视机与手机1连接建立后,电视机可启动分布功能,如可创建与IP地址1对应的画布,如称为画布x,配置与IP地址1关联的解码参数,如称为解码参数x,保存与IP地址1对应的连接实例x,用于接收来自手机1的数据,如指令流,被拖拽的UI控件的标识等,以为手机1的投屏做好准备。可选的,在做好准备后,电视机还可通知手机1已准备就绪。
对于手机1而言,用户可通过对手机1当前显示界面中的UI控件进行拖拽,以触发手机1开始投屏。如图17中的(a)所示,手机1显示购物应用的购物详情页1701。手机1接收用户对购物详情页1701中UI控件的拖拽操作。如该拖拽操作可以包括:用户选中UI控件的操作和触发选中的UI控件移动的操作。以被拖拽的UI控件包括:购物详情页1701中的商品预览控件1702,商品价格控件1703,商品简介控件1704,加入购物车按钮1705和立即购买按钮1706为例。如图17中的(b)所示,响应于该拖拽操作,手机1可显示对应UI控件随用户手指的移动而移动的动画,给用户以UI控件被用户手指拖动的视觉效果。在UI控件被拖拽的过程中,手机1可判断用户的拖拽意图是否是跨设备操作。在手机1确定用户的拖拽意图是跨设备操作后,手机1可启动指令抓取,如手机1可对购物详情页1701进行指令抽取,以获得该购物详情页1701对应的指令流,如称为指令流x。其中指令流x中可包括当前界面中各UI控件的canvas指令,层名称,控件的标识等信息。手机1可将该指令流x进行编码后发送给电视机。手机1还可将被拖拽的UI控件的标识发送给电视机。其中,控件的标识可以是应用开发者定义的特定字段标识(如,dup ID)。
其中,手机1通过UI控件识别可识别出被用户拖拽的UI控件的类型。根据识别出的UI控件的类型,手机1可确定被拖拽的UI控件的标识。其中,控件的类型与标识一一对应,且该对应关系预先存储在手机1中。示例性的,可采用人工智能(artificial intelligence)识别的方法来识别出被用户拖拽的UI控件的类型。例如,可预先获取手机中各应用的各界面(如包括上述购物详情页1701),如可通过截屏的方法获取购物详情页1701的整帧图像数据,并采用机器学习中的目标检测技术(如R-CNN、Fast-R-CNN、YOLO等模型算法)定位出该购物详情页1701中的各UI控件的区域,然后将定位出的该购物详情页1701中的各UI控件的区域和类型与该购物详情页1701的标识对应存储在手机1中。在接收到用户拖拽该购物详情页1701中UI控件的操作后,手机1根据用户选中UI控件时触摸的位置,及存储的该购物详情页1701中的各UI控件的区域,可识别出用户拖拽的UI控件的类型。又例如,在接收到用户拖拽该购物详情页1701中UI控件的操作后,可将用户选中的UI控件绘制出来,然后采用机器学习中的目标分类技术(如ResNet模型算法),可识别出绘制出的UI控件的类型。
电视机可通过连接实例x接收到编码后的指令流x,及被拖拽的UI控件的标识。另外,电视机根据接收到数据的连接实例x,可确定投屏源端的IP地址为手机1的IP地址1。电视机可根据IP地址1,采用与IP地址1对应的解码参数x,对接收到的指令流x进行解码。之后,电视机根据解码后的指令流x和被拖拽的UI控件的标识,利用创建的与IP地址1对应的画布x,可实现被拖拽的UI控件在电视机屏幕上的绘制及显示。如,在用户释放拖拽后,如图18中的(a)所示,电视机可显示投屏界面x。该投屏界面x中的内容与手机1显示的购物详情页1701中的用户拖拽的UI控件相同。其中,电视机在画布上实现UI控件的绘制时可根据预先配置的布局文件来绘制各UI控件。该布局文件中包括各UI控件绘制区域的配置(如包括UI控件的标识,位置及样式等配置),各UI控件的绘制区域不重叠。另外,该布局文件中各UI控件的绘制区域可与对应UI控件在原界面中的区域不对应,也就是说,通过该布局文件,可实现UI控件的重新布局。该布局文件可以是系统开发人员或应用开发人员使用安卓studio生成的。如使用安卓studio可实现UI控件相关布局的抓取及预览显示,系统开发人员或应用开发人员可在预览中调整UI控件的布局,可根据最终布局生成布局文件。
类似的,用户可将手机2上显示的界面中UI控件通过拖拽的方式投射到电视机上显示。具体实现与手机1显示界面中UI控件投射到电视机上显示类似,此处不再一一赘述。例如,如图19所示,手机2显示购物应用的购物详情页1901。用户对购物详情页1901中的UI控件执行拖拽操作。如被拖拽的UI控件包括:购物详情页1901中的商品预览控件1902,商品价格控件1903,商品简介控件1904,加入购物车按钮1905和立即购买按钮1906。在拖拽释放后,电视机可采用对应的解码参数(如解码参数y),对接收到的指令流(如指令流y)进行解码。电视机根据解码后的指令流y和被拖拽的UI控件的标识,利用创建的对应画布(如画布y),可实现手机2上被拖拽的UI控件的绘制。可以理解的是,电视机还在画布x上绘制了手机1上被拖拽的UI控件。之后,电视机可将画布x和画布y合成为一个画布后,显示在电视机屏幕上。如,如图18中的(b)所示,电视机可显示投屏界面x和投屏界面y。其中,投屏界面x中的内容与手机1显示的购物详情页1701中的用户拖拽的UI控件相同,投屏界面y中的内容与手机2显示的购物详情页1901中的用户拖拽的UI控件相同。
如上述实施例的描述,在电视机显示多个投屏界面的情况下,电视机可默认设置其中一个投屏界面的投屏窗口为焦点窗口。在该实施例中,进一步的,焦点位置具体可以是投屏窗口所呈现的投屏界面中某UI控件。如继续结合图18,如图18中的(b)所示,电视机焦点位置为投屏界面x的商品预览控件1801。用户可使用电视机的遥控器选择切换焦点位置。如,如果电视机接收到用户对遥控器的左按键,右按键,上按键或下按键的操作,则可切换焦点位置。例如,结合图18中的(b),电视机接收到用户对遥控器的右按键的操作,则如图18中的(c)所示,电视机将焦点位置从投屏界面x的商品预览控件1801,切换到投屏界面y的商品预览控件1802。之后,电视机接收到用户对遥控器的下按键的操作,则如图18中的(d)所示,电视机将焦点位置从投屏界面y的商品预览控件1802,切换到投屏界面y的加入购物车按钮1803。
用户还可以使用电视机的遥控器实现反控。例如,电视机接收到用户使用遥控器对某可操作性UI控件的操作时,电视机可获取该操作的位置信息。电视机根据该位置信息及被拖拽UI控件在电视机上的布局位置,可确定出该操作的位置(如坐标)对应在手机界面中的原始位置(如坐标),从而确定出用户想要操作的是手机上的哪个UI控件。之后,电视机可将对应的操作指令发送给手机,用于手机进行相应的响应,进而实现反控。如果该响应引起了手机界面内容的变化,则手机可将更新后的界面内容重新投屏到电视机上,以便电视机更新对应投屏界面。如,结合图18中的(b),焦点位置为投屏界面x的商品预览控件1801。电视机接收到用户对遥控器的确认按钮的操作。电视机根据当前焦点位置及布局,可确定用户想要操作手机1上的商品预览控件。电视机可将对应的操作指令发送给手机1。手机1接收到该操作指令后,可根据该操作指令,进行相应的响应,如播放商品预览视频。手机1还可将播放的该视频进行录屏后发送给电视机。如图18中的(e)所示,电视机可全屏播放该商品的预览视频。
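上述反控中,把目的端操作坐标换算回源端界面原始坐标的映射可用如下Python片段示意(假设同一UI控件在两端的区域均已知,线性比例映射仅为一种简化假设,函数与参数名为举例):

```python
def map_to_source(point, sink_rect, source_rect):
    # point 为用户在投屏目的端的操作坐标;sink_rect 与 source_rect 分别为
    # 同一 UI 控件在目的端布局中与源端原界面中的区域 (x, y, w, h);
    # 按区域内相对位置线性换算,得到源端界面中的原始坐标
    px, py = point
    tx, ty, tw, th = sink_rect
    sx, sy, sw, sh = source_rect
    return (sx + (px - tx) * sw / tw,
            sy + (py - ty) * sh / th)
```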
电视机在接收到用户使用遥控器的操作后,如果对该操作的响应引起了手机界面内容的变化,手机也可不将更新后的界面投射到电视机。用户可在手机上继续进行操作。例如,结合图20,如图20中的(a)所示,焦点位置为投屏界面x的立即购买按钮2001。电视机接收到用户对遥控器的确认按钮的操作。电视机根据当前焦点位置及存储的对应关系,可确定用户想要操作手机1上的立即购买控件。电视机可将对应的操作指令发送给手机1。如图20中的(b)所示,手机1接收到该操作指令后,可显示购买界面2002。用户可在手机1上继续进行操作。另外,如图20中的(c)所示,电视机还可将手机1对应的投屏界面x置为灰色,还可显示提示信息2003,如包括“在手机端继续操作”字样,以提示用户可继续在手机端进行操作。之后,如果用户触发手机1的购买界面2002退出前台显示,或操作了电视机的返回按键,可切换回电视机继续操作。如,提示信息2003还包括“退出请按“返回”键”字样。
采用上述技术方案,可以不借助其他设备,只需投屏源端和投屏目的端设置对应应用,如上述投屏应用,即可实现多个投屏源端到一个投屏目的端的多对一投屏。如,在开会、发布会演示等场景下,多个手机,平板电脑可将其显示屏上的内容(如PPT,所播视频)投射到同一个大屏设备上呈现,实现了多对一的投屏。提高了多设备协同使用的效率,提高了用户的使用体验。允许用户使用投屏目的端的输入设备对投屏界面进行控制,还可实现对投屏源端的反控。能够通过设置焦点并根据用户操作在不同源端设备的投屏界面之间切换焦点,实现对不同投屏源端的独立控制。投屏目的端还可以根据源端设备的增加或减少对呈现的投屏界面的布局进行调整,以给用户呈现最佳的视觉效果。另外,支持图层过滤,即可将当前界面中部分元素(如用户拖拽的元素,或预定元素)所在图层投射到投屏目的端。这样,可确保投屏源端的隐私信息不被投射到投屏目的端,保护了用户的隐私。另外,在仅投射界面中的UI控件的场景中,可将需投射的内容由纯视频流更换为指令流,这样,可提高投屏目的端投屏界面的显示效果,还可节约传输带宽。
通过上述实施例的描述可知,采用本实施例提供的方案,在多个第二终端与第一终端连接的情况下,可实现这多个第二终端显示的界面在第一终端上的同时呈现,实现多对一投屏。从而满足开会、发布会演示等场景下,多个设备的显示界面在同一个设备(如,大屏设备)上呈现的需求。随着全球化的发展,跨地域办公越来越普遍,远程会议沟通的需求不断增加。但是现有的视频会议终端远程共享文档时的操作非常麻烦,需要安装并登录专业的付费客户端,并且需要连接电脑等其他设备,导致每次开会需要携带各种设备和连接线并提前准备,降低了会议效率,增加了跨地域办公的沟通成本。另外,随着手机等智能设备在办公中的应用,用户很多文件、数据都保存在手机上。因此,在本申请另外一个实施例中,可将本实施例提供的多对一投屏方案与畅连通话相结合,实现跨地域办公。这种跨地域办公方式,可以提升会议效率,节省跨地域办公的沟通成本。
其中,畅连通话实现了多设备间的高清音视频通话,可以在手机与手机、大屏设备、带屏智能音箱等设备间视频通话,并可以在这些设备间自由接续,选择最优的设备接听,给消费者带来更畅快、更自由的通话体验。同时,给用户提供很好的音视频通话体验,能够实现1080P高清视频通话,在暗光和网络质量不好(如:地铁或高铁场景)的情况下,也能够保持流畅。
例如,结合图21-图26,对将上述多对一投屏方案与畅连通话相结合实现跨地域办公的具体方案进行举例介绍。
地域A与地域B的与会人员需进行跨地域办公。地域A包括一个第一终端,如大屏设备A。地域B包括一个第三终端,如大屏设备B。大屏设备A与大屏设备B进行畅连通话。如图21所示,大屏设备A显示地域B的会场画面,还可显示本地(即地域A)的会场画面。类似的,大屏设备B显示地域A的会场画面,还可显示本地(即地域B)的会场画面。其中,大屏设备(如大屏设备A和大屏设备B)上显示的对方会场的会场画面,是该大屏设备根据对端大屏设备实时采集的视频数据绘制的。大屏设备显示的本地的会场画面,是根据自身实时采集到的视频数据绘制的。大屏设备间可通过两者之间建立的远场数据通道传输实时采集到的视频数据。
地域A的参会人员可将一个或多个第二终端,如手机1和手机2上显示的文档(如分别称为文档1和文档2)采用上述实施例提供的多对一投屏方案投射到地域A的大屏设备A上。如,可以采用跨设备拖拽的方式,或无线投屏的方式实现手机1上显示的文档1,手机2上显示的文档2在大屏设备A上的投射。作为一种示例,手机1可 通过与大屏设备A间建立的近场数据通道向大屏设备A发送投屏数据A1,用于大屏设备A显示文档1,以实现手机1上显示的文档1在大屏设备A上的展示。手机2通过与大屏设备A间建立的近场数据通道向大屏设备A发送投屏数据A2,用于大屏设备A上显示文档2,以实现手机2上显示的文档2在大屏设备A上的展示。即结合图21,如图22所示,大屏设备A可根据接收到的投屏数据A1,投屏数据A2,来自大屏设备B的视频数据及大屏设备A自身采集到的视频数据,在大屏设备A的屏幕上显示地域B的会场画面,地域A的会场画面,手机1投射的文档1和手机2投射的文档2。其中,在大屏设备A的屏幕上,本地的会场画面,即地域A的会场画面也可以不显示。
如上面的描述,大屏设备A和大屏设备B分别会实时采集本地的会场画面,并将对应视频数据发送给对端大屏设备。在大屏设备A接收到手机1和手机2的投屏,即接收到上述投屏数据A1和投屏数据A2后,大屏设备A不仅需要将实时采集到的视频数据发送给大屏设备B,还可将该投屏数据A1和投屏数据A2,通过与大屏设备B间的远场数据通道发送给大屏设备B,这样,大屏设备B也可在其屏幕上显示文档1和文档2。结合图21,如图22所示,大屏设备B可根据来自大屏设备A的投屏数据A1,投屏数据A2和视频数据,在大屏设备B的屏幕上显示地域A的会场画面,文档1和文档2。其中,在大屏设备B的屏幕上,大屏设备B也可以根据自身采集到的视频数据,显示本地的会场画面,即地域B的会场画面。
类似的,地域B的参会人员也可将一个或多个第二终端,如手机3和手机4上显示的文档(如分别称为文档3和文档4)采用上述实施例提供的多对一投屏方案投射到地域B的大屏设备B上。之后,大屏设备A和大屏设备B可分别显示对应的会场画面及两个地域的文档。如以手机3用于实现投屏的投屏数据称为投屏数据B1,手机4用于实现投屏的投屏数据称为投屏数据B2为例。结合图22,如图23所示,大屏设备A可根据来自手机1的投屏数据A1,来自手机2的投屏数据A2,来自大屏设备B的视频数据,投屏数据B1和投屏数据B2,以及大屏设备A自身采集到的视频数据,在大屏设备A的屏幕上显示地域B的会场画面,地域A的会场画面,手机1投射的文档1,手机2投射的文档2,手机3投射的文档3和手机4投射的文档4。类似的,如图23所示,大屏设备B可根据来自手机3的投屏数据B1,来自手机4的投屏数据B2,来自大屏设备A的视频数据,投屏数据A1和投屏数据A2,在大屏设备B的屏幕上显示地域A的会场画面,手机1投射的文档1,手机2投射的文档2,手机3投射的文档3和手机4投射的文档4。
在本实施例中,对于大屏设备来说,可将大屏设备的屏幕上用于展示视频通话画面,如上述会场画面的区域称为视频通话区域,将用于展示投屏界面,如上述文档的区域称为文档展示区域,如图23中所示。在一些实施例中,视频通话区域和文档展示区域在大屏设备屏幕上的布局可以是预先定义的。预定义的布局方式不限于图23中所示的水平布局,还可以垂直布局,以画中画的方式布局等。当大屏设备当前仅显示了视频通话画面的情况下,如果接收到手机的投屏数据,则大屏设备可根据预定义的布局方式,将屏幕划分为视频通话区域和文档展示区域两个区域,分别用于展示视频通话画面和对应的投屏界面。
如,以预定义的布局方式是水平布局,手机1投射文档1到大屏设备A为例。结合图21,大屏设备A当前展示视频通话画面,包括地域B的会场画面和地域A的会场画面。在地域A参会人员的手机1与大屏设备A连接的情况下,用户通过拖拽的方式触发跨设备投屏。大屏设备A可接收到投屏请求。如图24中的(a)所示,大屏设备A可显示请求通知2401,用于询问用户是否允许手机1的投屏请求。当用户选择了允许(如选择了允许按钮2402)后,如图24中的(b)所示,大屏设备A可根据预定义的布局方式,将屏幕垂直划分为视频通话区域和文档展示区域两个区域,并呈现手机投射的文档1加入的动画效果,如视频通话画面收起到屏幕的左侧区域,并将文档1展示到屏幕的右侧区域。之后,如图24中的(c)所示,大屏设备A可同时显示视频通话画面和文档1。
另外,在本实施例中,用户还可使用大屏设备的输入设备实现对屏幕上所呈现内容的控制。示例性的,用户可使用大屏设备的遥控器进行布局切换。以大屏设备是大屏设备A为例。如图25中的(a)所示,在大屏设备A同时呈现地域B的会场画面,地域A的会场画面及手机1投射的文档1的情况下,大屏设备A可对应每个用于呈现画面的窗口显示一个全屏按钮,如对应地域B的会场画面的窗口显示全屏按钮2501,对应地域A的会场画面的窗口显示全屏按钮2503,对应文档1的画面的窗口显示全屏按钮2502。大屏设备A在接收到用户对全屏按钮的操作后,可将对应窗口的画面全屏展示,其他窗口的画面隐藏。例如,结合图25中的(a),用户可使用大屏设备A的遥控器,切换屏幕上遥控器操作的焦点位置,如遥控器操作的焦点位置切换到全屏按钮2502,该全屏按钮2502与呈现文档1画面的窗口对应。之后,大屏设备A接收到用户对遥控器确定按钮的操作。作为对该操作的响应,如图25中的(b)所示,大屏设备A全屏显示文档1。地域B的会场画面和地域A的会场画面可隐藏。在大屏设备A全屏展示某画面,如上述文档1的画面时,大屏设备A还可显示缩小按钮,如图25中的(b)所示,大屏设备A可显示缩小按钮2504。大屏设备A在接收到用户对该缩小按钮2504的操作后,可将所有画面同时呈现在屏幕上,如图25中的(a)所示。在其他一些实施例中,大屏设备也可不显示与不同画面对应的全屏按钮。在该实施例中,大屏设备,如大屏设备A在显示多个画面时,可默认设置其中一个画面的窗口为焦点窗口。用户可使用大屏设备A的遥控器的方向键切换焦点窗口。在某个画面的窗口为焦点窗口的情况下,大屏设备A接收到用户对遥控器确认按键的操作,则大屏设备A全屏呈现该焦点窗口的画面。之后,大屏设备A接收到用户对遥控器的确认按键或返回按键的操作,则退出全屏,将所有画面同时呈现在屏幕上。以上示例是以仅展示文档展示区域中的画面为例进行说明的,用户还可通过执行上述对应操作,仅展示视频通话区域中的画面,本实施例在此不再一一赘述。
在本申请一些实施例中,与大屏设备(包括进行畅连通话的大屏设备A和大屏设备B)连接的投屏源端有多个时,对于在文档展示区域内具体展示哪个或哪些投屏源端投射的内容,可有如下方案:
方案1、在文档展示区域支持多对一共存式分享。如以与大屏设备A连接的投屏源端有两个,分别为手机1和手机2,与大屏设备B连接的投屏源端也有两个,分别为手机3和手机4,则采用该多对一共存式分享方案,大屏设备A和大屏设备B上可 同时展示手机1投射的文档1,手机2投射的文档2,手机3投射的文档3和手机4投射的文档4。例如,如图26中的(a)所示,文档1,文档2,文档3和文档4以四宫格的形式展示在文档展示区域。具体的,文档展示区域被划分为四个文档展示子区域,分别为文档展示子区域1,文档展示子区域2,文档展示子区域3和文档展示子区域4。大屏设备A和大屏设备B分别按照接收到对应投屏数据的先后顺序依次将文档展示在对应文档展示子区域。如投屏数据的先后顺序为:手机1的投屏数据,手机2的投屏数据,手机3的投屏数据,最后为手机4的投屏数据。则大屏设备A和大屏设备B依次将文档1、文档2、文档3和文档4展示在对应文档展示子区域1、文档展示子区域2、文档展示子区域3和文档展示子区域4。
方案2、在文档展示区域支持抢占式分享。即大屏设备上只有1个文档展示区域。当与大屏设备(包括进行畅连通话的大屏设备A和大屏设备B)连接的投屏源端有多个时,后一个投屏的文档可覆盖前一个投屏的文档。例如,结合图26中的(b),手机1先与大屏设备A连接,并投射文档1,即大屏设备A和大屏设备B先接收到了手机1的投屏数据,则大屏设备A和大屏设备B在其文档展示区域显示文档1。之后,手机2与大屏设备A连接,并投射文档2,即大屏设备A和大屏设备B接收到了手机2的投屏数据,则大屏设备A和大屏设备B在其文档展示区域不显示文档1,显示文档2。之后,手机3与大屏设备B连接,并投射文档3,即大屏设备B和大屏设备A接收到了手机3的投屏数据,则大屏设备A和大屏设备B在其文档展示区域不显示文档2,显示文档3。之后,手机4与大屏设备B连接,并投射文档4,即大屏设备B和大屏设备A接收到了手机4的投屏数据,则大屏设备A和大屏设备B在其文档展示区域不显示文档3,显示文档4。
方案3、也可以将上述方案1和方案2相结合。如,大屏设备最多支持四个投屏源端同时将内容呈现在屏幕上,则当投屏源端的数量小于或等于4时,可按照图26中的(a)所示的结果在大屏设备上呈现各投屏源端的内容。当投屏源端的数量大于4时,可采用抢占式分享的方式呈现其投射的内容。如,结合图26中的(a),在大屏设备当前已呈现了手机1、手机2、手机3和手机4投射的内容的情况下,如果手机5需进行投屏,则手机5投射的内容,如文档5可覆盖手机1投屏的文档1呈现在大屏设备上。之后,如果手机6需进行投屏,则手机6投射的内容,如文档6可覆盖手机2投屏的文档2呈现在大屏设备上,依次类推。
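方案3中先共存、满后抢占的槽位分配可用如下Python片段示意(假设最多同时呈现capacity路投屏,arrival_order按投屏数据到达的先后从0开始计,函数名为举例假设):

```python
def assign_slot(arrival_order, capacity=4):
    # 前 capacity 路投屏按先后顺序依次占用各文档展示子区域(方案1的共存式分享);
    # 超出 capacity 后,新投屏按抢占式依次覆盖最早占用的子区域(方案2的抢占式分享)
    return arrival_order % capacity
```

例如第5路投屏(arrival_order为4)覆盖第1个子区域,对应上述手机5的文档5覆盖文档1的情形。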
采用上述技术方案,不仅可达到上述多对一投屏方案中对应效果;而且当不同地域的两台终端进行畅连通话时,可将不同地域的其他终端的界面或界面中的部分元素呈现在两地的终端上。这两个地域的终端不仅可展示视频通话画面,同时还可以显示本地和对端投射的内容,实现跨地域办公。这样的跨地域办公方式,可以提升会议效率,节省跨地域办公的沟通成本。
图27为本申请实施例提供的一种投屏装置的组成示意图。该装置可以应用于第一终端,第一终端与多个第二终端连接。如图27所示,该装置可以包括:接收单元2701和显示单元2702。
接收单元2701,用于从多个第二终端中每个第二终端接收数据。
显示单元2702,用于根据从多个第二终端接收的数据,在第一终端上显示多个第一界面,多个第一界面与多个第二终端一一对应;其中,第一界面的内容是对应第二终端显示的第二界面内容的镜像,或第一界面的内容与对应第二终端显示的第二界面的部分内容相同。
进一步的,如图27所示,该装置还可以包括:创建单元2703。
创建单元2703,用于创建多个绘制组件,多个绘制组件与多个第二终端一一对应,绘制组件为视图或画布。
显示单元2702根据从多个第二终端接收的数据,在第一终端上显示多个第一界面,可以包括:显示单元2702根据从多个第二终端接收的数据,在多个绘制组件上分别绘制对应第二终端的第一界面,以在第一终端上显示多个第一界面。
进一步的,如图27所示,该装置还可以包括:配置单元2704和解码单元2705。
配置单元2704,用于配置多个解码参数,多个解码参数与多个第二终端一一对应。
解码单元2705,用于根据多个解码参数,对从对应第二终端接收的数据进行解码。
进一步的,如图27所示,该装置还可以包括:获取单元2706。
获取单元2706,用于获取多个第二终端的连接信息,连接信息用于第一终端与对应第二终端建立连接;其中,多个绘制组件与多个第二终端一一对应,包括:多个绘制组件与多个第二终端的连接信息一一对应;多个解码参数与多个第二终端一一对应,包括:多个解码参数与多个第二终端的连接信息一一对应。
进一步的,如图27所示,该装置还可以包括:输入单元2707。
输入单元2707,用于接收用户对第一界面的窗口的第一操作。
显示单元2702,还用于响应于第一操作,缩小、放大或关闭窗口,或切换焦点窗口。
进一步的,输入单元2707,还用于接收用户对与第二终端对应的第一界面的第二操作。
该装置还可以包括:发送单元2708,用于将第二操作的数据发送给第二终端,用于第二终端根据第二操作显示第三界面。
进一步的,接收单元2701,还用于从第二终端接收更新的数据。
显示单元2702,还用于根据更新的数据,将第二终端对应的第一界面更新为第四界面,第四界面的内容是第三界面内容的镜像,或第四界面的内容与第三界面的部分内容相同。
进一步的,第一终端还与第三终端建立连接;发送单元2708,还用于将从多个第二终端接收的数据发送给第三终端,用于第三终端显示多个第一界面。
在另一种可能的实现方式中,接收单元2701,还用于接收来自第三终端的视频数据。
显示单元2702,还用于在第一终端显示多个第一界面的同时,根据第三终端的视频数据在第一终端上显示视频通话画面。
进一步的,该装置还可以包括:采集单元,用于采集视频数据。发送单元2708,还用于发送视频数据给第三终端,用于第三终端在第三终端上显示多个第一界面的同时,显示视频通话画面。
图28为本申请实施例提供的另一种投屏装置的组成示意图。该装置可以应用于第二终端,第二终端与第一终端连接。如图28所示,该装置可以包括:显示单元2801、输入单元2802和发送单元2803。
显示单元2801,用于显示第二界面。
输入单元2802,用于接收用户操作。
发送单元2803,用于响应于用户操作,向第一终端发送第二界面的数据,用于第一终端显示与第二终端对应的第一界面,第一终端上还显示有与其他第二终端对应的第一界面;其中,第一界面的内容是对应第二终端显示的第二界面内容的镜像,或第一界面的内容与对应第二终端显示的第二界面的部分内容相同。
进一步的,用户操作可以为开始投屏的操作。该装置还可以包括:获取单元2804,用于获取第二界面的数据。
其中,在第一界面的内容是第二界面内容的镜像的情况下,第二界面的数据为第二界面的录屏数据;在第一界面的内容与第二界面的部分内容相同的情况下,第二界面的数据为第二界面中预定元素所在图层的录屏数据。
进一步的,显示单元2801,还用于显示配置界面,配置界面包括图层过滤设置选项。
输入单元2802,还用于接收用户对图层过滤设置选项的选中操作。
进一步的,输入单元2802接收用户操作,可以包括:输入单元2802接收用户对第二界面或第二界面中元素的拖拽操作。
该装置还可以包括:确定单元2805,用于确定用户的拖拽意图是跨设备拖拽;获取单元2804,还用于获取第二界面的数据。
进一步的,在接收到用户对第二界面中元素的拖拽操作的情况下,该元素可以为视频组件,悬浮窗,画中画或自由小窗,第二界面的数据为元素所在图层的录屏数据;或,该元素可以为第二界面中的用户界面UI控件,第二界面的数据为第二界面的指令流和UI控件的标识,或第二界面的数据为UI控件的绘制指令和标识。
以上实施例对多个终端到一个终端的投屏过程进行了介绍。如上述实施例的描述,在用户使用终端,如手机的过程中,可能存在多任务并行的需求。本申请其他一些实施例提供的投屏方法,作为投屏源端的终端可通过创建多路媒体流,实现该终端的一个或多个应用的内容到作为投屏目的端的其他终端上的投射显示,以满足多任务并行的需求。以下结合附图进行详细描述。其中,在本实施例中,结合图1B,以第一终端101作为投屏源端,第二终端102作为投屏目的端为例。作为一种示例,在该实施例中,第一终端101可以是手机,平板等移动设备,第二终端102可以为PC,电视等大屏设备。
其中,结合上述***架构,本申请实施例示例性说明另一种第一终端101和第二终端102的软件架构。请参考图29,为本申请实施例提供的另一种软件架构的组成示意图。
作为一种示例,第一终端101和第二终端102的软件架构均可以包括:应用层和框架层。
其中,在第一终端101作为投屏源端的情况下,如图29所示,第一终端101可以包括:服务调度与策略选择模块,视频采集模块,音频采集模块,隐私模式设置模块, 音视频编码模块,多设备连接管理协议适配模块和媒体流传输模块。第一终端101包括的各模块可以包含于第一终端101的软件架构的任意一层。如第一终端101包括的上述模块均包含于第一终端101的框架层,本实施例在此不做具体限制。第一终端101还可以包括应用程序,可以包含于上述应用层。
在第二终端102作为投屏目的端的情况下,第二终端102可以包括:视频渲染模块,音频渲染模块,视频裁剪模块,音视频解码模块,多设备连接管理协议适配模块和媒体流传输模块。第二终端102包括的各模块可以包含于第二终端102的软件架构的任意一层。如第二终端102包括的各模块均包含于第二终端102的框架层,本实施例在此不做具体限制。第二终端102还可以包括应用程序,可以包含于上述应用层。
如上述实施例的描述,第一终端101与第二终端102可通过无线或有线的方式建立连接。如,以采用无线方式建立连接为例,第一终端101和第二终端102可通过发现流程发现对方,通过连接流程建立连接,或者说组网。之后,第一终端101和第二终端102之间可提供传输通道,用于两者之间的数据传输,以实现第一终端101中一个或多个应用的内容到第二终端102的显示屏上的显示。
需要说明的是,本实施例示意的软件架构的组成并不构成对终端软件架构组成的具体限定。在另一些实施例中,终端(如上述第一终端101,上述第二终端102)可以包括比图示更多或更少的模块,或者组合某些模块,或者拆分某些模块,或者不同的模块布置。如,上述第一终端101可以不包括隐私模式设置模块。又如,上述第一终端101不包括音频采集模块,第二终端102不包括音频渲染模块。再如,上述第一终端101不包括视频采集模块,第二终端102不包括视频渲染模块和视频裁剪模块。再如,上述第二终端102不包括视频裁剪模块。
在本实施例中,结合图29所示的软件架构,作为投屏源端的第一终端101可通过创建多路媒体流,将其一个或多个应用的内容投射到作为投屏目的端的第二终端102的显示屏上显示。
例如,以作为投屏源端的第一终端101为手机,作为投屏目的端的第二终端102为电视为例。在手机将其一个应用的内容,如包括音频和界面内容投射到电视的场景下,基于图29所示的软件架构,手机的视频采集模块和音频采集模块,可根据服务调度与策略选择模块中定制的媒体策略,进行音频抽取和视频抽取,以获得音频数据和视频数据。手机的视频采集模块和音频采集模块可将采集到的音频数据和视频数据传输给手机的音视频编码模块。手机的音视频编码模块可分别对音频数据和视频数据进行编码、拆包后存储到缓存队列中。另外,手机的多设备连接管理协议适配模块可启动网络监听和连接管理。当监听到设备,如电视有连接请求时,手机可与电视建立连接,以建立手机与电视之间的连接通道。之后,手机的媒体流传输模块可从缓存队列中取出缓存的音频数据和视频数据,并通过手机与电视之间的连接通道传输给电视,如传输给电视的媒体流传输模块。电视的媒体流传输模块接收到数据后,交由电视的音视频解码模块进行组包、解码后,以获得音频数据和视频数据。之后,电视的音视频解码模块将音频数据传输给电视的音频渲染模块,由音频渲染模块输出对应的音频。电视的音视频解码模块将视频数据传输给电视的视频渲染模块,由视频渲染模块输出对应的视频,即显示对应的界面内容。其中,手机进行音视频抽取、编码、拆包及缓 存的过程可以称为创建媒体流。这样,手机通过创建一路媒体流(如称为第一路媒体流),可完成手机上一个应用的内容到电视的投射。
类似的,手机还可创建另外一路或多路媒体流(如称为第二路媒体流,第三路媒体流等),实现手机上应用的内容到电视或其他投屏目的端的投射。创建的其他路媒体流,如上述第二路媒体流,第三路媒体流等,可以是针对该应用的内容创建的媒体流,也可以是针对其他应用的内容创建的媒体流。以手机创建了另外一路媒体流(如称为第二路媒体流)为例,如该第二路媒体流和第一路媒体流是针对不同应用创建的媒体流,则手机通过创建两路媒体流,可实现不同应用的内容到投屏目的端的投射显示,这样可满足用户多任务并行的需求,提高终端的使用效率。
以下结合具体场景,对本实施例提供的投屏方法进行详细介绍。
场景1:手机A不支持多任务并行。用户在使用手机A时,想同时查看手机A的APP1的内容和APP2的内容。其中,APP1可以为本申请实施例中的第一应用。APP2可以为本申请实施例中的第二应用。如,APP1是视频应用,APP2是健身应用。在本实施例中,手机A(手机A可以为上述第一终端)可作为投屏源端,将这两个应用的内容投射到作为投屏目的端的一个或多个其他终端,以满足用户同时查看视频应用和健身应用内容的需求。以投屏目的端包括一个终端,如电视(电视可以为上述第二终端)为例。在跨设备拖拽场景中,用户可通过拖拽的方式触发手机A通过创建两路媒体流,以将手机A上视频应用的内容和健身应用的内容投射到电视上。
以下结合图12-图14,对场景1下的投屏过程进行介绍。
手机A与电视建立连接。手机A与电视建立连接的描述与上述图4所示实施例S401中对应内容的描述类似,此处不再详细赘述。
在手机A和电视连接的情况下,手机A可作为投屏源端将应用的内容投射到作为投屏目的端的电视上。
其中,手机A将应用的内容投射到电视上的具体描述与上述实施例中手机1或手机2投射内容到电视上的描述类似,此处不再一一赘述。例如,此实施例中,以用户通过拖拽的方式触发手机A开始投射应用的内容至电视为例进行说明。如,应用的内容可以包括手机A显示的该应用的界面内容。
例如,手机A当前显示视频应用的界面。用户可针对手机A显示的该视频应用的界面或界面中的元素执行拖拽操作。手机A可接收该拖拽操作。该拖拽操作可以为本申请实施例中的第一操作。可以理解的是,拖拽可以分为设备内拖拽和跨设备拖拽(或者说设备间拖拽)。设备内拖拽可以是指拖拽意图是将被拖拽的对象由该设备的一个位置拖动到该设备的另一个位置的拖拽。跨设备拖拽可以是指拖拽意图是将被拖拽的对象由该设备的一个位置拖动到另一个设备中的拖拽。在本实施例中,手机A可以在接收到用户的拖拽操作后,判断用户的拖拽意图是否是跨设备拖拽。如果确定用户的拖拽意图是跨设备拖拽,则开始视频应用的内容,如视频应用的界面内容到电视上的投射。作为一种示例,手机A可针对当前显示的视频应用的界面,进行视频抽取,以获得对应的视频数据,并将该视频数据发送给作为投屏目的端的电视。该视频数据可以用于视频应用的界面或界面中的元素在投屏目的端投射显示。该视频数据可以为本申请实施例中第一应用的界面的数据。
如上述描述,用户拖拽的对象可能是视频应用的界面,也可能是视频应用的界面中的元素,如视频元素、画中画或悬浮窗等。
在用户拖拽的对象是手机A显示的视频应用的界面时,结合图29,手机A进行视频抽取,获得对应视频数据的过程可以是:在确定用户的拖拽意图是跨设备拖拽后,手机A创建虚拟显示(VirtualDisplay)。如手机A的视频采集模块向手机A的显示管理器发送创建VirtualDisplay的请求,手机A的显示管理器完成VirtualDisplay的创建后,可将创建的VirtualDisplay返回给手机A的视频采集模块。之后,手机A可将视频应用启动到VirtualDisplay中,或者说,将视频应用的界面绘制移到该VirtualDisplay中。另外,手机A还可将VirtualDisplay绑定到手机A的视频采集模块以进行录屏,或者说进行视频抽取。这样,手机A的视频采集模块可获得对应视频数据。
在用户拖拽的对象是视频应用的界面中元素(该元素可以为本申请实施例中的第一元素)的场景下,手机A可仅将该元素投射到投屏目的端。此时,结合图29,手机A进行视频抽取,获得视频数据的过程可以是:在确定用户的拖拽意图是跨设备拖拽后,手机A创建VirtualDisplay。之后,手机A可将视频应用的界面中用户拖拽的元素的绘制移到该VirtualDisplay中。手机A还可将VirtualDisplay绑定到手机A的视频采集模块以进行录屏,或者说进行视频抽取。这样,手机A的视频采集模块可获得对应视频数据。其中,手机A将应用的界面中用户拖拽的元素的绘制移到VirtualDisplay的具体实现可以是:手机A在接收到用户对视频应用的界面中元素的拖拽操作后,可获取该元素在当前界面中的层名称(或者说图层名称,layer Name)。手机A可将该视频应用的界面逐图层合成到VirtualDisplay中。在逐图层合成的过程中,手机A可判断当前需合成的图层的层名称是否与被拖拽的元素所在图层的层名称相同。如果相同,则手机A将该图层合成到VirtualDisplay中。如果不相同,则手机A不将该图层合成到VirtualDisplay中。这样,可仅将用户拖拽的元素所在图层合成到VirtualDisplay中,以便获得的视频数据可用于实现视频应用的界面中用户拖拽的元素在投屏目的端投射显示。
需要说明的是,在用户拖拽的对象是视频应用的界面的情况下,手机A也可以仅投射该界面中的特定元素,如视频元素到投屏目的端,以保护用户的隐私。如手机A可提供设置界面供用户开启或关闭该功能,如称为隐私模式。在用户选择开启隐私模式时,手机A仅将该界面中特定元素所在图层合成到VirtualDisplay中,以获得视频数据。在用户选择关闭隐私模式时,手机A可将该界面所有的图层合成到VirtualDisplay中,以获得视频数据。
在手机A获取到视频数据后,可将视频数据进行编码后发送给作为投屏目的端的电视。如手机A的视频采集模块获得视频数据后,可将采集到的视频数据传输给手机A的音视频编码模块。手机A的音视频编码模块可对该视频数据进行编码、拆包后存储到缓存队列中。之后,手机A可将缓存队列中的视频数据发送给电视。如,手机A的媒体流传输模块可从缓存队列中取出缓存的视频数据,并通过手机A与电视之间的连接通道传输给电视,如传输给电视的媒体流传输模块。
之后,电视接收到视频数据后,可根据该视频数据,在电视上显示视频应用对应 的界面或界面中的元素。如,结合图29,电视的媒体流传输模块接收到数据后,将该数据交由电视的音视频解码模块进行组包、解码后,可以获得对应的视频数据。之后,电视的音视频解码模块将视频数据传输给电视的视频渲染模块,由视频渲染模块显示对应的界面内容。这样,可实现手机A中视频应用的界面或界面中的元素在电视上投射显示,或者说实现了视频应用从手机A到电视上的“转移”。用户可在电视上继续查看视频应用的内容。
例如,继续结合图12,以用户拖拽的对象是视频应用的界面中的元素,如视频元素为例。如图12中的(a)-图12中的(c)所示,用户对视频元素1201执行拖拽操作,如长按并向右移动手指的操作。随着用户手指的移动,手机可绘制并显示视频元素1201随用户手指移动的动画。在视频元素1201被拖拽的过程中,手机A在确定用户的拖拽意图是跨设备拖拽后,手机A创建虚拟显示,如称为虚拟显示1(该虚拟显示1可以为本申请实施例中的第一虚拟显示),并将当前界面中该视频元素1201(该视频元素1201可以为本申请实施例中的第一元素)所在图层绘制到该虚拟显示1上,以便手机A进行视频抽取,以获得视频数据,如称为视频数据a(该视频数据a可以为本申请实施例中的第一应用的界面的数据)。手机A可将该视频数据a进行编码、拆包后存储到缓存队列中。之后,手机A可将缓存队列中的视频数据a发送给电视。电视接收到视频数据a后,对视频数据a进行组包、解码后,进行渲染,以在电视上显示视频元素1201中播放的视频X。实现了视频应用从手机A到电视上的“转移”,用户可在电视上继续观看视频X。
另外,如上述实施例的描述,在一些实施例中,电视可以在用户释放对手机A上对象的拖拽后,在电视上显示被拖拽的对象。如,上述手机A将缓存队列中的视频数据a发送给电视,具体的可以为:在手机A接收到用户释放对上述视频元素1201的拖拽后,向电视发送视频数据a,用于电视显示视频X。在其他一些实施例中,为了能够提高用户跨设备拖拽时的跟手体验,给用户该对象从手机A拖拽到电视的视觉效果,在对象被拖拽的过程中,如果该对象的部分区域溢出显示屏,则可在手机A和电视上同时显示该对象。
例如,继续结合图12,手机A在确定用户的拖拽意图是跨设备拖拽后,手机A可进行视频抽取以获得视频数据a,并将该视频数据a进行编码、拆包后发送给电视。手机A还可将视频元素1201的矩形信息,及在拖拽过程中该视频元素1201某个角(如左上角)的坐标信息发送给电视。电视根据接收到的视频元素1201的矩形信息,在拖拽过程中视频元素1201左上角的坐标信息和手机A的分辨率,可在确定视频元素1201存在区域溢出手机A的显示屏后,根据视频元素1201的矩形信息,拖拽过程中视频元素1201左上角的坐标信息及手机A的分辨率确定能够在电视的显示屏上对应显示的该视频元素1201的区域的信息。之后,电视对视频数据a进行组包、解码后,根据确定的区域的信息和解码组包后的视频数据a进行界面渲染,可实现视频元素1201中播放的视频X在电视上的绘制。如图13中的(a)所示,在电视的显示屏上显示界面1,该界面1中的内容与手机A的视频元素1201中所承载视频X溢出手机显示屏的内容相同。在用户拖拽视频元素1201的过程中,手机A可实时获取视频数据a和拖拽过程中视频元素1201左上角的坐标信息,并发送给电视。这样,电视可根据接收到的 数据实时更新界面1。在用户释放拖拽后,电视可根据实时接收到的视频数据a,在电视的显示屏上全屏显示界面1。如图13中的(b)所示,此时,界面1中的内容与视频元素1201中所承载视频X的全部内容相同。其中,界面1可以为本申请实施例中的第一界面。
其中,在本实施例中,上述对应用的内容进行的抽取、编码、拆包及缓存的过程可以称为创建媒体流。即,结合上述示例,在视频应用的内容包括界面内容的情况下,手机A通过创建虚拟显示(如虚拟显示1),并利用该虚拟显示1可实现一路媒体流(如称为第一路媒体流)的创建。之后,手机A通过将创建的该第一路媒体流对应的数据,如上述视频数据a,或称为第一路视频数据发送给电视,可实现视频应用的界面内容到电视的投射。
类似的,手机A可通过创建另外一路或多路媒体流,实现手机A上其他应用的内容到电视的投射。如在上述场景1中,用户想同时使用上述视频应用和健身应用,手机A可针对健身应用创建另外一路媒体流,如称为第二路媒体流,以实现健身应用的内容,如界面内容到电视的投射。
针对健身应用创建媒体流以实现健身应用的内容到电视端投射的过程与上述针对视频应用创建媒体流实现视频应用的内容到电视端投射的过程类似,此处不再详细赘述。此处结合示例进行简单说明。例如,在将手机A中视频应用的内容投射到电视上后,如图14所示,用户打开手机A的健身应用(该打开健身应用的操作可以为本申请实施例中的第二操作)查看健身视频。手机A接收到用户对承载该健身视频的视频元素(该视频元素可以为本申请实施例中的第二元素)的拖拽操作(该拖拽操作可以为本申请实施例中的第三操作)。手机A响应该拖拽操作,使该视频元素在手机A显示屏上跟随用户手指的移动而移动,给用户以视频元素被用户手指拖动的视觉效果。在视频元素被拖拽的过程中,手机A可判断用户的拖拽意图是否是跨设备拖拽。在手机A确定用户的拖拽意图是跨设备拖拽后,手机A可创建另一虚拟显示,如称为虚拟显示2(该虚拟显示2可以为本申请实施例中的第二虚拟显示),并将当前界面中该视频元素所在图层绘制到该虚拟显示2上,以便手机A进行视频抽取,以获得视频数据,如称为视频数据b(该视频数据b可以为本申请实施例中的第二应用的界面的数据)。手机A可将该视频数据b进行编码、拆包后存储到缓存队列中。之后,手机A可将缓存队列中的视频数据b发送给电视。另外,手机A还可将该视频元素的矩形信息,及在拖拽过程中该视频元素某个角(如左上角)的坐标信息发送给电视。电视可接收视频数据b,视频元素的矩形信息,及在拖拽过程中视频元素左上角的坐标信息。电视根据接收到的视频元素的矩形信息,在拖拽过程中视频元素左上角的坐标信息和手机A的分辨率,在确定视频元素存在区域溢出手机A的显示屏后,电视可根据该视频元素的矩形信息,拖拽过程中视频元素左上角的坐标信息及手机A的分辨率确定能够在电视的显示屏上对应显示的该视频元素的区域的信息。电视对视频数据b进行组包、解码后,根据确定的区域的信息和解码组包后的视频数据b进行界面渲染,可实现界面2的绘制,该界面2中的内容与手机A中健身应用的健身视频溢出手机显示屏的内容相同。这样,电视可同时将手机A的视频应用的内容和健身应用的内容显示在电视显示屏上。如电视当前全屏显示有视频应用的内容(如上述界面1)。
在一些实施例中,如上述实施例中的描述,如图13中的(c)所示,电视可在电视的显示屏上以小窗口(或者说画中画,悬浮窗)的形式显示上述界面2。在用户拖拽视频元素的过程中,手机A可实时获取视频数据b和拖拽过程中视频元素左上角的坐标信息,并发送给电视。这样,电视可根据接收到的数据实时更新界面2。在用户释放拖拽后,如图13中的(d)所示,电视可根据实时接收到的视频数据b,在电视的显示屏上继续以小窗口的形式显示界面2,此时,界面2中的内容与健身应用的健身视频的全部内容相同。由上述描述可以得到的是,手机A通过创建虚拟显示2,并利用该虚拟显示2实现另外一路媒体流,如称为第二路媒体流的创建。手机A通过将创建的第二路媒体流对应的数据,如上述视频数据b,或称为第二路视频数据发送给电视,实现了健身应用的内容到电视的投射。其中,界面2可以为本申请实施例中的第二界面。包括了界面2的内容和界面1的内容的界面可以为本申请实施例中的第三界面。
如此,手机A的视频应用和健身应用的内容,如界面内容均投射到了作为投屏目的端的电视上,满足了用户同时查看视频应用和健身应用内容的需求。
在其他一些实施例中,上述应用的内容还可以包括音频。如用户使用手机A的应用(如视频应用)观看视频,或使用手机A的音乐应用听音乐时,在用户触发手机A开始投射应用的内容到投屏目的端时,手机A不仅可以将当前显示的应用的界面内容投射到投屏目的端,还可将音频也投射到投屏目的端。在这样的情况下,手机A不仅需向电视发送上述视频数据(如视频数据a或视频数据b),还需向电视发送音频数据。
其中,视频数据用于电视在电视的显示屏上显示对应界面,音频数据用于电视播放对应声音。如上述实施例的描述,音频数据可通过创建音频录音(AudioRecord)对象来获得。也就是说,在用户触发手机A开始投射应用的内容时,在应用的内容包括界面内容和音频的情况下,手机A可通过创建虚拟显示和AudioRecord对象,并利用虚拟显示和AudioRecord对象实现一路媒体流的创建,之后,通过创建的该路媒体流将对应的视频数据和音频数据发送给电视,以实现应用的内容,包括界面内容和音频到电视的投射。其中,在本实施例中,手机A可预先创建多个AudioRecord对象,用于后续进行不同路媒体流的音频抽取。如,可用于后续进行不同应用的音频抽取,即可基于创建的AudioRecord对象将需要投射的应用的音频数据重定向到对应媒体流上,其他音频数据仍从投屏源端输出。
如,继续结合图12所示的示例,视频应用的内容包括界面内容和音频。手机A预先创建两个AudioRecord对象,并创建缓存。用户在触发视频应用的内容开始投射后,手机A可通过创建第一路媒体流实现视频应用的内容到电视的投射。其中,视频应用的界面内容到电视的投射过程如上述实施例的描述,此处不再赘述。另外,手机A还可调用AudioRecord对象进行音频抽取以获得音频数据,如称为音频数据a,用于实现视频应用的音频到电视的投射。作为一种示例,结合图29,具体获取音频数据a的过程可以包括:手机A,如手机A的音频采集模块可调用预先创建的两个AudioRecord对象中的一个AudioRecord对象,如称为AudioRecord对象1(AudioRecord对象1可以为本申请实施例中的第一AudioRecord对象)。在AudioRecord对象1被 调用后,手机A的音频采集模块可对视频应用播放的视频中的音频进行录制,以获得音频数据,如称为音频数据a(该音频数据a可以为本申请实施例中第一应用的音频数据)。手机A的音频采集模块获得音频数据a后,可将采集到的音频数据a传输给手机A的音视频编码模块。手机A的音视频编码模块可对该音频数据a进行编码、拆包后存储到缓存中。
之后,手机A的媒体流传输模块可从buffer中获得音频数据a,通过手机A与电视之间的连接通道发送给电视。电视接收到音频数据a后,可根据该音频数据a,由电视输出对应的音频。如结合图29,电视的媒体流传输模块接收到数据后,将该数据交由电视的音视频解码模块进行组包、解码后,可以获得对应的音频数据a。之后,电视的音视频解码模块将音频数据a传输给电视的音频渲染模块,由音频渲染模块输出对应音频。这样,实现了手机A中视频应用的音频到电视上投射。至此,手机A的其他音频仍通过手机A输出。
类似的,在用户触发健身应用的内容开始投射后,如果健身应用的内容包括界面内容和音频,则手机A可通过创建第二路媒体流,实现健身应用的内容到电视的投射。其中,健身应用的界面内容到电视的投射过程如上述实施例的描述,此处不再赘述。另外,手机A还可调用手机A预先创建的另外一个AudioRecord对象,如称为AudioRecord对象2(AudioRecord对象2可以为本申请实施例中的第二AudioRecord对象),实现健身应用的音频到电视的投射,具体的实现过程与视频应用的音频到电视的投射类似,此处不再赘述。至此,手机A的视频应用和健身应用的音频通过电视输出,其他音频通过手机A输出。在一些实施例中,当存在两路音频需要通过电视输出时,电视可选择其中一路音频输出。如,以电视以大窗口(即全屏显示的窗口)和小窗口形式显示不同应用投射来的界面内容为例,电视可配置不输出小窗口的音频,输出大窗口的音频。如,结合图13中的(d)所示的示例,电视播放视频X的声音,不播放健身视频的声音。
在一些实施例中,可以配置媒体策略,用于创建上述媒体流。其中,媒体策略可以是预先配置的,也可以提供配置界面(如,该配置界面可以为图8所示的界面)供用户设置。不同路媒体流对应的媒体策略可以相同,也可以不同。一路媒体流对应的媒体策略可以包括:是否分布音频(或说投射音频),是否分布视频(或者说投射界面内容),分布视频时对应虚拟显示的参数(如包括:名称,宽度,高度,码率,编码格式,每英寸点数(dots per inch,DPI)等),分布音频时采集的音频的规格等。这样,在需要进行应用内容的投射时,手机A可根据对应的媒体策略确定是否需要投射音频,是否需要投射视频,并根据对应的参数采集指定规格的视频数据,指定规格的音频数据。
另外,在电视显示投屏源端投射的多个界面的情况下,电视可默认设置其中一个界面的窗口为焦点窗口,如电视默认小窗口为焦点窗口。继续结合图13,如图13中的(d)所示,电视显示提示标识1301,用于向用户提示小窗口,即界面2的窗口为焦点窗口。用户可使用电视的遥控器选择切换焦点窗口,还可进行大小窗布局的切换,还可关闭大小窗口。其中用于全屏显示界面1的窗口可以称为大窗口。
如,电视接收到用户对遥控器的左按键或右按键的操作,则切换焦点窗口。当焦 点窗口为小窗口时,如果电视接收到用户对遥控器的确认按键的操作,如图13中的(e)所示,电视可将小窗口,即界面2全屏显示,将大窗口,即界面1以小窗口的形式显示。如果电视接收到用户对遥控器的返回键的操作,则电视可停止显示小窗口,或者说关闭小窗口,电视还可通知手机A停止投射该小窗口对应的应用的内容。另外手机A可将该小窗口对应的应用切换到主屏继续运行。如果用户继续接收到用户对遥控器的返回键的操作,电视可以停止显示大窗口,电视还可通知手机A停止投射该大窗口对应的应用的内容。另外,手机A可停止小窗口对应的应用在主屏的运行,开始大窗口对应应用在主屏运行。
需要说明的是,以上实施例以采用大窗口和小窗口形式在投屏目的端显示不同媒体流对应的投射内容仅是一种举例。在其他一些实施例中,投屏目的端也可以采用其他排列布局,如垂直排列,水平排列的方式来显示不同媒体流对应的窗口,本申请实施例在此对投射目的端显示不同媒体流对应窗口的具体实现并不做具体限制。投屏目的端还可根据投射来的媒体流的路数,动态调整投屏目的端显示的与各路媒体流对应窗口的大小和排列布局。其中,投射来的媒体流的路数可以动态的增加或减少。当投射来的媒体流的路数增加或减少后,投屏目的端可根据当前投射来的媒体流的路数,调整各媒体流对应窗口的大小和排列布局。
上述场景1是以投屏源端将其多个应用的内容投射到同一个投屏目的端为例进行说明的。在其他一些实施例中,投屏源端也可以将其多个应用投射到不同的投屏目的端。如,以下结合场景2进行说明。
场景2:手机B不支持多任务并行。用户在使用手机B时,想同时查看手机B的APP3的内容和APP4的内容。如,APP3是健身应用,APP4是教育应用。其中,APP3可以为本申请实施例中的第一应用,APP4可以为本申请实施例中的第二应用。那么,手机B(手机B可以为上述第一终端)可作为投屏源端,将这两个应用的内容投射到作为投屏目的端的一个或多个其他终端,以满足用户同时查看健身应用和教育应用内容的需求。
以投屏目的端包括两个终端,如电视和平板(电视可以为本申请实施例中的第二终端,平板可以为本申请实施例中的第三终端)为例。手机B可通过创建两路媒体流,以将健身应用的内容投射到电视上,教育应用的内容投射到平板上。其中,具体实现与上述场景1中对应描述类似,此处不再详细赘述,区别在于,手机B创建的两路媒体流,其中一路媒体流对应的数据传输给了电视,用于实现健身应用的内容在电视上的投射,另一路媒体流对应的数据传输给了平板,用于实现教育应用的内容在平板上的投射。
结合图30,继续以用户通过拖拽的方式触发手机B开始投射应用的内容为例进行说明。即在跨设备拖拽场景中,用户可通过拖拽的方式触发手机B通过创建两路媒体流,以将手机B上健身应用的内容投射到电视上,将教育应用的内容投射到平板上。如,健身应用的内容和教育应用的内容均包括界面内容和音频。
例如,手机B与平板和电视均建立了连接。手机B预先创建了两个AudioRecord对象。用户打开手机B的健身应用查看健身视频。手机B接收到用户对承载该健身视频的视频元素(该视频元素可以为本申请实施例中的第一元素)的拖拽操作。在视频 元素被拖拽的过程中,手机B可判断用户的拖拽意图是否是跨设备拖拽。在确定用户的拖拽意图是跨设备拖拽后,手机B可创建虚拟显示,如虚拟显示A(该虚拟显示A可以为本申请实施例中的第一虚拟显示),并调用预先创建的两个AudioRecord对象中的一个,如AudioRecord对象A(该AudioRecord对象A可以为本申请实施例中的第一AudioRecord对象)。利用虚拟显示A和AudioRecord对象A手机B可实现一路媒体流的创建,以获得相应的视频数据和音频数据,如分别称为视频数据a'和音频数据a'。之后,手机B可将视频数据a'和音频数据a'发送给与手机B连接的平板或电视以实现健身应用内容到投屏目的端的投射。
作为一种示例,手机B可根据用户的选择操作,将平板和电视中的一个终端作为投屏目的端。如,在手机B确定用户的拖拽意图是跨设备拖拽后,手机B可显示设备列表,该设备列表中包括平板的设备标识和电视的设备标识。用户可在设备列表中选择设备标识,以便手机B确定此次投射的投屏目的端。如手机B接收到用户对电视的设备标识的选择操作,表明用户想将健身应用的内容投射到电视上,则根据用户的该选择操作,手机B可将上述视频数据a'和音频数据a'发送给电视。
作为又一种示例,手机B可根据用户执行的拖拽操作的拖拽方向和与手机B连接的终端相对于手机B的方向,确定此次投屏的投屏目的端。作为一种示例,在手机B确定用户的拖拽意图是跨设备拖拽后,手机B可获取与手机B连接的各终端相对于手机B的方向,并将拖拽方向上的终端确定为此次投屏的投屏目的端。如,平板位于指向手机上边缘的方向上,电视位于指向手机右边缘的方向上,用户执行拖拽操作的拖拽方向为向右拖拽。手机B在确定用户的拖拽意图是跨设备拖拽后,可获取与手机B连接的电视和平板相对于手机B的方向。根据与手机B连接的电视和平板相对于手机B的方向,以及拖拽方向,手机B可确定电视位于拖拽方向上,表明用户想将健身应用的内容投射到电视,则手机B可将上述视频数据a'和音频数据a'发送给电视。其中,其他终端相对手机B的方向,手机B可利用蓝牙、超宽带(Ultra-wideband,UWB)、超声波等定位技术获得。
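根据拖拽方向与各连接终端相对方位确定投屏目的端的判断,可用如下Python片段示意(以方向角度量,向右为0度;按角度差最小匹配仅为一种简化假设,设备名与角度值均为举例):

```python
def pick_target(drag_angle, device_bearings):
    # device_bearings 为 {设备名: 该设备相对源端的方向角(度)} 字典,
    # 方向角可由蓝牙、UWB、超声波等定位技术获得;
    # 返回方向与拖拽方向最接近的设备,作为此次投屏的目的端
    def angle_diff(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    return min(device_bearings,
               key=lambda name: angle_diff(drag_angle, device_bearings[name]))
```

如电视位于指向右边缘的方向(0度)、平板位于指向上边缘的方向(90度)时,向右拖拽即选中电视。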
电视接收到来自手机B的视频数据a'和音频数据a'后,可进行组包、解码后,进行音视频渲染,以在电视上显示健身视频,如图30中的3001所示,并播放对应的音频,实现了手机B的健身应用的内容到电视的投射。
在将手机B中健身应用的内容投射到电视上后,用户打开了手机B的教育应用查看教育视频。手机B接收到用户对承载该教育视频的视频元素的拖拽操作。在视频元素被拖拽的过程中,手机B可在确定用户的拖拽意图是跨设备拖拽后,创建虚拟显示,如虚拟显示B(该虚拟显示B可以为本申请实施例中的第二虚拟显示),并调用预先创建的两个AudioRecord对象中的另一个,如AudioRecord对象B(该AudioRecord对象B可以为本申请实施例中的第二AudioRecord对象)。利用虚拟显示B和AudioRecord对象B手机B可实现另一路媒体流的创建,以获得相应的视频数据和音频数据,如分别称为视频数据b'和音频数据b'。之后,手机B可将视频数据b'和音频数据b'发送给与手机B连接的平板或电视以实现教育应用内容到投屏目的端的投射。
类似于投射健身应用内容的描述,手机B可根据用户的选择操作,或根据用户执行的拖拽操作的拖拽方向和与手机B连接的终端相对于手机B的方向,确定此次投屏的投屏目的端。如,手机B接收到用户选择平板的操作,或确定平板位于拖拽方向上,表明用户想将教育应用的内容投射到平板,则手机B可将上述视频数据b'和音频数据b'发送给平板。
平板接收到来自手机B的视频数据b'和音频数据b'后,可进行组包、解码后,进行音视频渲染,以在平板上显示教育视频,如图30中的3002所示,并播放对应的音频,实现了手机B的教育应用的内容到平板的投射。
如此,手机B的健身应用和教育应用的内容,如包括界面内容和音频分别投射到了作为投屏目的端的电视和平板上,满足了用户同时查看健身应用和教育应用内容的需求。
在本实施例中,可以将场景1中投屏源端创建多路媒体流发送到同一个投屏目的端实现应用内容的投射的模式称为汇聚模式,将场景2中投屏源端创建多路媒体流发送到多个不同投屏目的端实现应用内容的投射的模式称为分发模式。另外,在本实施例中,也支持投屏源端将其创建的一路媒体流投射到多个投屏目的端,可将这种模式称为广播模式。
在本实施例中,投屏源端可同时支持上述三种视频分配模式,即投屏源端具备实现上述三种视频分配模式的能力。在一些实施例中,这三种视频分配模式在投屏源端是可配置的,如可以提供设置界面供用户设置,也可以是系统默认的配置。配置的视频分配模式也可以理解为是上述媒体策略。也就是说,投屏源端可从媒体策略中获得视频分配模式的相关配置。
如,投屏源端具备实现上述三种视频分配模式的能力,用户设置投屏源端的视频分配模式为上述汇聚模式,则在投屏源端创建了多路媒体流后,根据设置,投屏源端可将这多路媒体流投射到同一投屏目的端,以满足用户的多任务需求。以投屏源端创建了两路媒体流为例。结合上述图29,如图31所示,投屏源端根据服务调度与策略选择模块中定制的媒体策略,可获得视频分配模式为汇聚模式。投屏源端根据用户的触发,可进行第一路音视频数据的采集,第二路音视频数据的采集。投屏源端分别对第一路音视频数据和第二路音视频数据进行音视频编码,以实现两路媒体流的创建。之后,投屏源端根据配置的视频分配模式,即汇聚模式,经多设备连接管理协议适配后可传输给同一个投屏目的端。其中,源端设备可为不同路音视频数据分配不同的标识(如该标识可以是对应虚拟显示的标识,如虚拟显示的名称,或者该标识可以是源端设备为不同路媒体流分配的索引),以便投屏目的端进行区分。投屏目的端接收到音视频数据后,可根据接收到音视频数据标识的不同,区分出第一路音视频数据和第二路音视频数据。之后,对第一路音视频数据和第二路音视频数据分别进行音视频解码后,分别进行音视频渲染,以实现在投屏目的端这两路媒体流对应应用内容的投射。
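汇聚模式下,投屏目的端依据每路数据携带的标识区分多路媒体流的过程可用如下Python片段示意(数据包结构为说明而假设,函数名为举例):

```python
def demux_streams(packets):
    # packets 为 (媒体流标识, 音视频数据) 序列;媒体流标识可以是对应虚拟显示的
    # 名称,或源端为各路媒体流分配的索引。按标识分组后,各路数据可分别解码与渲染
    streams = {}
    for stream_id, payload in packets:
        streams.setdefault(stream_id, []).append(payload)
    return streams
```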
又如,投屏源端具备实现上述三种视频分配模式的能力,系统默认设置投屏源端的视频分配模式为上述分发模式,则在投屏源端创建了多路媒体流后,根据设置,投屏源端可将这多路媒体流投射到多个不同的投屏目的端,以满足用户的多任务需求。继续以投屏源端创建了两路媒体流为例。结合上述图29,如图32所示,投屏源端根据服务调度与策略选择模块中定制的媒体策略,可获得视频分配模式为分发模式。投屏源端根据用户的触发,可进行第一路音视频数据的采集,第二路音视频数据的采集。投屏源端分别对第一路音视频数据和第二路音视频数据进行音视频编码,以实现两路媒体流的创建。之后,投屏源端根据配置的视频分配模式,即分发模式,经多设备连接管理协议适配后可传输给不同的投屏目的端,如将第一路音视频数据传输给投屏目的端1,将第二路音视频数据传输给投屏目的端2。投屏目的端1和投屏目的端2接收到对应音视频数据后,分别对接收到的音视频数据进行音视频解码后,进行音视频渲染,以实现在投屏目的端1和投屏目的端2上这两路媒体流对应应用内容的投射。
又如,投屏源端具备实现上述三种视频分配模式的能力,系统默认设置投屏源端的视频分配模式为上述广播模式,则投屏源端可创建一路媒体流,根据设置将这一路媒体流投射到多个不同的投屏目的端。结合上述图29,如图33所示,投屏源端根据服务调度与策略选择模块中定制的媒体策略,可获得视频分配模式为广播模式。投屏源端根据用户的触发,可进行单路音视频数据的采集。投屏源端对该路音视频数据进行音视频编码,以实现一路媒体流的创建。之后,投屏源端根据配置的视频分配模式,即广播模式,经多设备连接管理协议适配后可传输给不同的投屏目的端,如将该路音视频数据传输给投屏目的端1和投屏目的端2。投屏目的端1和投屏目的端2接收到该路音视频数据后,分别对接收到的音视频数据进行音视频解码后,进行音视频渲染,以实现在投屏目的端1和投屏目的端2上这一路媒体流对应应用内容的投射。
在其他一些实施例中,投屏源端具备实现上述三种视频分配模式的能力的情况下,投屏源端也可以根据与其连接的设备的数量确定视频分配模式。如,与投屏源端连接的设备存在一个,则投屏源端可以确定视频分配模式为汇聚模式。对于创建的多路媒体流,投屏源端可将这多路媒体流投射到该设备,以实现投屏源端中不同应用内容在同一个投屏目的端的投射,如上述场景1。又如,与投屏源端连接的设备存在多个,则投屏源端可以确定视频分配模式为分发模式。对于创建的多路媒体流,投屏源端可将这多路媒体流投射到不同的设备,以实现投屏源端中不同应用内容在不同投屏目的端的投射,如上述场景2。在另外一些实施例中,在与投屏源端连接的设备存在多个的情况下,在跨设备拖拽场景下,投屏源端也可以根据用户针对不同应用执行拖拽操作时,拖拽方向的不同确定视频分配模式。如,用户针对不同应用执行拖拽操作时的拖拽方向不同,则投屏源端可确定视频分配模式为分发模式,因此,对于创建的多路媒体流,投屏源端可将这多路媒体流投射到不同的设备。又如,用户针对不同应用执行拖拽操作时的拖拽方向相同,则投屏源端可确定视频分配模式为汇聚模式,因此,对于创建的多路媒体流,投屏源端可将这多路媒体流投射到同一设备。
结合上述场景1和场景2的描述,可以看到的是,作为投屏源端的终端可通过创建多路媒体流,实现该终端的多个应用的内容到一个或多个投屏目的端的投射,满足了多任务并行的需求,这样可提高终端的使用效率,提升用户的使用体验。另外,本实施例提供的方案,通过创建虚拟显示,并基于虚拟显示对投屏源端的内容进行屏幕录制、编码后存放到本地缓存中,以实现投屏源端内容在投屏目的端的显示,支持镜像投屏和异源投屏。三方应用可通过集成对应的投屏能力(如提供dll库、aar包),调用多媒体分布协议(Distributed Multimedia Protocol,DMP)的API接口便可实现投屏,这样可实现在线视频的投射。镜像投屏是指投屏目的端渲染的音视频跟投屏源端完全一样,投屏源端上打开图片、音频或视频,目的端也显示图片、播放音频或视频;异源投屏是指可将某个应用或窗口投射到投屏目的端,可达到既分享又保护隐私的目的。且采用本实施例的方法,通过将多路媒体流发送给同一投屏目的端设备,可在该设备上实现多个投送内容的显示。另外,在本实施例中,可采用用户数据报协议(user datagram protocol,UDP)协议和前向纠错编码(forward error correction,FEC)实现源端媒体流到投屏目的端的传输,可有效缓解丢包和避免拥塞。可使用废弃参考帧技术(invalidate reference frame,IFR),保障丢包后的快速恢复,避免花屏和长时间卡顿的现象出现。
本申请实施例还提供一种投屏装置,该装置可以应用于电子设备,如上述实施例中的第一终端或者第二终端。该装置可以包括:处理器;用于存储处理器可执行指令的存储器;其中,处理器被配置为执行指令时使得该投屏装置实现上述方法实施例中第一终端(如电视机)或第二终端(如手机)执行的各个功能或者步骤。
本申请实施例提供一种电子设备(如上述第一终端或第二终端),该电子设备包括显示屏,一个或多个处理器和存储器;显示屏,处理器和存储器耦合;存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,当计算机指令被电子设备执行时,使得该电子设备实现上述方法实施例中第一终端(如电视机,手机A,手机B)或第二终端(如手机,电视,平板)执行的各个功能或者步骤。当然,该电子设备包括但不限于上述显示屏、存储器和一个或多个处理器。例如,该电子设备的结构可以参考图2所示的手机的结构。
本申请实施例还提供一种芯片系统,该芯片系统可以应用于前述实施例中的终端(如第一终端或第二终端)。如图34所示,该芯片系统包括至少一个处理器3401和至少一个接口电路3402。该处理器3401可以是上述终端中的处理器。处理器3401和接口电路3402可通过线路互联。该处理器3401可以通过接口电路3402从上述终端的存储器接收并执行计算机指令。当计算机指令被处理器3401执行时,可使得终端(如上述第一终端或第二终端)执行上述实施例中电视机或手机执行的各个步骤。当然,该芯片系统还可以包含其他分立器件,本申请实施例对此不作具体限定。
本申请实施例还提供一种计算机可读存储介质,用于存储上述终端(如第一终端或第二终端)运行的计算机指令。
本申请实施例还提供一种计算机程序产品,包括上述终端(如第一终端或第二终端)运行的计算机指令。
通过以上实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些 接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (24)

  1. A screen projection method, applied to a first terminal, wherein the first terminal is connected to a plurality of second terminals, and the method comprises:
    receiving, by the first terminal, data from each of the plurality of second terminals; and
    displaying, by the first terminal according to the data received from the plurality of second terminals, a plurality of first interfaces on the first terminal, wherein the plurality of first interfaces are in one-to-one correspondence with the plurality of second terminals;
    wherein content of each first interface is a mirror of content of a second interface displayed by the corresponding second terminal, or the content of the first interface is the same as part of the content of the second interface displayed by the corresponding second terminal.
  2. The method according to claim 1, wherein the method further comprises:
    creating, by the first terminal, a plurality of drawing components in one-to-one correspondence with the plurality of second terminals, wherein each drawing component is a view or a canvas;
    wherein the displaying, by the first terminal according to the data received from the plurality of second terminals, a plurality of first interfaces on the first terminal comprises:
    drawing, by the first terminal according to the data received from the plurality of second terminals, the first interface of the corresponding second terminal on each of the plurality of drawing components, so as to display the plurality of first interfaces on the first terminal.
  3. The method according to claim 2, wherein before the displaying, by the first terminal according to the data received from the plurality of second terminals, a plurality of first interfaces on the first terminal, the method further comprises:
    configuring, by the first terminal, a plurality of decoding parameters in one-to-one correspondence with the plurality of second terminals; and
    decoding, by the first terminal according to each of the plurality of decoding parameters, the data received from the corresponding second terminal.
  4. The method according to claim 3, wherein before the receiving, by the first terminal, data from each of the plurality of second terminals, the method further comprises:
    obtaining, by the first terminal, connection information of the plurality of second terminals, wherein the connection information is used by the first terminal to establish a connection with the corresponding second terminal;
    wherein the one-to-one correspondence between the plurality of drawing components and the plurality of second terminals comprises: a one-to-one correspondence between the plurality of drawing components and the connection information of the plurality of second terminals; and
    the one-to-one correspondence between the plurality of decoding parameters and the plurality of second terminals comprises: a one-to-one correspondence between the plurality of decoding parameters and the connection information of the plurality of second terminals.
  5. The method according to any one of claims 1 to 4, wherein after the displaying, by the first terminal according to the data received from the plurality of second terminals, a plurality of first interfaces on the first terminal, the method further comprises:
    receiving, by the first terminal, a first operation of a user on a window of one of the first interfaces; and
    in response to the first operation, reducing, enlarging, or closing the window, or switching a focus window, by the first terminal.
  6. The method according to any one of claims 1 to 5, wherein after the displaying, by the first terminal according to the data received from the plurality of second terminals, a plurality of first interfaces on the first terminal, the method further comprises:
    receiving, by the first terminal, a second operation of the user on the first interface corresponding to a second terminal; and
    sending, by the first terminal, data of the second operation to the second terminal, for the second terminal to display a third interface according to the second operation.
  7. The method according to claim 6, wherein after the sending, by the first terminal, the data of the second operation to the second terminal, the method further comprises:
    receiving, by the first terminal, updated data from the second terminal; and
    updating, by the first terminal according to the updated data, the first interface corresponding to the second terminal to a fourth interface, wherein content of the fourth interface is a mirror of content of the third interface, or the content of the fourth interface is the same as part of the content of the third interface.
  8. The method according to any one of claims 1 to 7, wherein the first terminal further establishes a connection with a third terminal, and the method further comprises:
    sending, by the first terminal, the data received from the plurality of second terminals to the third terminal, for the third terminal to display the plurality of first interfaces.
  9. The method according to claim 8, wherein the method further comprises:
    receiving, by the first terminal, video data from the third terminal; and
    displaying, by the first terminal according to the video data of the third terminal, a video call picture on the first terminal while the first terminal displays the plurality of first interfaces.
  10. The method according to claim 8 or 9, wherein the method further comprises:
    collecting, by the first terminal, video data and sending the video data to the third terminal, for the third terminal to display a video call picture while the third terminal displays the plurality of first interfaces.
  11. A screen projection method, applied to a second terminal, wherein the second terminal is connected to a first terminal, and the method comprises:
    displaying, by the second terminal, a second interface;
    receiving, by the second terminal, a user operation; and
    in response to the user operation, sending, by the second terminal, data of the second interface to the first terminal, for the first terminal to display a first interface corresponding to the second terminal, wherein first interfaces corresponding to other second terminals are also displayed on the first terminal; and content of each first interface is a mirror of content of the second interface displayed by the corresponding second terminal, or the content of the first interface is the same as part of the content of the second interface displayed by the corresponding second terminal.
  12. The method according to claim 11, wherein the user operation is an operation of starting screen projection;
    before the sending, by the second terminal, the data of the second interface to the first terminal, the method further comprises:
    obtaining, by the second terminal, the data of the second interface;
    wherein, when the content of the first interface is a mirror of the content of the second interface, the data of the second interface is screen-recording data of the second interface; and when the content of the first interface is the same as part of the content of the second interface, the data of the second interface is screen-recording data of the layer on which a predetermined element of the second interface is located.
  13. The method according to claim 12, wherein, when the content of the first interface is the same as part of the content of the second interface, before the obtaining, by the second terminal, the data of the second interface, the method further comprises:
    displaying, by the second terminal, a configuration interface, wherein the configuration interface comprises a layer filtering setting option; and
    receiving, by the second terminal, a selection operation of the user on the layer filtering setting option.
  14. The method according to claim 11, wherein the receiving, by the second terminal, a user operation comprises:
    receiving, by the second terminal, a drag operation of the user on the second interface or on an element of the second interface;
    and before the sending, by the second terminal, the data of the second interface to the first terminal, the method further comprises:
    determining, by the second terminal, that the drag intent of the user is a cross-device drag; and
    obtaining, by the second terminal, the data of the second interface.
  15. The method according to claim 14, wherein, when a drag operation of the user on an element of the second interface is received:
    the element is a video component, a floating window, a picture-in-picture, or a freeform small window, and the data of the second interface is screen-recording data of the layer on which the element is located; or
    the element is a user interface (UI) control of the second interface, and the data of the second interface is an instruction stream of the second interface and an identifier of the UI control, or the data of the second interface is a drawing instruction and an identifier of the UI control.
  16. A screen projection method, applied to a first terminal, wherein the method comprises:
    displaying, by the first terminal, an interface of a first application;
    receiving, by the first terminal, a first operation;
    in response to the first operation, sending, by the first terminal, data of the interface of the first application to a second terminal, for the second terminal to display a first interface, wherein content of the first interface is a mirror of content of the interface of the first application, or the content of the first interface is the same as part of the content of the interface of the first application;
    receiving, by the first terminal, a second operation;
    in response to the second operation, displaying, by the first terminal, an interface of a second application;
    receiving, by the first terminal, a third operation; and
    while the first terminal is projecting the interface of the first application to the second terminal, in response to the third operation, sending, by the first terminal, data of the interface of the second application to a third terminal, for the third terminal to display a second interface, wherein content of the second interface is a mirror of content of the interface of the second application, or the content of the second interface is the same as part of the content of the interface of the second application.
  17. The method according to claim 16, wherein the method further comprises:
    creating, by the first terminal, a first virtual display;
    drawing, by the first terminal, the interface of the first application or a first element of the interface of the first application onto the first virtual display, to obtain the data of the interface of the first application;
    creating, by the first terminal, a second virtual display; and
    drawing, by the first terminal, the interface of the second application or a second element of the interface of the second application onto the second virtual display, to obtain the data of the interface of the second application.
  18. The method according to claim 16 or 17, wherein the method further comprises:
    sending, by the first terminal, audio data of the first application to the second terminal, for the second terminal to output the corresponding audio; and
    sending, by the first terminal, audio data of the second application to the third terminal, for the third terminal to output the corresponding audio.
  19. The method according to claim 18, wherein the method further comprises:
    creating, by the first terminal, a first audio-recording (AudioRecord) object, and recording based on the first AudioRecord object to obtain the audio data of the first application; and
    creating, by the first terminal, a second AudioRecord object, and recording based on the second AudioRecord object to obtain the audio data of the second application.
  20. The method according to any one of claims 16 to 19, wherein the second terminal and the third terminal are the same terminal.
  21. A screen projection method, applied to a second terminal, wherein the method comprises:
    receiving, by the second terminal, data of an interface of a first application from a first terminal;
    displaying, by the second terminal, a first interface, wherein content of the first interface is a mirror of content of the interface of the first application, or the content of the first interface is the same as part of the content of the interface of the first application;
    receiving, by the second terminal, data of an interface of a second application from the first terminal; and
    displaying, by the second terminal, a third interface, wherein the third interface comprises the content of the first interface and content of a second interface, and the content of the second interface is a mirror of the content of the interface of the second application, or the content of the second interface is the same as part of the content of the interface of the second application.
  22. A screen projection apparatus, comprising: a processor; and a memory configured to store instructions executable by the processor;
    wherein the processor is configured to execute the instructions to cause the screen projection apparatus to implement the method according to any one of claims 1 to 10, or the method according to any one of claims 11 to 15, or the method according to any one of claims 16 to 20, or the method according to claim 21.
  23. A computer-readable storage medium having computer program instructions stored thereon, wherein, when the computer program instructions are executed by an electronic device, the electronic device is caused to implement the method according to any one of claims 1 to 10, or the method according to any one of claims 11 to 15, or the method according to any one of claims 16 to 20, or the method according to claim 21.
  24. A screen projection system, comprising a first terminal and a plurality of second terminals;
    wherein each of the plurality of second terminals is configured to: display a second interface; and, after receiving a user operation, send data of the second interface to the first terminal; and
    the first terminal is configured to: receive data from each of the plurality of second terminals; and display, according to the data received from the plurality of second terminals, a plurality of first interfaces on the first terminal, the plurality of first interfaces being in one-to-one correspondence with the plurality of second terminals;
    wherein content of each first interface is a mirror of content of the second interface displayed by the corresponding second terminal, or the content of the first interface is the same as part of the content of the second interface displayed by the corresponding second terminal.
PCT/CN2021/135158 2020-12-08 2021-12-02 Screen projection method and device WO2022121775A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202011425441.8 2020-12-08
CN202011425441 2020-12-08
CN202110182037.0 2021-02-09
CN202110182037.0A CN114610253A (zh) 2020-12-08 2021-02-09 Screen projection method and device

Publications (1)

Publication Number Publication Date
WO2022121775A1 true WO2022121775A1 (zh) 2022-06-16

Family

ID=81857309

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/135158 WO2022121775A1 (zh) 2020-12-08 2021-12-02 一种投屏方法及设备

Country Status (2)

Country Link
CN (1) CN114610253A (zh)
WO (1) WO2022121775A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134341A (zh) * 2022-06-27 2022-09-30 Lenovo (Beijing) Co., Ltd. Display method and apparatus
CN116679895A (zh) * 2022-10-26 2023-09-01 Honor Device Co., Ltd. Collaborative service scheduling method, electronic device, and collaboration system

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115052186B (zh) * 2022-07-12 2023-09-15 Beijing Zitiao Network Technology Co., Ltd. Screen projection method and related device
CN117675993A (zh) * 2022-08-29 2024-03-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Cross-device continuation method and apparatus, storage medium, and terminal device
CN117896447A (zh) * 2022-10-08 2024-04-16 Guangzhou Shizhen Information Technology Co., Ltd. Data transmission method, electronic device, screen transmitter, and storage medium
CN117156190A (zh) * 2023-04-21 2023-12-01 Honor Device Co., Ltd. Screen projection management method and apparatus
CN116434791B (zh) * 2023-06-12 2023-08-11 Shenzhen Fudeyuan Digital Technology Co., Ltd. Configuration method and system for an audio player

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6493008B1 (en) * 1999-02-19 2002-12-10 Canon Kabushiki Kaisha Multi-screen display system and method
CN102740155A (zh) * 2012-06-15 2012-10-17 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Image display method and electronic device
JP2014044738A (ja) * 2013-11-05 2014-03-13 Seiko Epson Corp Terminal device for assigning images to split screens displayed by an image display device, control method for the terminal device, and computer program
CN105516754A (zh) * 2015-12-07 2016-04-20 Xiaomi Inc. Picture display control method, apparatus, and terminal
CN109275130A (zh) * 2018-09-13 2019-01-25 Ruijie Networks Co., Ltd. Screen projection method, apparatus, and storage medium
CN109508162A (zh) * 2018-10-12 2019-03-22 Fujian Star-net eVideo Information System Co., Ltd. Projection display method, system, and storage medium
CN110191350A (zh) * 2019-05-28 2019-08-30 Shanghai Bilibili Technology Co., Ltd. Multi-terminal screen projection method, computer device, and storage medium
CN110515576A (zh) * 2019-07-08 2019-11-29 Huawei Technologies Co., Ltd. Display control method and apparatus


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134341A (zh) * 2022-06-27 2022-09-30 Lenovo (Beijing) Co., Ltd. Display method and apparatus
CN116679895A (zh) * 2022-10-26 2023-09-01 Honor Device Co., Ltd. Collaborative service scheduling method, electronic device, and collaboration system
CN116679895B (zh) * 2022-10-26 2024-06-07 Honor Device Co., Ltd. Collaborative service scheduling method, electronic device, and collaboration system

Also Published As

Publication number Publication date
CN114610253A (zh) 2022-06-10

Similar Documents

Publication Publication Date Title
WO2022121775A1 (zh) Screen projection method and device
WO2020238871A1 (zh) Screen projection method and system, and related apparatus
CN111316598B (zh) Multi-screen interaction method and device
CN110109636B (zh) Screen projection method, electronic device, and system
WO2021103846A1 (zh) Screen-projection audio and video playback method and electronic device
WO2022100237A1 (zh) Projection display method and related product
CN112394895B (zh) Cross-device picture display method and apparatus, and electronic device
WO2021078284A1 (zh) Content continuation method and electronic device
CN112398855B (zh) Cross-device application content transfer method and apparatus, and electronic device
JP7369281B2 (ja) Device capability scheduling method and electronic device
WO2021023055A1 (zh) Video call method
CN112527174B (zh) Information processing method and electronic device
WO2022105445A1 (zh) Browser-based application screen projection method and related apparatus
WO2021190466A1 (zh) Method for resuming playback of multimedia content between devices
WO2022048474A1 (zh) Method for sharing a camera among multiple applications, and electronic device
WO2022017205A1 (zh) Method for displaying multiple windows, and electronic device
WO2022143077A1 (zh) Photographing method and system, and electronic device
WO2022135527A1 (zh) Video recording method and electronic device
CN112527222A (zh) Information processing method and electronic device
US20230403458A1 (en) Camera Invocation Method and System, and Electronic Device
WO2022042769A2 (zh) Multi-screen interaction system, method, apparatus, and medium
CN114040242A (zh) Screen projection method and electronic device
WO2023030099A1 (zh) Cross-device interaction method and apparatus, screen projection system, and terminal
JP7181990B2 (ja) Data transmission method and electronic device
WO2023005900A1 (zh) Screen projection method, electronic device, and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21902482

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21902482

Country of ref document: EP

Kind code of ref document: A1