WO2022100610A1 - Screen projection method, apparatus, electronic device, and computer-readable storage medium - Google Patents

Screen projection method, apparatus, electronic device, and computer-readable storage medium

Info

Publication number
WO2022100610A1
Authority
WO
WIPO (PCT)
Prior art keywords
screen projection
content
screen
image
module
Application number
PCT/CN2021/129765
Other languages
English (en)
French (fr)
Inventor
张继平 (Zhang Jiping)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2022100610A1

Classifications

    • G06F3/14: Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F3/1454: Digital output to display device involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G06F3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F3/04845: Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour

Definitions

  • the present application belongs to the technical field of terminals, and in particular, relates to a screen projection method, apparatus, electronic device, and computer-readable storage medium.
  • Screen projection is a technology that transfers the screen image of one electronic device to the screen of another electronic device in real time for display.
  • in existing solutions, the screen projection method is inflexible, and the projected screen may contain content that the user does not want to project.
  • the embodiments of the present application provide a screen projection method, device, electronic device, and computer-readable storage medium, which can solve the problem that the current screen projection solution is inflexible and may project content that the user does not want to share.
  • an embodiment of the present application provides a screen projection method, including:
  • the first device determines the object selected by the content selection operation in the first display interface as the screencast content, and the first display interface is the interface displayed on the current screen of the first device;
  • the first device processes the screen projection content to obtain a screen projection image
  • the first device sends the screen projection image to the second device, where the screen projection image is used for display on the screen of the second device.
  • the content selection operation is an operation performed by the user on the first device.
  • the user can select the content to be projected from the interface displayed on the current screen of the first device (ie, the first display interface) through the content selection operation.
  • the first device may determine the object selected by the content selection operation in the first display interface as screen-casting content.
  • the first device may process the screen projection content to obtain a screen projection image.
  • the first device may encode the screen projection image according to the preset screen projection protocol, and transmit the screen projection image to the second device in the form of encoded data.
  • the second device may decode the encoded data to obtain a screen projection image, and display the screen projection image on the screen of the second device to complete the screen projection operation.
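  • taken together, the flow above (select content, process it into a projection image, encode per the preset protocol, send) can be sketched in a few lines; every name in this sketch (ScreenCaster, CastContent, CastConnection, the render/encode callbacks) is an illustrative stand-in, since the patent does not name concrete APIs.

```kotlin
// Illustrative sketch of the claimed source-side flow; every name here is a
// stand-in, not taken from the patent.
class CastContent(val text: String?, val imageBytes: ByteArray?)

interface CastConnection {
    fun send(encodedFrame: ByteArray) // transport per the preset projection protocol
}

class ScreenCaster(
    private val connection: CastConnection,
    private val render: (CastContent) -> ByteArray, // content -> projection image (raw pixels)
    private val encode: (ByteArray) -> ByteArray    // image -> protocol-encoded data
) {
    // Step 1: the object selected in the first display interface is the cast content.
    fun onContentSelected(selected: CastContent) {
        val image = render(selected)   // Step 2: process the content into a projection image
        connection.send(encode(image)) // Step 3: send the encoded image to the second device
    }
}
```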
  • the form of the content selection operation can be set according to actual needs.
  • the content selection operation may include any one or a combination of operations such as clicking, dragging, and long pressing.
  • the projected screen content is text and/or image.
  • the form of the above-mentioned screen projection content is set according to actual needs.
  • the first device may set the screen projection content as text, that is, the user may select text to perform screen projection.
  • the first device may set the screen projection content as an image, that is, the user may select an image to perform screen projection.
  • the first device may also set the screencast content as text and images.
  • the first device may also set the projected screen content as other types of objects.
  • the method further includes:
  • the first device detects whether a screen projection connection has been established
  • if the first device has not established a screen-casting connection, the first device performs a search operation and displays a first list, where the first list is used to display the searched electronic devices;
  • the first device determines a second device from the searched electronic devices in response to a selection operation on the first list
  • the first device establishes a screen projection connection with the second device.
  • the first device may detect whether a screencast connection has been established.
  • the first device may perform a search operation to find nearby electronic devices available for screen projection, and display the searched devices to the user in the form of a first list.
  • the user may perform a selection operation on the first list, and select the second device from the first list.
  • the first device determines the second device from the searched electronic devices according to the selection operation, and establishes a screen-casting connection with the second device.
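  • on Android, one public API whose discovery-then-select model matches this search / first-list / selection flow is androidx MediaRouter; the sketch below uses it as a plausible realization, though the patent does not prescribe any particular discovery mechanism.

```kotlin
// A possible realization of search -> first list -> selection using the
// androidx MediaRouter API; the patent itself does not mandate this mechanism.
import androidx.mediarouter.media.MediaControlIntent
import androidx.mediarouter.media.MediaRouteSelector
import androidx.mediarouter.media.MediaRouter

fun startDeviceSearch(router: MediaRouter, onDeviceFound: (MediaRouter.RouteInfo) -> Unit) {
    val selector = MediaRouteSelector.Builder()
        .addControlCategory(MediaControlIntent.CATEGORY_LIVE_VIDEO) // remote displays
        .build()
    router.addCallback(selector, object : MediaRouter.Callback() {
        // Each discovered route becomes one entry of the "first list" shown to the user.
        override fun onRouteAdded(r: MediaRouter, route: MediaRouter.RouteInfo) {
            onDeviceFound(route)
        }
    }, MediaRouter.CALLBACK_FLAG_REQUEST_DISCOVERY)
}

// Selecting the route the user tapped plays the role of establishing the
// screen-projection connection with the second device.
fun onUserSelectedDevice(router: MediaRouter, route: MediaRouter.RouteInfo) {
    router.selectRoute(route)
}
```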
  • the first device processes the screen projection content to obtain a screen projection image, including:
  • the first device synthesizes the target layer to obtain a screen projection image.
  • when generating the screen projection image, the first device can obtain the target layer where the screen projection content is located.
  • the content of the target layer can be regarded as content that is strongly associated with the projected content.
  • the first device may synthesize the target layer to obtain a screen projection image.
  • the projection image only contains projection content and content that is strongly related to the projection content, and there is no interference from other layers, which improves the accuracy of projection.
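  • a minimal sketch of this layer-synthesis step, assuming the target layers are available as same-sized ARGB bitmaps in bottom-to-top z-order (the patent leaves the layer representation to the platform's composition service):

```kotlin
// Synthesize only the target layers into the projection image; layers are
// assumed to be same-sized ARGB bitmaps ordered bottom-to-top.
import android.graphics.Bitmap
import android.graphics.Canvas

fun composeTargetLayers(targetLayers: List<Bitmap>, width: Int, height: Int): Bitmap {
    val projectionImage = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(projectionImage)
    // Drawing in list order reproduces the z-order; unrelated layers are simply absent.
    for (layer in targetLayers) canvas.drawBitmap(layer, 0f, 0f, null)
    return projectionImage
}
```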
  • the first device processes the screen projection content to obtain a screen projection image, including:
  • the first device performs an interception operation on the displayed image according to the location information, and obtains intercepted graphic data of a target area, where the target area is an area where the screen projection content is located;
  • the first device synthesizes the intercepted graphic data to obtain a screen projection image.
  • when generating the screen projection image, the first device may first acquire the location information of the screen projection content and the image of the interface currently displayed on the screen of the first device (i.e., the display image of the first display interface).
  • the first device may determine the target area (that is, the area where the projected content is located) according to the location information.
  • the first device may intercept the graphic data of the target area in the display image to obtain the intercepted graphic data.
  • the first device may synthesize the intercepted graphic data to obtain a screen projection image.
  • the above method can improve the accuracy of selecting the target area when performing screen projection, and avoid including content in the target area that the user does not want to project, thereby improving the accuracy of screen projection, as sketched below.
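  • assuming the display image is available as an android.graphics.Bitmap and the location information as a Rect, the interception step reduces to a crop:

```kotlin
// Crop the target area (where the cast content sits) out of the full display
// image; displayImage and target are assumed inputs.
import android.graphics.Bitmap
import android.graphics.Rect

fun interceptTargetArea(displayImage: Bitmap, target: Rect): Bitmap =
    Bitmap.createBitmap(displayImage, target.left, target.top, target.width(), target.height())
```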
  • the first device processes the screen projection content to obtain a screen projection image, including:
  • the first device performs an interception operation on the target layer according to the location information to obtain regional graphic data of the target area, where the target area is the area where the screen projection content is located;
  • the first device synthesizes the area graphic data to obtain a screen projection image.
  • the first device may first obtain the target layer where the screen projection content is located and the location information of the screen projection content.
  • the first device may perform an interception operation on the target layer according to the above-mentioned position information to obtain the regional graphic data of the target area.
  • the above-mentioned target layer may be one layer, or may be multiple layers.
  • the above-mentioned area graphic data may be one or more.
  • the above-mentioned synthesizing the area graphic data refers to synthesizing the area graphic data corresponding to one or more target layers into a projected screen image.
  • the projection image generated by the above method can ensure that only the selected projection content exists in the projection image, and the projection content will not be blocked by the contents of other irrelevant layers, thereby improving the projection accuracy.
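  • this variant combines the two previous sketches: each target layer is intercepted at the target area, and the resulting regional graphic data are composited in z-order (same bitmap assumptions as above):

```kotlin
// For each target layer, intercept the target area, then composite the
// regional graphic data into one projection image (z-order = list order).
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Rect

fun cropAndCompose(targetLayers: List<Bitmap>, target: Rect): Bitmap {
    val projectionImage = Bitmap.createBitmap(target.width(), target.height(), Bitmap.Config.ARGB_8888)
    val canvas = Canvas(projectionImage)
    for (layer in targetLayers) {
        val regionData = Bitmap.createBitmap(layer, target.left, target.top, target.width(), target.height())
        canvas.drawBitmap(regionData, 0f, 0f, null)
    }
    return projectionImage
}
```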
  • the first device processes the screen projection content to obtain a screen projection image, including:
  • the first device acquires the screen projection configuration information of the second device, and typesets the screen projection content according to the screen projection configuration information to obtain the first content;
  • the first device renders the graphics data corresponding to the first content to obtain a screen projection image.
  • when the first device generates the screen projection image, it may first obtain the screen projection configuration information of the second device, and then typeset the screen projection content according to that information to obtain the first content.
  • the first device may acquire graphics data corresponding to the first content, render and synthesize the graphics data corresponding to the first content, and obtain a screen projection image.
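  • the patent does not fix a typesetting algorithm; as one plausible reading, the sketch below scales a base font size by the ratio of the sink's screen width to the source's, so the same text occupies a comparable fraction of the second device's screen. All names here are illustrative.

```kotlin
// Illustrative typesetting step: scale the text size from the source screen to
// the sink screen using the screen-projection configuration information.
data class ScreenConfig(val widthPx: Int, val heightPx: Int) // from the config info
data class TypesetText(val text: String, val fontSizePx: Float)

fun typesetForSink(text: String, baseFontPx: Float, source: ScreenConfig, sink: ScreenConfig): TypesetText {
    // Content laid out for a wider sink screen gets a proportionally larger font.
    val scale = sink.widthPx.toFloat() / source.widthPx
    return TypesetText(text, fontSizePx = baseFontPx * scale)
}
```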
  • the first device acquires the screen projection configuration information of the second device, typesets the screen projection content according to the screen projection configuration information, and obtains the first content, including:
  • the first device acquires the screen projection configuration information of the second device, and typesets the screen projection content according to the screen projection configuration information to obtain the typesetting screen projection content;
  • the first device sets the typeset projected screen content as the first content.
  • the first device may only perform screen projection on the screen projection content determined this time.
  • the first device may typeset the projected screen content according to the projected screen configuration information, and obtain the typeset projected screen content.
  • the first device sets the typeset projected screen content as the first content, and clears, overwrites or replaces the previously generated first content.
  • the first device acquires the screen projection configuration information of the second device, typesets the screen projection content according to the screen projection configuration information, and obtains the first content, including:
  • the first device acquires the screen projection configuration information of the second device, and typesets the screen projection content according to the screen projection configuration information to obtain the typesetting screen projection content;
  • the first device combines the last generated first content and the typeset projected screen content to obtain new first content.
  • the first device may also combine the screen projection content determined this time with the first content generated last time for screen projection.
  • the first device may typeset the projected screen content according to the projected screen configuration information, and obtain the typeset projected screen content.
  • the first device combines the last generated first content and the typesetting screencast content to obtain new first content.
  • that is, the typeset screencast content may be appended at the end of the last generated first content to obtain the new first content, as sketched below.
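  • the two behaviours differ only in how the first-content buffer is updated: "new screencast" replaces it, "merge screencast" appends to it. A sketch with illustrative names:

```kotlin
// "New screencast" replaces the previous first content; "merge screencast"
// appends the newly typeset content at its end. All names are illustrative.
class FirstContent {
    private val items = mutableListOf<String>()

    fun replaceWith(typeset: List<String>) { // new-screencast path: drop previous content
        items.clear()
        items += typeset
    }

    fun mergeWith(typeset: List<String>) {   // merge path: continue at the end
        items += typeset
    }

    fun snapshot(): List<String> = items.toList()
}
```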
  • the screen projection configuration information includes a screen resolution of the second device and/or a screen size of the second device.
  • the above screen projection configuration information may include any one or a combination of information such as the screen resolution of the second device, the screen size, font, color, and font size of the second device.
  • an embodiment of the present application provides a screen projection device, including:
  • a content selection module configured to determine the object selected by the content selection operation in the first display interface as screen-casting content, where the first display interface is the interface displayed on the current screen of the first device;
  • an image generation module configured to process the projected screen content to obtain a projected screen image
  • An image sending module configured to send the screen projection image to the second device, where the screen projection image is used for display on the screen of the second device.
  • the screen projection content is text and/or image.
  • the apparatus further includes:
  • the connection detection module is used to detect whether a screen projection connection has been established
  • a device search module configured to perform a search operation if the first device has not established a screen-casting connection, and display a first list, where the first list is used to display the searched electronic devices;
  • a device selection module configured to determine a second device from the searched electronic devices in response to a selection operation on the first list
  • a connection establishment module configured to establish a screen projection connection with the second device.
  • the image generation module includes:
  • a target layer submodule used to obtain the target layer where the screencast content is located
  • the layer synthesis sub-module is used for synthesizing the target layer to obtain a screen projection image.
  • the image generation module includes:
  • a location acquisition submodule configured to acquire the display image of the first display interface and the location information of the projected screen content
  • a position interception sub-module configured to intercept the displayed image according to the position information, to obtain intercepted graphic data of a target area, where the target area is the area where the projected screen content is located;
  • the interception and synthesis sub-module is used to synthesize the intercepted graphic data to obtain a screen projection image.
  • the image generation module includes:
  • a target acquisition sub-module configured to acquire the target layer where the screencast content is located and the location information of the screencast content
  • an area interception sub-module configured to perform an interception operation on the target layer according to the position information to obtain the area graphic data of the target area, where the target area is the area where the screen projection content is located;
  • the image synthesis sub-module is used for synthesizing the graphic data of the area to obtain a screen projection image.
  • the image generation module includes:
  • a content typesetting sub-module configured to obtain the screen projection configuration information of the second device, and typeset the screen projection content according to the screen projection configuration information to obtain the first content;
  • the image rendering sub-module is used for rendering the graphics data corresponding to the first content to obtain a screen projection image.
  • the content typesetting submodule includes:
  • the first typesetting sub-module is used to obtain the screen projection configuration information of the second device, and typeset the screen projection content according to the screen projection configuration information, so as to obtain the typesetting screen projection content;
  • the content setting sub-module is used for setting the typesetting content of the projected screen as the first content.
  • the content typesetting submodule includes:
  • the second typesetting sub-module is configured to obtain the screen projection configuration information of the second device, and perform typesetting on the screen projection content according to the screen projection configuration information, so as to obtain the typesetting screen projection content;
  • the content merging sub-module is configured to combine the last generated first content and the typesetting screencast content to obtain new first content.
  • the screen projection configuration information includes a screen resolution of the second device and/or a screen size of the second device.
  • an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • the processor executes the computer program, the electronic device realizes the steps of the above method.
  • a computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, enables an electronic device to implement the steps of the above method.
  • in a fifth aspect, a chip system is provided. The chip system may be a single chip or a chip module composed of multiple chips; the chip system includes a memory and a processor, and the processor executes a computer program stored in the memory to implement the steps of the above method.
  • the first device determines the projection content in response to a user's content selection operation. After that, the first device processes the screen projection content to obtain a screen projection image, and sends the screen projection image to the second device.
  • when the first device performs screen projection through the above-mentioned screen projection method, it does not project the entire content of the first display interface to the second device, but performs targeted projection according to the selected screen projection content.
  • the screen projection method is flexible, avoiding content that users do not want to project, and has strong ease of use and practicability.
  • FIG. 1 is a schematic structural diagram of a screen projection system provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 19 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 20 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 21 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 22 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 23 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 24 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 25 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 26 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 27 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 28 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 29 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 30 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 31 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 32 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 35 is a schematic flowchart of a screen projection method provided by an embodiment of the present application.
  • FIG. 36 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 38 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 39 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 40 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 41 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 42 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 43 is a schematic diagram of another application scenario provided by an embodiment of the present application.
  • FIG. 45 is a schematic diagram of an electronic device provided by an embodiment of the present application.
  • the term "if" may be contextually interpreted as "when", "once", "in response to determining", or "in response to detecting".
  • the phrases "if it is determined" or "if the [described condition or event] is detected" may be interpreted, depending on the context, to mean "once it is determined", "in response to the determination", "once the [described condition or event] is detected", or "in response to detecting the [described condition or event]".
  • references in this specification to "one embodiment" or "some embodiments" and the like mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically emphasized otherwise.
  • the terms "including", "comprising", "having" and their variants mean "including but not limited to" unless specifically emphasized otherwise.
  • the screen projection method provided by the embodiments of the present application can be applied to mobile phones, tablet computers, wearable devices, in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), and personal digital assistants (PDA).
  • Screen projection refers to the technology of transferring the screen image of one electronic device to the screen of another electronic device in real time for display.
  • existing screen projection solutions synthesize and encode the complete screen image of the projecting end (the electronic device that initiates the projection, also called the source end), and then send the encoded image to the projected end (the electronic device that responds to the projection, also called the receiving end or sink end).
  • the Sink side decodes the encoded image and displays it on the screen in a mirrored manner.
  • the embodiments of the present application provide a screen projection method, device, electronic device, and computer-readable storage medium, which can make the screen projection area accurately cover the content that the user wants to project, solving the problem that existing screen projection solutions are inflexible and may project unwanted content; the method has strong ease of use and practicability.
  • the screen projection system is a system to which the screen projection method provided by the embodiment of the present application is applicable.
  • the screen projection system includes at least one first device 101 (only one is shown in FIG. 1) and at least one second device 102 (only one is shown in FIG. 1).
  • the first device 101 is an electronic device that initiates screen projection
  • the second device 102 is an electronic device that responds to screen projection.
  • Both the above-mentioned first device 101 and the above-mentioned second device 102 are provided with a wireless communication module.
  • the first device 101 can establish a screen projection connection 103 with the wireless communication module of the second device 102 through the wireless communication module of the device and a preset screen projection protocol.
  • the preset screen projection protocol can be any one of the AirPlay protocol, Miracast protocol, WiDi protocol, Digital Living Network Alliance (DLNA) protocol, and other screen projection protocols.
  • the preset screencasting protocol may also be a screencasting protocol customized by the manufacturer.
  • the embodiment of the present application does not limit the specific type of the screen projection protocol.
  • the first device 101 may transmit the content that the user wants to share to the second device 102 in response to the user's operation.
  • after receiving the content transmitted by the first device 101, the second device 102 displays the above-mentioned content on its screen to complete the screen projection operation.
  • the user may perform a content selection operation.
  • the first device may determine the screen projection content in response to the user's content selection operation, and perform the screen projection operation.
  • the form of the above content selection operation can be set according to the actual scene.
  • the content selection operation may include any one or a combination of gesture operations such as long-pressing the screen, tapping the screen, and sliding the screen.
  • the content selection operation may include any one or a combination of operations such as clicking on the accessory device and dragging the accessory device.
  • the above-mentioned screen projection content may include any one or a combination of objects such as text, pictures, and web page controls.
  • the first device is a mobile phone.
  • if the user wants to share the text in the current display interface of the mobile phone, the user can operate the screen of the mobile phone and long-press the text content.
  • the mobile phone can display a content selection box and operation options in response to the user's long-pressing operation.
  • the content selection box is the diagonally filled area in Figure 2.
  • the operation options can include options such as "Copy", "Share", "Select All", and "Screencast".
  • when the user drags the content selection box, the mobile phone can expand the coverage area of the content selection box to both sides in response to the drag operation, and select the corresponding text.
  • the mobile phone may, in response to the user's click operation on the "Screencast" option in the operation options, determine the text "test text test" currently framed in the content selection box as the screen projection content.
  • the first device is a desktop computer
  • the desktop computer is equipped with a mouse.
  • the desktop computer can display the content selection box in response to the user's click and drag operations on the mouse, adjust the coverage area of the content selection box (that is, the area filled with slashes in Figures 7 and 8), and frame the corresponding text.
  • the desktop computer displays operation options in response to the user's click operation.
  • the operation options can include options such as "Copy", "Share", "Select All", and "Screencast".
  • the desktop computer determines the text "test text test" currently framed in the content selection box as the screen projection content.
  • the first device is a mobile phone.
  • if the user wants to share an image in the current display interface of the mobile phone, the user can long-press the image to be shared.
  • the mobile phone can display operation options in response to the user's long-press operation.
  • the operation options may include options such as "download image", "share image", "project image", and so on.
  • the user can click the "project image" option in the operation options.
  • the mobile phone can determine the above-mentioned selected image as the screen-casting content in response to the user's click operation on the "project image" option in the operation options.
  • the first device may detect whether the screen-casting connection has been established with the second device.
  • the first device may detect that the device has established a screen-casting connection with the second device. At this time, the first device can skip the operation of establishing a screen-casting connection with the second device.
  • the first device may detect that the device has not established a screen-casting connection with the second device. At this time, the first device may perform a search operation, find nearby electronic devices available for screen projection, and display a device list of the searched electronic devices (i.e., the above-mentioned first list).
  • through the device list, the user can intuitively view the nearby electronic devices available for screen projection. Then, the user may perform a selection operation on the above-mentioned device list, and select a second device from the electronic devices displayed in it.
  • the first device may determine the second device in response to the above-mentioned selection operation, and send a screen projection request to the second device.
  • the second device may approve the screen projection request or reject the screen projection request according to preset response rules; or, the second device may also respond to the user's operation on the second device, Agree to the screencasting request or reject the screencasting request.
  • the first device establishes a screen projection connection with the second device, and the first device creates a virtual display window.
  • the virtual display window is used to manage the projected image.
  • if the second device rejects the screen projection request, the first device stops the screen projection operation, or the first device may re-display the above-mentioned list, so that the user can re-select the second device.
  • the preset response rules can be set according to actual scenarios. For example, in some embodiments, a preset response rule may be set as: reject screencasting requests sent by electronic devices in the blacklist, and accept by default screencasting requests sent by electronic devices outside the blacklist (a one-line sketch of this rule follows). Alternatively, in other embodiments, the preset response rules may also be set to other conditions. The specific conditions of the preset response rules are not limited in this embodiment of the present application.
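  • the blacklist rule described above reduces to a one-line check; the sender-identifier type and the blacklist source in this sketch are assumptions:

```kotlin
// Reject requests from blacklisted senders; accept all others by default.
fun shouldAcceptCastRequest(senderId: String, blacklist: Set<String>): Boolean =
    senderId !in blacklist
```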
  • the first device is a mobile phone.
  • the mobile phone displays operation options in response to the user's long-pressing operation on an image in the current display interface.
  • the operation options may include options such as "download image", "share image", "project image", and so on.
  • the user wants to project the selected image to another electronic device and clicks the "project image" option. Then, the mobile phone determines the above-mentioned selected image as the screen-casting content in response to the user's click operation on the "project image" option in the operation options.
  • the mobile phone detects whether the device has established a screen-casting connection with the second device.
  • when the mobile phone detects that the device has not established a screen-casting connection with the second device, the mobile phone performs a search operation to find nearby electronic devices available for screen projection, and displays a search interface.
  • after completing the search operation, the mobile phone generates a device list, and displays the device list on the current display interface of the mobile phone screen.
  • the device list includes the identifiers of the searched electronic devices.
  • the device list includes three searched electronic devices: "smart TV", "laptop computer", and "desktop computer". After viewing the above list, the user clicks the "smart TV" option, wanting to cast the above screencast content to the smart TV.
  • the mobile phone sends a screen projection request to the smart TV in response to the user's click operation on the “smart TV” option.
  • after receiving the screencasting request, the smart TV detects that the mobile phone sending the request is not in the blacklist, and then agrees to the request by default and establishes a screencasting connection with the mobile phone.
  • the smart TV can also display a prompt box on the screen of the smart TV, and ask the user whether to agree to screen projection in the prompt box. If the user clicks the "Yes" option in the prompt box, the smart TV agrees to the above screencasting request and establishes a screencasting connection with the mobile phone. If the user clicks the "No" option in the prompt box, the smart TV rejects the above screen projection request and returns an error message to the mobile phone.
  • the first device may generate a screen-cast image according to the screen-cast content, and transmit the screen-cast image to the second device for display through the above-mentioned screen-cast connection.
  • the first device may acquire the selected screencast content through a preset control. For example, when the selected screencast content is text, the first device can obtain the selected text through the text control; when the selected screencast content is an image, the first device can obtain the selected image through the image control.
  • the first device may typeset the above-mentioned screencasting content according to the screencasting configuration information of the second device to obtain the first content.
  • the content of the above screen projection configuration information can be set according to actual needs.
  • the above-mentioned screen projection configuration information may include one or more of information such as screen resolution, screen size, font, font size, and color of the second device.
  • the acquisition timing of the above-mentioned screen projection configuration information can also be set according to actual needs.
  • in some embodiments, the first device may obtain the screen-casting configuration information of the second device immediately after establishing a screen-casting connection with the second device; or, in other embodiments, the first device may obtain it when the screencast content is typeset; or, in other embodiments, the first device may obtain the screen-casting configuration information of the second device at other times.
  • This embodiment of the present application does not limit the timing at which the first device acquires the screen projection configuration information of the second device.
  • the first device may render and synthesize graphics data corresponding to the first content to obtain a screen projection image.
  • the first device may encode the screen projection image according to the preset screen projection protocol to obtain encoded data, and transmit the encoded data to the second device through the above screen projection connection.
  • after the second device receives the encoded data, it decodes the encoded data according to the preset screen projection protocol to obtain a projected screen image, and displays the projected screen image on the screen of the second device to complete the screen projection operation.
  • the first device is a mobile phone
  • the second device is a smart TV
  • the operating system of the mobile phone is an Android operating system.
  • the slash-filled part is the content selection box
  • the text and images in the content selection box are the screencast content selected by the user.
  • the mobile phone can create a virtual display window (display) and an activity management (Activity) corresponding to the virtual display window through the screen projection service.
  • the screen projection service passes the selected text to the Activity through the text control (TextView), and passes the selected image to the Activity through the image control (ImageView).
  • the Activity can obtain the screen projection configuration information of the second device through a Local Area Network (LAN) service.
  • the screen projection configuration information may include information such as the screen size of the second device, the display resolution of the second device, and the like.
  • the Activity adjusts the window size of the virtual display window according to the screen projection configuration information of the second device, and typesets the acquired screen projection content according to that information, adjusting the positions, sizes, etc. of text and images, so as to obtain the typeset first content.
  • the Activity can transmit the graphics data of the first content to the graphics processing unit (GPU) for rendering, and transmit the rendered graphics data to the virtual display window.
  • the virtual display window transmits the rendered graphics data to the window synthesis service (SurfaceFlinger), and the window synthesis service synthesizes the rendered graphics data to obtain a screen projection image.
  • the screen projection service can encode the screen projection image according to the preset screen projection protocol, obtain encoded data, and transmit the encoded data to the smart TV through the above screen projection connection.
  • the smart TV decodes the encoded data according to the preset screen projection protocol, obtains a screen projection image, and displays the screen projection image on the screen of the smart TV to complete this screen projection operation.
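  • public Android APIs can approximate the pipeline just described: a MediaCodec encoder exposes an input Surface, a VirtualDisplay sized from the sink's configuration renders into that surface (with SurfaceFlinger performing the composition), and a Presentation plays the role of the Activity that lays out only the selected content. The sketch below is a hedged approximation, not the patent's internal implementation; buffer draining, the LAN service, and the transport are omitted, and the density value is an assumption.

```kotlin
// Approximate pipeline: Presentation -> VirtualDisplay -> encoder input surface.
import android.app.Presentation
import android.content.Context
import android.hardware.display.DisplayManager
import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat
import android.widget.TextView

fun startCast(context: Context, sinkWidth: Int, sinkHeight: Int, selectedText: String) {
    // 1. Encoder whose input surface will receive the composited frames.
    val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, sinkWidth, sinkHeight).apply {
        setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
        setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000)
        setInteger(MediaFormat.KEY_FRAME_RATE, 30)
        setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
    }
    val encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    val inputSurface = encoder.createInputSurface()
    encoder.start()

    // 2. Virtual display sized to the sink; everything drawn on it is
    //    composited (by SurfaceFlinger) into the encoder's input surface.
    val dm = context.getSystemService(Context.DISPLAY_SERVICE) as DisplayManager
    val virtualDisplay = dm.createVirtualDisplay(
        "castDisplay", sinkWidth, sinkHeight, /* densityDpi (assumed) = */ 320,
        inputSurface, DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION
    )

    // 3. The Presentation plays the role of the Activity in the walkthrough:
    //    only the selected content is laid out on the virtual display.
    val presentation = Presentation(context, virtualDisplay.display)
    presentation.setContentView(TextView(context).apply { text = selectedText })
    presentation.show()

    // Encoded output buffers would then be drained from `encoder` and sent
    // over the screen-projection connection (omitted in this sketch).
}
```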
  • the first device may acquire the target layer where the screencast content is located.
  • for example, if the interface currently displayed on the first device is synthesized from layer a corresponding to application A, layer b corresponding to application B, and layer c corresponding to application C, and the selected screencast content is the content of application B and application C, the first device can first obtain layer b and layer c.
  • the first device may perform a composite operation on the target layer to obtain a screen projection image.
  • the first device is a mobile phone
  • the second device is a smart TV
  • the operating system of the mobile phone is an Android operating system.
  • the interface currently displayed on the mobile phone is the content shown in FIG. 18 .
  • the interface currently displayed on the mobile phone is synthesized by layer a, layer b, and layer c.
  • the mobile phone can determine the screen projection content (that is, the area filled with oblique lines in FIG. 20) from the interface currently displayed on the mobile phone in response to the user's operation.
  • the mobile phone can create a virtual display window (display) and an activity management (Activity) corresponding to the virtual display window through the screen projection service.
  • the screencasting service can deliver the target layer (i.e., layer b) corresponding to the screencast content to the Activity.
  • Activity passes layer b to the virtual display window.
  • the virtual display window transmits the layer b to the window composition service (SurfaceFlinger), and the window composition service synthesizes the layer b to obtain the screen projection image.
  • the screen projection service can encode the screen projection image according to the preset screen projection protocol, obtain encoded data, and transmit the encoded data to the smart TV through the above screen projection connection.
  • the smart TV decodes the encoded data according to the preset screen projection protocol, obtains a screen projection image, and displays the screen projection image on the screen of the smart TV to complete this screen projection operation.
  • the screen projection image only includes the screen projection content and the content strongly related to the screen projection content, and there is no interference from irrelevant layers, which can prevent the screen projection content from being blocked, and improves the performance of the screen projection. Projection accuracy.
  • the first device may acquire the display image of the interface currently displayed by the first device and the location information of the projected screen content.
  • the first device may determine, according to the location information of the projected content, an area where the projected content is located, that is, the target area.
  • the first device may perform an interception operation on the display image to intercept the graphics data of the target area, that is, the intercepted graphics data.
  • the first device synthesizes the above-mentioned intercepted graphic data to obtain a screen projection image.
  • the first device is a mobile phone
  • the second device is a smart TV
  • the operating system of the mobile phone is the Android operating system
  • the interface currently displayed on the mobile phone is the content shown in Figure 18.
  • in response to the user's operation, the mobile phone can determine the screen projection content from the interface currently displayed on the mobile phone (i.e., the area filled with oblique lines in FIG. 20).
  • the mobile phone can create a virtual display window (display) and an activity management (Activity) corresponding to the virtual display window through the screen projection service.
  • the screen projection service can transmit the display image of the interface currently displayed on the mobile phone and the location information corresponding to the screen projection content to the Activity.
  • after acquiring the display image and the position information corresponding to the screen projection content, the Activity performs an interception operation on the display image according to the above-mentioned position information, and obtains the intercepted graphic data of the target area.
  • the Activity will pass the intercepted graphics data to the virtual display window.
  • the virtual display window transmits the intercepted graphics data to the window synthesis service (SurfaceFlinger), and the window synthesis service synthesizes the intercepted graphics data to obtain a screen projection image.
  • the screen projection service can encode the screen projection image according to the preset screen projection protocol, obtain encoded data, and transmit the encoded data to the smart TV through the above screen projection connection.
  • the smart TV decodes the encoded data according to the preset screen projection protocol, obtains a screen projection image, and displays the screen projection image on the screen of the smart TV to complete this screen projection operation.
  • the target area is automatically determined according to the position information of the screen projection content, and is not manually selected by the user. Therefore, the projected image only includes content in the area where the projected content is located, and does not include content outside the area where the projected content is located, so as to avoid projecting content that the user does not want to share to the second device.
  • the first device may first acquire the target layer where the selected screencast content is located.
  • following the example above, the first device can first obtain layer b.
  • the first device may acquire the location information of the selected screen projection content, and perform an interception operation on the target layer according to the location information of the screen projection content to obtain regional graphic data of the target area.
  • the target area is the area where the projected content is located.
  • the first device may synthesize the regional graphics data to obtain a screen projection image.
  • the first device may encode the screen projection image according to the preset screen projection protocol to obtain encoded data, and transmit the encoded data to the second device through the above screen projection connection.
  • after the second device receives the encoded data, it decodes the encoded data according to the preset screen projection protocol to obtain a projected screen image, and displays the projected screen image on the screen of the second device to complete the screen projection operation.
  • the first device is a mobile phone
  • the second device is a smart TV
  • the operating system of the mobile phone is an Android operating system.
  • the interface currently displayed on the mobile phone is the content shown in FIG. 18 .
  • the interface currently displayed on the mobile phone is synthesized by layer a, layer b, and layer c.
  • the mobile phone can determine the screen projection content (that is, the area filled with oblique lines in FIG. 20 ) from the interface currently displayed on the mobile phone in response to the user's operation.
  • the mobile phone can create a virtual display window (display) and an activity management (Activity) corresponding to the virtual display window through the screen projection service.
  • the screencasting service can transmit the target layer (i.e., layer b) corresponding to the screencast content and the location information corresponding to the screencast content to the Activity.
  • after acquiring layer b and the location information corresponding to the projected screen content, the Activity performs an interception operation on layer b according to the above location information, and obtains the regional graphics data of the target area.
  • Activity passes regional graphics data to the virtual display window.
  • the virtual display window transmits the regional graphics data to the window synthesis service (SurfaceFlinger), and the window synthesis service synthesizes the regional graphics data to obtain the projected screen image.
  • the screen projection service can encode the screen projection image according to the preset screen projection protocol, obtain encoded data, and transmit the encoded data to the smart TV through the above screen projection connection.
  • the smart TV decodes the encoded data according to the preset screen projection protocol, obtains a screen projection image, and displays the screen projection image on the screen of the smart TV to complete this screen projection operation.
  • after the user completes the screen projection operation, if a new screen projection operation needs to be performed, the user can perform the content selection operation again, and the first device determines the new screen projection content according to the user's content selection operation.
  • the first device may refer to the content described in Section 3, perform image capture and synthesis operations according to the new screen projection content, and generate a new screen projection image.
  • the first device may also generate new first content according to the new screen projection content, and determine a new screen projection image according to the new first content.
  • in some embodiments, when the first device generates the new first content, it can replace, overwrite, or clear the first content generated by the previous screencasting operation, and process only the screencasting content selected this time.
  • the first device may typeset the screencast content selected this time according to the screencasting configuration information of the second device, and obtain the first content corresponding to the screencasting operation this time.
  • the first device renders and synthesizes the new first content to obtain a new screen projection image, and transmits the new screen projection image to the second device for display through the above-mentioned screen projection connection.
  • the first device is a mobile phone.
  • the mobile phone can display the operation options.
  • the operation options can include options such as "Copy", "Share", "Select All", "New Screencast", and "Merge Screencast".
  • the mobile phone may, in response to the user's operation, clear the first content generated by the previous screencasting operation.
  • the mobile phone obtains the selected text through the text control, obtains the selected image through the image control, and typesets the currently selected text and image according to the screen projection configuration information of the second device to obtain this The first content corresponding to the secondary projection screen.
  • the mobile phone can render the graphics data of the new first content, synthesize the rendered graphics data into a new screen projection image, and send the new screen projection image to the second device at the opposite end for display.
  • in other embodiments, when the first device generates the new first content, it may also choose not to clear the first content of the previous screen projection.
  • the first device may typeset the first content of the previous screen projection operation and the screen projection content selected this time according to the screen projection configuration information of the second device to obtain new first content.
  • alternatively, the first device can typeset only the screencasting content selected this time according to the screencasting configuration information of the second device, and then merge it with the previous first content.
  • after obtaining the new first content, the first device renders and synthesizes it to obtain a new screen projection image, and transmits the new screen projection image to the second device for display through the above screen projection connection.
  • the mobile phone can display the operation options.
  • the operation options can include options such as "Copy", "Share", "Select All", "New Screencast", and "Merge Screencast".
  • the mobile phone may not clear the first content of the previous screen projection in response to the user's operation.
  • the mobile phone obtains the selected text through the text control, and typesets the selected text and image according to the projection configuration information of the second device to obtain the typeset screencast content.
  • the mobile phone can combine the typeset projected screen content with the previous first content, and place the typeset projected screen content at the end of the last first content to obtain new first content.
  • the mobile phone After obtaining the new first content, the mobile phone renders the graphics data of the new first content, and synthesizes the rendered graphics data into a new screen projection image.
  • the user may perform the end screencasting operation on the first device.
  • the first device detects the end of the screen projection operation, the first device disconnects the screen projection connection with the second device, and ends the screen projection.
  • the second device may cancel displaying the screencasting image, or the second device may continue to display the screencasting image.
  • the first device is a mobile phone and the second device is a smart TV.
  • a screen-casting connection is established between the mobile phone and the smart TV.
  • the operation bar of the mobile phone may include operation options such as "Wireless Network”, “Bluetooth”, “Mobile Data”, “Mute”, “Wireless Screen Casting”.
  • the user can click the option of "Wireless Screen Casting".
  • the screencasting function is turned off, the screencasting service is stopped, and the screencasting connection with the second device is disconnected.
  • the smart TV When the smart TV detects that the projection connection is disconnected, the smart TV can continue to display the last received projection image, or the smart TV can cancel the display of the last received projection image and display the standby interface.
  • the user may also perform the end screen projection operation on the second device.
  • the second device detects that the screen projection operation is ended, the second device disconnects the screen projection connection with the first device. After disconnecting the screencasting connection, the second device may cancel displaying the screencasting image, or the second device may continue to display the screencasting image.
  • the first device is a mobile phone and the second device is a smart TV.
  • a screen-casting connection is established between the mobile phone and the smart TV, and the smart TV is equipped with a remote control.
  • the user can press the button on the remote control of the smart TV to end the screencasting, and the remote control of the smart TV sends a signal to end the screencasting to the smart TV.
  • the smart TV When the smart TV receives the signal to end the screencasting, it disconnects the screencasting connection with the mobile phone and displays the standby interface.
  • the mobile phone When the mobile phone detects that the screen projection connection is disconnected, the mobile phone can perform a preset prompt operation and display a pop-up window. This pop-up window is used to inform the user that the screencasting connection has been disconnected. Alternatively, the mobile phone may also perform a search operation again, and after the search operation is completed, display a device list, so that the user can select a new second device according to the device list.
  • the first device and/or the second device may also be set with a determination rule for ending the screen projection operation.
  • the first device or the second device can automatically end the screen projection operation and disconnect the screen projection connection.
  • the above judgment rules can be set according to actual needs.
  • the above-mentioned decision rule may be related to the length of idle time.
  • the idle time is the time interval between the current time and the triggering time, and the triggering time is the last time the first device sent the screencast image or the last time the second device received the screencast image.
  • the first device or the second device detects that the idle duration is greater than or equal to the preset duration threshold, the first device or the second device can automatically end the screen projection operation and disconnect the screen projection connection.
  • the above determination rule may be related to the number of screen projections.
  • the number of screencasts is the number of screencast images sent by the first device or the number of screencast images received by the second device. When the first device or the second device detects that the number of screen projections is greater than or equal to the preset number of times threshold, the first device or the second device can automatically end the screen projection operation and disconnect the screen projection connection.
  • the first device is a mobile phone.
  • the mobile phone does not establish a screen-casting connection with other electronic devices.
  • the user wants to project part of the text in the mobile phone to the smart TV. At this point, the user can long press the screen of the mobile phone.
  • the mobile phone In response to the user's long-pressing operation, the mobile phone displays a content selection box and operation options.
  • the operation options include four options: “Copy”, “Share”, “Select All”, and “Project Screen”.
  • the user drags the adjustment cursors on both sides of the content selection box to adjust the coverage area of the content selection box so that the content selection box covers the four words "test text".
  • the mobile phone determines the content selected by the content selection operation in the current display interface (ie, the first display interface) of the mobile phone as the screen-casting content.
  • the mobile phone detects whether a screen-casting connection has been established with other electronic devices.
  • the mobile phone detects that no screen-casting connection is currently established with other electronic devices, and the mobile phone performs a search operation and displays the first list.
  • the first list is the device list of the electronic devices that can be screencast and searched by the mobile phone.
  • the first list includes three searched electronic devices, namely "smart TV”, “laptop computer” and “tablet computer”. After viewing the device list, the user may perform a selection operation, and select "smart TV” from the first list as the second device.
  • the mobile phone can determine the second device from the scanned electronic devices in response to the user's selection operation on the first list, and set the smart TV as the second device.
  • the mobile phone sends a screen projection request to the smart TV.
  • the smart TV detects that the mobile phone is a trusted device, agrees to the screencasting request, and establishes a screencasting connection between the mobile phone and the smart TV.
  • the mobile phone obtains the screen-casting configuration information of the smart TV through the screen-casting connection, and typesets the above-mentioned screen-casting content according to the screen-casting configuration information to obtain the first content.
  • the mobile phone After obtaining the first content, the mobile phone renders the graphics data corresponding to the first content through a graphics processor, and synthesizes the rendered graphics data through a window synthesizer to obtain a screen projection image.
  • the mobile phone encodes the projected screen image according to the preset screen projection protocol, obtains encoded data, and sends the encoded data to the smart TV.
  • the smart TV decodes the encoded data according to the preset screen projection protocol, obtains a screen projection image, and displays the screen projection image on the screen of the device.
  • the user is in the process of browsing the new display interface and wants to share the new content to the smart TV.
  • the user can re-execute the content selection operation to determine the new projection content (ie, the image selected in the area filled with slanted lines in FIG. 39 ).
  • the handset may display operation options in response to the user's content selection operation.
  • the operation options provided by the mobile phone include five options: “Copy”, “Share”, “Select All”, “New Screencast”, and “Merge Screencast”.
  • the phone When the user clicks the "Merge screencast” option, the phone will use the text and image selected in the current content selection box as the new screencast content. Moreover, the mobile phone detects that a screen-casting connection has been established with the smart TV, so the mobile phone skips the process of establishing a screen-casting connection.
  • the mobile phone typesets the new projection content according to the projection configuration information of the smart TV, merges the typesetting projection content with the first content of the previous projection operation, and combines the typesetting projection content with the first content of the previous projection operation.
  • the content is continued at the end of the first content of the last screen projection operation, and a new first content is obtained.
  • the mobile phone renders the graphics data of the new first content through the graphics processor, and synthesizes the rendered graphics data through the window synthesizer to obtain a new screen projection image.
  • the mobile phone encodes the new screen projection image according to the preset screen projection protocol, obtains new encoded data, and sends the new encoded data to the smart TV.
  • the smart TV After receiving the new encoded data, the smart TV decodes the new encoded data according to the preset projection protocol to obtain a new projection image, replaces the projection image received last time with the new projection image, and converts the new projection image to the screen.
  • the projected image of the screen is displayed on the screen of this device.
  • the user can select the content to be projected by himself through the content selection operation, and the first device responds to the user's content selection operation to determine the projection content, and according to the projection content Generate a screencast image.
  • the screen projection image generated by the first device only includes the content selected by the content selection operation, and does not include other unselected content.
  • the user can perform screen projection in a targeted manner, so as to avoid content that the user does not want to project in the screen projection image. .
  • the screen projection method provided by this embodiment includes:
  • the first device determines the object selected by the content selection operation in the first display interface as the screencast content, and the first display interface is the interface displayed on the current screen of the first device;
  • the first device processes the screen projection content to obtain a screen projection image
  • the first device sends a screen projection image to the second device, and the screen projection image is used for display on the screen of the second device.
  • FIG. 45 is a schematic diagram of another electronic device provided by an embodiment of the present application.
  • the electronic device 4500 may include a processor 4510, an external memory interface 4520, an internal memory 4521, a universal serial bus (USB) interface 4530, a charge management module 4540, a power management module 4541, a battery 4542, an antenna 1, an antenna 2 , mobile communication module 4550, wireless communication module 4560, audio module 4570, speaker 4570A, receiver 4570B, microphone 4570C, headphone jack 4570D, sensor module 4580, key 4590, motor 4591, indicator 4592, camera 4593, display screen 4594, and Subscriber identification module (subscriber identification module, SIM) card interface 4595 and so on.
  • SIM Subscriber identification module
  • the sensor module 4580 may include a pressure sensor 4580A, a gyroscope sensor 4580B, an air pressure sensor 4580C, a magnetic sensor 4580D, an acceleration sensor 4580E, a distance sensor 4580F, a proximity light sensor 4580G, a fingerprint sensor 4580H, a temperature sensor 4580J, a touch sensor 4580K, and ambient light.
  • Sensor 4580L Bone Conduction Sensor 4580M, etc.
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the electronic device 4500 .
  • the electronic device 4500 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 4510 may include one or more processing units, for example, the processor 4510 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural-network processing unit (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • application processor application processor, AP
  • modem processor graphics processor
  • image signal processor image signal processor
  • ISP image signal processor
  • controller video codec
  • digital signal processor digital signal processor
  • baseband processor baseband processor
  • neural-network processing unit neural-network processing unit
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 4510 for storing instructions and data.
  • the memory in processor 4510 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 4510. If the processor 4510 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 4510 is reduced, thereby improving the efficiency of the system.
  • the processor 4510 may include one or more interfaces.
  • the interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous transceiver (universal asynchronous transmitter) receiver/transmitter, UART) interface, mobile industry processor interface (MIPI), general-purpose input/output (GPIO) interface, subscriber identity module (SIM) interface, and / or universal serial bus (universal serial bus, USB) interface, etc.
  • I2C integrated circuit
  • I2S integrated circuit built-in audio
  • PCM pulse code modulation
  • PCM pulse code modulation
  • UART universal asynchronous transceiver
  • MIPI mobile industry processor interface
  • GPIO general-purpose input/output
  • SIM subscriber identity module
  • USB universal serial bus
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 4510 may contain multiple sets of I2C buses.
  • the processor 4510 can be respectively coupled to the touch sensor 4580K, charger, flash, camera 4593, etc. through different I2C bus interfaces.
  • the processor 4510 can couple the touch sensor 4580K through the I2C interface, so that the processor 4510 communicates with the touch sensor 4580K through the I2C bus interface, so as to realize the touch function of the electronic device 4500.
  • the I2S interface can be used for audio communication.
  • the processor 4510 may contain multiple sets of I2S buses.
  • the processor 4510 can be coupled with the audio module 4570 through the I2S bus to implement communication between the processor 4510 and the audio module 4570.
  • the audio module 4570 can transmit audio signals to the wireless communication module 4560 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communications, sampling, quantizing and encoding analog signals.
  • the audio module 4570 and the wireless communication module 4560 may be coupled through a PCM bus interface.
  • the audio module 4570 can also transmit audio signals to the wireless communication module 4560 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 4510 with the wireless communication module 4560.
  • the processor 4510 communicates with the Bluetooth module in the wireless communication module 4560 through the UART interface to realize the Bluetooth function.
  • the audio module 4570 can transmit audio signals to the wireless communication module 4560 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 4510 with the display screen 4594, the camera 4593 and other peripheral devices.
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 4510 communicates with the camera 4593 through the CSI interface, so as to realize the photographing function of the electronic device 4500.
  • the processor 4510 communicates with the display screen 4594 through the DSI interface to implement the display function of the electronic device 4500.
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 4510 with the camera 4593, the display screen 4594, the wireless communication module 4560, the audio module 4570, the sensor module 4580, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 4530 is an interface that conforms to the USB standard specification, which can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 4530 can be used to connect a charger to charge the electronic device 4500, and can also be used to transmit data between the electronic device 4500 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 4500 .
  • the electronic device 4500 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 4540 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 4540 may receive charging input from the wired charger through the USB interface 4530 .
  • the charging management module 4540 may receive wireless charging input through the wireless charging coil of the electronic device 4500 . While the charging management module 4540 charges the battery 4542, it can also supply power to the electronic device through the power management module 4541.
  • the power management module 4541 is used to connect the battery 4542 , the charging management module 4540 and the processor 4510 .
  • the power management module 4541 receives input from the battery 4542 and/or the charge management module 4540, and supplies power to the processor 4510, the internal memory 4521, the display screen 4594, the camera 4593, and the wireless communication module 4560.
  • the power management module 4541 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 4541 may also be provided in the processor 4510 .
  • the power management module 4541 and the charging management module 4540 may also be provided in the same device.
  • the wireless communication function of the electronic device 4500 may be implemented by the antenna 1, the antenna 2, the mobile communication module 4550, the wireless communication module 4560, the modem processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 4500 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 4550 can provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 4500 .
  • the mobile communication module 4550 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 4550 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 4550 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into electromagnetic waves for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 4550 may be provided in the processor 4510.
  • at least part of the functional modules of the mobile communication module 4550 may be provided in the same device as at least part of the modules of the processor 4510 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 4570A, the receiver 4570B, etc.), or displays images or videos through the display screen 4594.
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 4510, and may be provided in the same device as the mobile communication module 4550 or other functional modules.
  • the wireless communication module 4560 can provide applications on the electronic device 4500 including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), bluetooth (BT), global navigation satellites Wireless communication solutions such as global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared technology (IR).
  • WLAN wireless local area networks
  • BT Bluetooth
  • GNSS global navigation satellite system
  • FM frequency modulation
  • NFC near field communication
  • IR infrared technology
  • the wireless communication module 4560 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 4560 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 4510 .
  • the wireless communication module 4560 can also receive the signal to be sent from the processor 4510 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 4500 is coupled with the mobile communication module 4550, and the antenna 2 is coupled with the wireless communication module 4560, so that the electronic device 4500 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code Division Multiple Access (WCDMA), Time Division Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (quasi -zenith satellite system, QZSS) and/or satellite based augmentation systems (SBAS).
  • GPS global positioning system
  • GLONASS global navigation satellite system
  • BDS Beidou navigation satellite system
  • QZSS quasi-zenith satellite system
  • SBAS satellite based augmentation systems
  • the electronic device 4500 implements a display function through a GPU, a display screen 4594, and an application processor.
  • the GPU is a microprocessor for image processing, and connects the display screen 4594 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 4510 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 4594 is used to display images, videos, etc.
  • Display screen 4594 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active matrix organic light emitting diode or an active matrix organic light emitting diode (active-matri45 organic light).
  • LED diode AMOLED
  • flexible light-emitting diode FLED
  • Miniled MicroLed, Micro-oLed
  • quantum dot light-emitting diode quantum dot light emitting diodes, QLED
  • the electronic device 4500 may include 1 or N display screens 4594, where N is a positive integer greater than 1.
  • the electronic device 4500 can realize the shooting function through the ISP, the camera 4593, the video codec, the GPU, the display screen 4594 and the application processor.
  • the ISP is used to process the data fed back by the camera 4593. For example, when taking a photo, the shutter is opened, the light is transmitted to the camera photosensitive element through the lens, the light signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise, brightness, and skin tone. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be located in the camera 4593.
  • Camera 4593 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element can be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • CMOS complementary metal-oxide-semiconductor
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 4500 may include 1 or N cameras 4593, where N is a positive integer greater than 1.
  • a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 4500 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point, and so on.
  • Video codecs are used to compress or decompress digital video.
  • Electronic device 4500 may support one or more video codecs. In this way, the electronic device 4500 can play or record videos in various encoding formats, such as: Moving Picture Experts Group (moving picture e45perts group, MPEG) 45, MPEG2, MPEG3, MPEG4, and so on.
  • Moving Picture Experts Group moving picture e45perts group, MPEG 45
  • MPEG2 Moving Picture Experts Group
  • MPEG3 MPEG4
  • the NPU is a neural-network (NN) computing processor.
  • NN neural-network
  • Applications such as intelligent cognition of the electronic device 4500 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 4520 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 4500.
  • the external memory card communicates with the processor 4510 through the external memory interface 4520 to realize the data storage function. For example to save files like music, video etc in external memory card.
  • Internal memory 4521 may be used to store computer executable program code, which includes instructions.
  • the internal memory 4521 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 4500 and the like.
  • the internal memory 4521 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the processor 4510 executes various functional applications and data processing of the electronic device 4500 by executing instructions stored in the internal memory 4521 and/or instructions stored in a memory provided in the processor.
  • the electronic device 4500 can implement audio functions through an audio module 4570, a speaker 4570A, a receiver 4570B, a microphone 4570C, an earphone interface 4570D, and an application processor. Such as music playback, recording, etc.
  • the audio module 4570 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 4570 may also be used to encode and decode audio signals. In some embodiments, the audio module 4570 may be provided in the processor 4510 , or some functional modules of the audio module 4570 may be provided in the processor 4510 .
  • Speaker 4570A also known as "horn" is used to convert audio electrical signals into sound signals.
  • Electronic device 4500 can listen to music through speaker 4570A, or listen to hands-free calls.
  • the receiver 4570B also called “earpiece” is used to convert audio electrical signals into sound signals.
  • the electronic device 4500 answers a call or a voice message, the voice can be answered by placing the receiver 4570B close to the human ear.
  • Microphone 4570C also known as “microphone”, “microphone”, is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can make a sound by approaching the microphone 4570C through the human mouth, and input the sound signal into the microphone 4570C.
  • the electronic device 4500 may be provided with at least one microphone 4570C. In other embodiments, the electronic device 4500 may be provided with two microphones 4570C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 4500 may further be provided with three, four or more microphones 4570C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the headphone jack 4570D is used to connect wired headphones.
  • the earphone interface 4570D can be a USB interface 4530, or can be a 3.5mm open mobile terminal platform (OMTP) standard interface, a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • OMTP open mobile terminal platform
  • CTIA cellular telecommunications industry association of the USA
  • the pressure sensor 4580A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • pressure sensor 4580A may be provided on display screen 4594.
  • the capacitive pressure sensor may be comprised of at least two parallel plates of conductive material. When a force is applied to the pressure sensor 4580A, the capacitance between the electrodes changes.
  • the electronic device 4500 determines the intensity of the pressure based on the change in capacitance. When a touch operation acts on the display screen 4594, the electronic device 4500 detects the intensity of the touch operation according to the pressure sensor 4580A.
  • the electronic device 4500 can also calculate the touched position according to the detection signal of the pressure sensor 4580A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, the instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, the instruction to create a new short message is executed.
  • the gyro sensor 4580B can be used to determine the motion attitude of the electronic device 4500 .
  • the angular velocity of electronic device 4500 about three axes may be determined by gyro sensor 4580B.
  • the gyro sensor 4580B can be used for image stabilization.
  • the gyroscope sensor 4580B detects the shaking angle of the electronic device 4500, calculates the distance to be compensated by the lens module according to the angle, and allows the lens to counteract the shaking of the electronic device 4500 through reverse motion to achieve anti-shake.
  • the gyroscope sensor 4580B can also be used for navigation and somatosensory game scenarios.
  • Air pressure sensor 4580C is used to measure air pressure. In some embodiments, the electronic device 4500 calculates the altitude from the air pressure value measured by the air pressure sensor 4580C to assist in positioning and navigation.
  • Magnetic sensor 4580D includes a Hall sensor.
  • the electronic device 4500 can detect the opening and closing of the flip holster using the magnetic sensor 4580D.
  • the electronic device 4500 can detect the opening and closing of the flip according to the magnetic sensor 4580D. Further, according to the detected opening and closing state of the leather case or the opening and closing state of the flip cover, characteristics such as automatic unlocking of the flip cover are set.
  • the acceleration sensor 4580E can detect the magnitude of the acceleration of the electronic device 4500 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 4500 is stationary. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • the electronic device 4500 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 4500 can use the distance sensor 4580F to measure the distance to achieve fast focusing.
  • Proximity light sensor 4580G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 4500 emits infrared light to the outside through light emitting diodes.
  • Electronic device 4500 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 4500 . When insufficient reflected light is detected, the electronic device 4500 may determine that there is no object near the electronic device 4500 .
  • the electronic device 4500 can use the proximity light sensor 4580G to detect that the user holds the electronic device 4500 close to the ear to talk, so as to automatically turn off the screen to save power.
  • Proximity light sensor 4580G can also be used in holster mode, pocket mode automatically unlocks and locks the screen.
  • the ambient light sensor 4580L is used to sense ambient light brightness.
  • the electronic device 4500 can adaptively adjust the brightness of the display screen 4594 according to the perceived brightness of the ambient light.
  • the ambient light sensor 4580L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 4580L can also cooperate with the proximity light sensor 4580G to detect whether the electronic device 4500 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 4580H is used to collect fingerprints.
  • the electronic device 4500 can use the collected fingerprint characteristics to unlock fingerprints, access application locks, take photos with fingerprints, answer incoming calls with fingerprints, and so on.
  • the temperature sensor 4580J is used to detect the temperature.
  • the electronic device 4500 utilizes the temperature detected by the temperature sensor 4580J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 4580J exceeds a threshold, the electronic device 4500 performs a performance reduction of the processor located near the temperature sensor 4580J in order to reduce power consumption and implement thermal protection.
  • the electronic device 4500 when the temperature is lower than another threshold, the electronic device 4500 heats the battery 4542 to avoid abnormal shutdown of the electronic device 4500 due to low temperature.
  • the electronic device 4500 boosts the output voltage of the battery 4542 to avoid abnormal shutdown caused by low temperature.
  • Touch sensor 4580K also known as "touch device”.
  • the touch sensor 4580K can be arranged on the display screen 4594, and the touch sensor 4580K and the display screen 4594 form a touch screen, also called “touch screen”.
  • the touch sensor 4580K is used to detect touch operations on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 4594.
  • the touch sensor 4580K may also be disposed on the surface of the electronic device 4500, which is different from the location where the display screen 4594 is located.
  • the bone conduction sensor 4580M can acquire vibration signals.
  • the bone conduction sensor 4580M can acquire the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 4580M can also contact the human pulse and receive the blood pressure beating signal.
  • the bone conduction sensor 4580M can also be disposed in the earphone, combined with the bone conduction earphone.
  • the audio module 4570 can analyze the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 4580M, and realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 4580M, and realize the function of heart rate detection.
  • the keys 4590 include a power-on key, a volume key, and the like. Keys 4590 may be mechanical keys. It can also be a touch key.
  • the electronic device 4500 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 4500 .
  • Motor 4591 can generate vibration alerts.
  • the motor 4591 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 4591 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 4594 .
  • Different application scenarios for example: time reminder, receiving information, alarm clock, games, etc.
  • the touch vibration feedback effect can also support customization.
  • the indicator 4592 can be an indicator light, which can be used to indicate the charging status, the change of power, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 4595 is used to connect a SIM card.
  • the SIM card can be contacted and separated from the electronic device 4500 by inserting into the SIM card interface 4595 or pulling out from the SIM card interface 4595 .
  • the electronic device 4500 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 4595 can support Nano SIM card, Micro SIM card, SIM card, etc.
  • the same SIM card interface 4595 can insert multiple cards at the same time.
  • the types of the plurality of cards may be the same or different.
  • the SIM card interface 4595 can also be compatible with different types of SIM cards.
  • the SIM card interface 4595 is also compatible with external memory cards.
  • the electronic device 4500 interacts with the network through the SIM card to realize functions such as call and data communication.
  • the electronic device 4500 employs an eSIM, ie: an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 4500 and cannot be separated from the electronic device 4500 .
  • the disclosed apparatus/electronic device and method may be implemented in other manners.
  • the above-described embodiments of the apparatus/electronic device are only illustrative.
  • the division of the modules or units is only a logical function division. In actual implementation, there may be other division methods, such as multiple units. Or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated modules/units if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the present application can implement all or part of the processes in the methods of the above embodiments, and can also be completed by instructing the relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium, and the computer When the program is executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form, and the like.
  • the computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory) ), random access memory (RAM, Random Access Memory), electrical carrier signals, telecommunication signals, and software distribution media, etc. It should be noted that the content contained in the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction, for example, in some jurisdictions, according to legislation and patent practice, computer-readable Storage media exclude electrical carrier signals and telecommunications signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

本申请适用于终端技术领域,提供了一种投屏方法、装置、电子设备及计算机可读存储介质。在本申请提供的投屏方法中,第一设备响应于内容选定操作,确定投屏内容,并对投屏内容进行处理,得到投屏图像。第一设备通过上述投屏方法进行投屏时,用户通过内容选定操作自行选定投屏内容,第一设备针对性地进行投屏,从而避免投屏图像中出现用户不想投屏的内容。

Description

投屏方法、装置、电子设备及计算机可读存储介质
本申请要求于2020年11月13日提交国家知识产权局、申请号为202011271351.8、申请名称为“投屏方法、装置、电子设备及计算机可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请属于终端技术领域,尤其涉及一种投屏方法、装置、电子设备及计算机可读存储介质。
背景技术
投屏是将一个电子设备的屏幕画面实时传递至另一个电子设备的屏幕进行显示的技术。
然而,在当前的投屏方案中,大多是将投屏端的电子设备的屏幕上当前显示的所有内容投屏到被投屏端的电子设备的屏幕上进行显示。
投屏方式单一,而且被投屏端可能出现用户不想投屏的内容。
发明内容
本申请实施例提供了一种投屏方法、装置、电子设备及计算机可读存储介质,可以解决当前的投屏方案投屏方式单一,且被投屏端有可能出现用户不想投屏的内容的问题。
第一方面,本申请实施例提供了一种投屏方法,包括:
第一设备将第一显示界面中被内容选定操作选中的对象确定为投屏内容,所述第一显示界面为所述第一设备当前屏幕显示的界面;
所述第一设备对所述投屏内容进行处理,得到投屏图像;
所述第一设备向第二设备发送所述投屏图像,所述投屏图像用于在所述第二设备的屏幕上显示。
需要说明的是,内容选定操作为用户对第一设备的操作。用户可以通过内容选定操作在第一设备当前屏幕显示的界面(即第一显示界面)中选择需要投屏的内容。
当第一设备检测到内容选定操作时,第一设备可以将第一显示界面中被内容选定操作选中的对象确定为投屏内容。
在确定了投屏内容之后,第一设备可以对投屏内容进行处理,得到投屏图像。
在得到投屏图像之后,第一设备可以根据预设的投屏协议对投屏图像进行编码,以编码数据的形式将投屏图像传递至第二设备。
第二设备接收到编码数据之后,可以对编码数据进行解码,得到投屏图像,将投屏图像显示在第二设备的屏幕上,完成本次的投屏操作。
内容选定操作的形式可以根据实际需求进行设置。例如,内容选定操作可以包括点击、拖动、长按等操作中的任意一种或多种的组合。
在第一方面的一种可能的实现方式中,所述投屏内容为文字和/或图像。
需要说明的是,上述投屏内容的形式根据实际需求进行设置。例如,在一些应用场景中,第一设备可以将投屏内容设置为文字,即用户可以选择文字进行投屏。在另一些应用场景中,第一设备可以将投屏内容设置为图像,即用户可以选择图像进行投屏。在另一些应用场景中,第一设备还可以将投屏内容设置为文字和图像。在另一些应用场景中,第一设备也可以将投屏内容设置为其他类型的对象。
在第一方面的一种可能的实现方式中,在所述第一设备将第一显示界面中被内容选定操作选中的对象确定为投屏内容之后,还包括:
所述第一设备检测是否已建立投屏连接;
若所述第一设备未建立投屏连接,则所述第一设备执行搜索操作,并显示第一列表,所述第一列表用于显示被搜索到的电子设备;
所述第一设备响应于对所述第一列表的选择操作,从所述被搜索到的电子设备中确定第二设备;
所述第一设备与所述第二设备建立投屏连接。
需要说明的是,第一设备在确定了投屏内容之后,可以检测是否已建立投屏连接。
如果第一设备未建立投屏连接,则第一设备可以执行搜索操作,搜索周围可被投屏的电子设备,并将搜索到的电子设备以第一列表的形式展示为用户。
用户在查看了第一列表之后,可以对第一列表执行选择操作,从第一列表中选择第二设备。
当第一设备检测到选择操作时,第一设备根据选择操作从被搜索到的电子设备中确定第二设备,并与第二设备建立投屏连接。
在第一方面的一种可能的实现方式中,所述第一设备对所述投屏内容进行处理,得到投屏图像,包括:
所述第一设备获取所述投屏内容所在的目标图层;
所述第一设备对所述目标图层进行合成,得到投屏图像。
需要说明的是,第一设备在生成投屏图像时,可以获取投屏内容所在目标图层。
由于同一图层中的数据属于同一应用程序,因此目标图层的内容可视为与投屏内容强关联的内容。
此时,第一设备可以对目标图层进行合成,得到投屏图像。投屏图像中仅包含投屏内容以及与投屏内容强关联的内容,没有其他图层干扰,提高了投屏的准确性。
在第一方面的一种可能的实现方式中,所述第一设备对所述投屏内容进行处理,得到投屏图像,包括:
所述第一设备获取所述第一显示界面的显示图像以及所述投屏内容的位置信息;
所述第一设备根据所述位置信息对所述显示图像进行截取操作,得到目标区域的截取图形数据,所述目标区域为所述投屏内容所在的区域;
所述第一设备对所述截取图形数据进行合成,得到投屏图像。
需要说明的是,第一设备在生成投屏图像时,也可以获取投屏内容的位置信息以及第一设备当前屏幕显示的界面的图像(即第一显示界面的显示图像)。
第一设备在获取到投屏内容的位置信息之后,可以根据位置信息确定目标区域(即 投屏内容所在的区域)。
然后,第一设备可以截取显示图像中目标区域的图形数据,得到截取图形数据。
之后,第一设备可以对截取图形数据进行合成,得到投屏图像。
由于第一设备是根据投屏内容自动确定目标区域,不是用户手动选取目标区域,因此,通过上述方法进行投屏时,可以提高框选目标区域的准确性,避免目标区域中存在用户不想投屏的内容,从而提高投屏的准确性。
在第一方面的一种可能的实现方式中,所述第一设备对所述投屏内容进行处理,得到投屏图像,包括:
所述第一设备获取所述投屏内容所在的目标图层以及所述投屏内容的位置信息;
所述第一设备根据所述位置信息对所述目标图层进行截取操作,得到目标区域的区域图形数据,所述目标区域为所述投屏内容所在的区域;
所述第一设备对所述区域图形数据进行合成,得到投屏图像。
需要说明的是,第一设备在生成投屏图像时,也可以先获取投屏图像所在的目标图层以及投屏内容的位置信息。
然后,第一设备可以根据上述位置信息对目标图层进行截取操作,得到目标区域的区域图形数据。
上述目标图层可以是一个图层,或者,也可以是多个图层。相应的,上述区域图形数据可以是一个或多个。
上述对区域图形数据进行合成是指将一个或多个目标图层对应的区域图形数据合成为一张投屏图像。
通过上述方法生成投屏图像,可以确保投屏图像中仅存在被选中的投屏内容,且投屏内容不会被其他无关图层的内容遮挡,提高了投屏的准确性。
在第一方面的一种可能的实现方式中,所述第一设备对所述投屏内容进行处理,得到投屏图像,包括:
所述第一设备获取第二设备的投屏配置信息,根据所述投屏配置信息对所述投屏内容进行排版,得到第一内容;
所述第一设备对所述第一内容对应的图形数据进行渲染,得到投屏图像。
需要说明的是,第一设备在生成投屏图像时,可以先获取第二设备的投屏配置信息,根据投屏配置信息对投屏内容进行排版,得到第一内容。
之后,第一设备可以获取第一内容对应的图形数据,对第一内容对应的图形数据进行渲染和合成,得到投屏图像。
在第一方面的一种可能的实现方式中,所述第一设备获取第二设备的投屏配置信息,根据所述投屏配置信息对所述投屏内容进行排版,得到第一内容,包括:
所述第一设备获取第二设备的投屏配置信息,根据所述投屏配置信息对所述投屏内容进行排版,得到排版后的投屏内容;
所述第一设备将所述排版后的投屏内容设置为第一内容。
需要说明的是,第一设备在确定了投屏内容之后,可以仅对本次确定的投屏内容进行投屏。
此时,第一设备可以根据投屏配置信息对投屏内容进行排版,得到排版后的投屏 内容。
然后,第一设备将排版后的投屏内容设置为第一内容,清除、覆盖或替换前一次生成的第一内容。
在第一方面的一种可能的实现方式中,所述第一设备获取第二设备的投屏配置信息,根据所述投屏配置信息对所述投屏内容进行排版,得到第一内容,包括:
所述第一设备获取第二设备的投屏配置信息,根据所述投屏配置信息对所述投屏内容进行排版,得到排版后的投屏内容;
所述第一设备对上一次生成的第一内容和所述排版后的投屏内容进行组合,得到新的第一内容。
需要说明的是,第一设备在确定了投屏内容之后,也可以将本次确定的投屏内容与上一次生成的第一内容合并投屏。
此时,第一设备可以根据投屏配置信息对投屏内容进行排版,得到排版后的投屏内容。
然后,第一设备对上一次生成的第一内容和排版后的投屏内容进行组合,得到新的第一内容。
组合的方式可以根据实际需求进行设置。例如,在一些应用场景中,可以将排版后的投屏内容续接在上一次生成的第一内容的末尾,得到新的第一内容。
在第一方面的一种可能的实现方式中,所述投屏配置信息包括所述第二设备的屏幕分辨率和/或所述第二设备的屏幕尺寸。
需要说明的是,上述投屏配置信息可以包括第二设备的屏幕分辨率、第二设备的屏幕尺寸、字体、颜色、字号等信息中的任意一种或多种的组合。
第二方面,本申请实施例提供了一种投屏装置,包括:
内容选定模块,用于将第一显示界面中被内容选定操作选中的对象确定为投屏内容,所述第一显示界面为第一设备当前屏幕显示的界面;
图像生成模块,用于对所述投屏内容进行处理,得到投屏图像;
图像发送模块,用于向第二设备发送所述投屏图像,所述投屏图像用于在所述第二设备的屏幕上显示。
在第二方面的一种可能的实现方式中,所述投屏内容为文字和/或图像。
在第二方面的一种可能的实现方式中,所述装置还包括:
连接检测模块,用于检测是否已建立投屏连接;
设备搜索模块,用于若所述第一设备未建立投屏连接,则执行搜索操作,并显示第一列表,所述第一列表用于显示被搜索到的电子设备;
设备选择模块,用于响应于对所述第一列表的选择操作,从所述被搜索到的电子设备中确定第二设备;
连接建立模块,用于与所述第二设备建立投屏连接。
在第二方面的一种可能的实现方式中,所述图像生成模块,包括:
目标图层子模块,用于获取所述投屏内容所在的目标图层;
图层合成子模块,用于对所述目标图层进行合成,得到投屏图像。
在第二方面的另一种可能的实现方式中,所述图像生成模块,包括:
位置获取子模块,用于获取所述第一显示界面的显示图像以及所述投屏内容的位置信息;
位置截取子模块,用于根据所述位置信息对所述显示图像进行截取操作,得到目标区域的截取图形数据,所述目标区域为所述投屏内容所在的区域;
截取合成子模块,用于对所述截取图形数据进行合成,得到投屏图像。
在第二方面的另一种可能的实现方式中,所述图像生成模块,包括:
目标获取子模块,用于获取所述投屏内容所在的目标图层以及所述投屏内容的位置信息;
区域截取子模块,用于根据所述位置信息对所述目标图层进行截取操作,得到目标区域的区域图形数据,所述目标区域为所述投屏内容所在的区域;
图像合成子模块,用于对所述区域图形数据进行合成,得到投屏图像。
在第二方面的另一种可能的实现方式中,所述图像生成模块,包括:
内容排版子模块,用于获取第二设备的投屏配置信息,根据所述投屏配置信息对所述投屏内容进行排版,得到第一内容;
图像渲染子模块,用于对所述第一内容对应的图形数据进行渲染,得到投屏图像。
在第二方面的一种可能的实现方式中,所述内容排版子模块,包括:
第一排版子模块,用于获取第二设备的投屏配置信息,根据所述投屏配置信息对所述投屏内容进行排版,得到排版后的投屏内容;
内容设置子模块,用于将所述排版后的投屏内容设置为第一内容。
在第二方面的另一种可能的实现方式中,所述内容排版子模块,包括:
第二排版子模块,用于获取第二设备的投屏配置信息,根据所述投屏配置信息对所述投屏内容进行排版,得到排版后的投屏内容;
内容合并子模块,用于对上一次生成的第一内容和所述排版后的投屏内容进行组合,得到新的第一内容。
在第二方面的一种可能的实现方式中,所述投屏配置信息包括所述第二设备的屏幕分辨率和/或所述第二设备的屏幕尺寸。
第三方面,提供了一种电子设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时,电子设备实现如上述方法的步骤。
第四方面,提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时,使得电子设备实现如上述方法的步骤。
第五方面,提供了一种芯片***,所述芯片***可以为单个芯片,或者多个芯片组成的芯片模组,所述芯片***包括存储器和处理器,所述处理器执行所述存储器中存储的计算机程序,以实现如上述方法的步骤。
本申请实施例与现有技术相比存在的有益效果是:
在本申请提供的投屏方法中,第一设备响应于用户的内容选定操作,确定投屏内容。之后,第一设备对投屏内容进行处理,得到投屏图像,并向第二设备发送投屏图像。
当第一设备通过上述投屏方法进行投屏时,第一设备不会将第一显示界面的全部 内容投屏至第二设备,而是根据被选中的投屏内容针对性地进行投屏,投屏方式灵活,避免出现用户不想投屏的内容,具有较强的易用性和实用性。
附图说明
图1是本申请实施例提供的一种投屏***的结构示意图;
图2是本申请实施例提供的一种应用场景的示意图;
图3是本申请实施例提供的另一种应用场景的示意图;
图4是本申请实施例提供的另一种应用场景的示意图;
图5是本申请实施例提供的另一种应用场景的示意图;
图6是本申请实施例提供的另一种应用场景的示意图;
图7是本申请实施例提供的另一种应用场景的示意图;
图8是本申请实施例提供的另一种应用场景的示意图;
图9是本申请实施例提供的另一种应用场景的示意图;
图10是本申请实施例提供的另一种应用场景的示意图;
图11是本申请实施例提供的另一种应用场景的示意图;
图12是本申请实施例提供的另一种应用场景的示意图;
图13是本申请实施例提供的另一种应用场景的示意图;
图14是本申请实施例提供的另一种应用场景的示意图;
图15是本申请实施例提供的另一种应用场景的示意图;
图16是本申请实施例提供的另一种应用场景的示意图;
图17是本申请实施例提供的另一种应用场景的示意图;
图18是本申请实施例提供的另一种应用场景的示意图;
图19是本申请实施例提供的另一种应用场景的示意图;
图20是本申请实施例提供的另一种应用场景的示意图;
图21是本申请实施例提供的另一种应用场景的示意图;
图22是本申请实施例提供的另一种应用场景的示意图;
图23是本申请实施例提供的另一种应用场景的示意图;
图24是本申请实施例提供的另一种应用场景的示意图;
图25是本申请实施例提供的另一种应用场景的示意图;
图26是本申请实施例提供的另一种应用场景的示意图;
图27是本申请实施例提供的另一种应用场景的示意图;
图28是本申请实施例提供的另一种应用场景的示意图;
图29是本申请实施例提供的另一种应用场景的示意图;
图30是本申请实施例提供的另一种应用场景的示意图;
图31是本申请实施例提供的另一种应用场景的示意图;
图32是本申请实施例提供的另一种应用场景的示意图;
图33是本申请实施例提供的另一种应用场景的示意图;
图34是本申请实施例提供的另一种应用场景的示意图;
图35是本申请实施例提供的一种投屏方法的流程示意图;
图36是本申请实施例提供的另一种应用场景的示意图;
图37是本申请实施例提供的另一种应用场景的示意图;
图38是本申请实施例提供的另一种应用场景的示意图;
图39是本申请实施例提供的另一种应用场景的示意图;
图40是本申请实施例提供的另一种应用场景的示意图;
图41是本申请实施例提供的另一种应用场景的示意图;
图42是本申请实施例提供的另一种应用场景的示意图;
图43是本申请实施例提供的另一种应用场景的示意图;
图44是本申请实施例提供的另一种投屏方法的流程示意图;
图45是本申请实施例提供的电子设备的示意图。
具体实施方式
以下描述中,为了说明而不是为了限定,提出了诸如特定***结构、技术之类的具体细节,以便透彻理解本申请实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本申请。在其它情况中,省略对众所周知的***、装置、电路以及方法的详细说明,以免不必要的细节妨碍本申请的描述。
应当理解,当在本申请说明书和所附权利要求书中使用时,术语“包括”指示所描述特征、整体、步骤、操作、元素和/或组件的存在,但并不排除一个或多个其它特征、整体、步骤、操作、元素、组件和/或其集合的存在或添加。
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。
如在本申请说明书和所附权利要求书中所使用的那样,术语“如果”可以依据上下文被解释为“当...时”或“一旦”或“响应于确定”或“响应于检测到”。类似地,短语“如果确定”或“如果检测到[所描述条件或事件]”可以依据上下文被解释为意指“一旦确定”或“响应于确定”或“一旦检测到[所描述条件或事件]”或“响应于检测到[所描述条件或事件]”。
另外,在本申请说明书和所附权利要求书的描述中,术语“第一”、“第二”、“第三”等仅用于区分描述,而不能理解为指示或暗示相对重要性。
在本申请说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。
本申请实施例提供的投屏方法可以应用于手机、平板电脑、可穿戴设备、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)等电子设备上,本申请实施例对电子设备的具体类型不作任何限制。
投屏是指将一个电子设备的屏幕画面实时传递至另一个电子设备的屏幕进行显示的技术。
当前存在一些较为成熟的投屏方案,例如,适用于苹果移动操作***(iPhone Operation System,iOS)和麦金塔(Macintosh,Mac)***的隔空播放(AirPlay)技术、Wi-Fi联盟制定的Miracast协议、以及与Miracast协议同源的无线高清(WirelessDisplay,WiDi)技术等。
这些投屏方案可以将投屏端(发起投屏的电子设备,也可以称为源端,Source端)的完整的屏幕画面进行合成和编码,然后将编码后的图像发送至被投屏端(响应投屏的电子设备,也可以称为接收端,Sink端)。Sink端对编码后的图像进行解码,并以镜像的方式显示在屏幕上。
这些方案可以将Source端的完整的屏幕画面发送至Sink端的屏幕进行显示,但是,有的用户可能只想分享Source端的屏幕的部分内容,并不想将其他内容分享给Sink端。
因此,在现有的投屏方案中,存在投屏方式单一,而且被投屏端可能出现用户不想投屏的内容的问题。
有鉴于此,本申请实施例提供了一种投屏方法、装置、电子设备及计算机可读存储介质,可以使投屏区域准确地覆盖用户想要投屏的内容,解决了现有的投屏方案中,投屏方式单一,而且被投屏端可能出现用户不想投屏的内容的问题,具有较强的易用性和实用性。
首先,以图1所示的投屏***为例,该投屏***是本申请实施例提供的投屏方法适用的一种***。
如图1所示,该投屏***中包括至少一个第一设备101(图1中仅示出一个)和至少一个第二设备102(图1中仅示出一个)。
其中,第一设备101为发起投屏的电子设备,第二设备102为响应投屏的电子设备。
上述第一设备101和上述第二设备102均设置有无线通信模块。第一设备101可以通过本设备的无线通信模块以及预设的投屏协议与第二设备102的无线通信模块建立投屏连接103。
预设的投屏协议可以为AirPlay协议、Miracast协议、WiDi协议、数字生活网络联盟(DIGITAL LIVING NETWORK ALLIANCE,DLNA)协议等投屏协议中的任意一种。或者,预设的投屏协议也可以为厂商自定义的投屏协议。本申请实施例对投屏协议的具体类型不予限制。
当第一设备101与第二设备102建立了投屏连接103时,第一设备101可以响应于用户的操作,将用户想要分享的内容传递至第二设备102。
第二设备102接收到第一设备101传输的内容后,将上述内容显示在第二设备102的屏幕上,完成投屏操作。
以下,将根据图1所示的投屏***并结合具体的应用场景,对本申请实施例提供的投屏方法进行详细描述。
1、确定投屏内容。
当用户希望将第一设备显示的内容分享给第二设备时,用户可以执行内容选定操作。此时,第一设备可以响应于用户的内容选定操作,确定投屏内容,并执行投屏操 作。
上述内容选定操作的形式可以根据实际场景进行设置。例如,当第一设备设置有触控屏时,内容选定操作可以包括长按屏幕、点击屏幕、滑动屏幕等手势操作中的任意一种或多种的组合。当第一设备设置有鼠标、手写笔等辅助操作的配件设备时,内容选定操作可以包括点击配件设备、拖动配件设备等操作中的任意一种或多种的组合。
上述投屏内容可以包括文字、图片、网页控件等对象中的任意一种或多种的组合。
例如,请参阅图2,假设第一设备为手机。当用户想要分享手机当前显示界面中的文字时,用户可以对手机的屏幕进行操作,长按文字部分的内容。
此时,手机可以响应于用户的长按操作,显示内容选择框以及操作选项。内容选择框为图2中斜线填充区域。操作选项可以包括“复制”、“分享”、“全选”、“投屏”等选项。
如图3和图4所示,当用户拖动内容选择框时,手机可以响应于用户对内容选择框的拖动操作,向两侧扩展内容选择框的覆盖区域,框选相应的文字。
如图5所示,在用户完成对内容选择框的拖动操作之后,用户点击操作选项中的“投屏”选项。此时,手机可以响应于用户对操作选项中“投屏”选项的点击操作,将内容选择框当前框选的文字“测试文字测”确定为投屏内容。
又比如,请参阅图6、图7和图8,假设第一设备为台式电脑,该台式电脑配备有鼠标。
当用户想要分享台式电脑当前显示界面中的文字时,用户可以长按鼠标左键,并拖动鼠标。此时,台式电脑可以响应于用户对鼠标的点击和拖动操作,显示内容选择框,并调整内容选择框的覆盖区域(即图7和图8中斜线填充区域),框选相应的文字。
如图8所示,用户在选中相应的文字之后,可以在内容选择框的覆盖区域内点击鼠标的右键。此时,台式电脑响应于用户的点击操作,显示操作选项。操作选项可以包括“复制”、“分享”、“全选”、“投屏”等选项。
然后,用户控制鼠标,左键点击“投屏”选项。此时,台式电脑响应于用户对操作选项中“投屏”选项的点击操作,将内容选择框当前框选的文字“测试文字测”确定为投屏内容。
又比如,请参阅图9,假设第一设备为手机。当用户想要分享手机当前显示界面中的图像时,用户可以长按想要分享的图像。
此时,手机可以响应于用户的长按操作,显示操作选项。操作选项可以包括“下载图像”、“分享图像”、“投屏图像”等选项。
然后,如图10所示,用户在选中了需要投屏的图像之后,可以点击操作选项中的“投屏图像”选项。此时,手机可以响应于用户对操作选项中“投屏图像”选项的点击操作,将上述被选中的图像确定为投屏内容。
2、建立投屏连接。
在第一设备基于内容选定操作确定了投屏内容之后,第一设备可以检测本设备是否已经与第二设备建立投屏连接。
在一些实施例中,第一设备可能检测到本设备已经与第二设备建立了投屏连接。 此时,第一设备可以跳过与第二设备建立投屏连接的操作。
在另一些实施例中,第一设备可能检测到本设备未与第二设备建立投屏连接。此时,第一设备可以执行搜索操作,查找周围可被投屏的电子设备,并展示被搜索到的电子设备的设备列表(即上述第一列表)。
当第一设备展示设备列表时,用户可以通过设备列表直观的查看到周围可被投屏的电子设备。然后,用户可以对上述设备列表执行选择操作,从设备列表展示的各个电子设备中选择第二设备。
当第一设备检测到用户的选择操作时,第一设备可以响应于上述选择操作,确定第二设备,并发送投屏请求至第二设备。
When the second device receives the projection request, it can approve or reject the request according to a preset response rule; alternatively, the second device can approve or reject the request in response to a user operation on the second device.
If the second device approves the projection request, the first device establishes the projection connection with the second device and creates a virtual display window. The virtual display window is used to manage the projection image.
If the second device rejects the projection request, the first device stops the projection operation; alternatively, the first device can present the list again so that the user can reselect a second device.
The preset response rule can be set according to the actual scenario. For example, in some embodiments, the rule may be: reject projection requests sent by electronic devices on a blacklist, and approve by default projection requests sent by electronic devices not on the blacklist. In other embodiments, the rule may be set to other conditions. The embodiments of this application place no restriction on the specific conditions of the preset response rule.
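The blacklist-based response rule described above might, for example, be sketched as follows; the stable device identifier is assumed to be carried in the projection request:

    // Requests from blacklisted devices are rejected; all others are approved by default.
    class RequestPolicy(private val blacklist: Set<String>) {
        fun shouldAccept(sourceDeviceId: String): Boolean = sourceDeviceId !in blacklist
    }
    // Example: RequestPolicy(setOf("untrusted-phone")).shouldAccept("my-phone") returns true.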
For example, referring to Figure 10, assume the first device is a mobile phone. In response to the user long-pressing an image in the current display interface, the phone displays operation options, which may include "Download image", "Share image", "Cast image" and the like.
The user, wanting to project the selected image to another electronic device, taps the "Cast image" option. The phone then, in response to the tap on the "Cast image" option, determines the selected image as the projection content.
The phone then checks whether it has already established a projection connection with a second device.
As shown in Figure 11, when the phone detects that it has not established a projection connection with a second device, it performs a search operation for nearby electronic devices available for projection and displays a search interface.
As shown in Figure 12, after completing the search operation, the phone generates a device list and displays it on the current display interface of the phone screen. The device list contains the identifiers of the discovered electronic devices.
Assume the device list contains three discovered electronic devices: "Smart TV", "Laptop" and "Desktop computer". After viewing the list, the user taps the "Smart TV" option, wanting to project the projection content to the smart TV.
At this point, as shown in Figure 13, the phone responds to the tap on the "Smart TV" option by sending a projection request to the smart TV.
After receiving the projection request, the smart TV detects that the requesting phone is not on the blacklist, approves the request by default, and establishes a projection connection with the phone.
Alternatively, the smart TV can display a prompt box on its screen asking the user whether to approve the projection. If the user taps the "Yes" option in the prompt box, the smart TV approves the projection request and establishes a projection connection with the phone. If the user taps the "No" option, the smart TV rejects the projection request and returns an error message to the phone.
3. Generating the projection image.
After the projection connection is established, the first device can generate a projection image from the projection content and transfer the projection image over the projection connection to the second device for display.
In some embodiments, the first device can obtain the selected projection content through preset controls. For example, when the selected projection content is text, the first device can obtain the selected text through a text control; when the selected projection content is an image, the first device can obtain the selected image through an image control.
After obtaining the projection content, the first device can lay out the projection content according to the projection configuration information of the second device, obtaining the first content.
The contents of the projection configuration information can be set according to actual needs. For example, in some embodiments, the projection configuration information may include one or more of the second device's screen resolution, screen size, font, font size, color and other information.
The time at which the projection configuration information is obtained can also be set according to actual needs. For example, in some embodiments, the first device can obtain the second device's projection configuration information immediately after establishing the projection connection; in other embodiments, it can obtain that information while laying out the projection content; in still other embodiments, it can obtain that information at other times. The embodiments of this application place no restriction on when the first device obtains the second device's projection configuration information.
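Purely as a sketch, the projection configuration information and a simple layout adjustment derived from it might be modeled as follows; the field set follows the examples above, and all names are assumptions of this sketch:

    // Configuration exchanged over the projection connection.
    data class CastConfig(
        val screenWidthPx: Int,     // screen resolution of the second device
        val screenHeightPx: Int,
        val screenInches: Float,    // screen size of the second device
        val fontScale: Float = 1.0f // optional font preference
    )

    // Scale a font so text fills a similar fraction of the sink screen width.
    fun scaledFontPx(sourceFontPx: Float, sourceWidthPx: Int, sink: CastConfig): Float =
        sourceFontPx * sink.fontScale * sink.screenWidthPx / sourceWidthPx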
After generating the first content, the first device can render and composite the graphics data corresponding to the first content to obtain the projection image.
After obtaining the projection image, the first device can encode it according to the preset projection protocol to obtain encoded data, and transfer the encoded data to the second device over the projection connection.
After receiving the encoded data, the second device decodes it according to the preset projection protocol to obtain the projection image, and displays the projection image on its screen, completing the projection operation.
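One plausible Android realization of the render-composite-encode chain is sketched below, under the assumption that the virtual display window feeds an H.264 (AVC) encoder through its input surface; the bitrate and display name are assumptions, and creating such a display for capture would in practice also require MediaProjection or a system capture permission, which is omitted here:

    import android.hardware.display.DisplayManager
    import android.hardware.display.VirtualDisplay
    import android.media.MediaCodec
    import android.media.MediaCodecInfo
    import android.media.MediaFormat

    // Wire a virtual display straight into a video encoder, so whatever the window
    // composition service draws into it comes out as encoded frames.
    fun startEncodePipeline(
        displayManager: DisplayManager,
        width: Int, height: Int, dpi: Int
    ): Pair<MediaCodec, VirtualDisplay> {
        val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height).apply {
            setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
            setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000) // assumed bitrate
            setInteger(MediaFormat.KEY_FRAME_RATE, 30)
            setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
        }
        val encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
        val inputSurface = encoder.createInputSurface() // must sit between configure() and start()
        encoder.start()
        val virtualDisplay = displayManager.createVirtualDisplay(
            "castDisplay", width, height, dpi, inputSurface,
            DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION
        )
        // Encoded data would then be drained via encoder.dequeueOutputBuffer(...)
        // and sent to the second device over the projection connection.
        return encoder to virtualDisplay
    }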
For example, assume the first device is a mobile phone, the second device is a smart TV, and the phone's operating system is Android. As shown in Figures 14 and 15, the hatched portion is the content selection box, and the text and image inside it are the projection content selected by the user.
After the user taps the "Cast" option among the operation options, the phone can create, through the projection service, a virtual display window (display) and the activity management (Activity) corresponding to that virtual display window.
The projection service then passes the selected text to the Activity through a text control (TextView), and passes the selected image to the Activity through an image control (ImageView).
In addition, the Activity can obtain the second device's projection configuration information through a local area network (LAN) service. The projection configuration information may include the second device's screen size, the second device's display resolution and other information.
As shown in Figure 16, the Activity adjusts the window size of the virtual display window according to the second device's projection configuration information, and lays out the obtained projection content according to that configuration information, adjusting the position, size and the like of the text and image, thereby obtaining the laid-out first content.
The Activity can then pass the graphics data of the first content to the graphics processing unit (GPU) for rendering, and pass the rendered graphics data to the virtual display window.
The virtual display window passes the rendered graphics data to the window composition service (SurfaceFlinger), which composites the rendered graphics data to obtain the projection image.
As shown in Figure 17, after the projection image is obtained, the projection service can encode it according to the preset projection protocol to obtain encoded data, and transfer the encoded data to the smart TV over the projection connection.
The smart TV decodes the encoded data according to the preset projection protocol to obtain the projection image, and displays it on the smart TV's screen, completing this projection operation.
In other embodiments, the first device can obtain the target layer where the projection content is located.
For example, assume the interface currently displayed by the first device is composited from layer a corresponding to application A, layer b corresponding to application B and layer c corresponding to application C, and the selected projection content is content of applications B and C; the first device can then first obtain layer b and layer c.
The first device can then perform a composition operation on the target layers to obtain the projection image.
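Since SurfaceFlinger's layer composition is not a public interface, the layer-selection idea can only be illustrated here with each layer modeled as a bitmap; a minimal composition sketch:

    import android.graphics.Bitmap
    import android.graphics.Canvas

    // Draw only the target layers (e.g. layer b and layer c) into the projection image.
    fun composeLayers(width: Int, height: Int, targetLayers: List<Bitmap>): Bitmap {
        val out = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
        val canvas = Canvas(out)
        for (layer in targetLayers) {
            canvas.drawBitmap(layer, 0f, 0f, null) // layers share the screen coordinate system
        }
        return out
    }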
For example, assume the first device is a mobile phone, the second device is a smart TV, and the phone's operating system is Android.
Assume the interface currently displayed by the phone is the content shown in Figure 18. As shown in Figure 19, the phone's current display interface is composited from layer a, layer b and layer c.
As shown in Figure 20, the phone can, in response to a user operation, determine the projection content from the currently displayed interface (the hatched area in Figure 20).
After the user taps the "Cast" option among the operation options, the phone can create, through the projection service, a virtual display window (display) and the activity management (Activity) corresponding to that virtual display window.
The projection service can then pass the target layer corresponding to the projection content (i.e., layer b) to the Activity.
The Activity passes layer b to the virtual display window. The virtual display window passes layer b to the window composition service (SurfaceFlinger), which composites layer b to obtain the projection image.
As shown in Figure 21, after the projection image is obtained, the projection service can encode it according to the preset projection protocol to obtain encoded data, and transfer the encoded data to the smart TV over the projection connection.
The smart TV decodes the encoded data according to the preset projection protocol to obtain the projection image, and displays it on the smart TV's screen, completing this projection operation.
Since the content of a single layer belongs to a single application, the content of the same layer can be considered strongly correlated. Therefore, when the first device generates the projection image by the above method, the projection image contains only the projection content and content strongly associated with it, without interference from unrelated layers; this prevents the projection content from being occluded and improves the accuracy of the projection.
In other embodiments, the first device can obtain a display image of its currently displayed interface together with position information of the projection content.
The first device can determine, from the position information of the projection content, the region where the projection content is located, i.e., the target region.
The first device can then perform a cropping operation on the display image, cropping out the graphics data of the target region of the display image, i.e., the cropped graphics data.
The first device then composites the cropped graphics data to obtain the projection image.
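A minimal sketch of this cropping operation, assuming the display image is available as a bitmap and the position information has been reduced to a rectangle:

    import android.graphics.Bitmap
    import android.graphics.Rect

    // `display` is the full display image; `region` is the target region
    // derived from the projection content's position information.
    fun cropTargetRegion(display: Bitmap, region: Rect): Bitmap =
        Bitmap.createBitmap(display, region.left, region.top, region.width(), region.height())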
For example, assume the first device is a mobile phone, the second device is a smart TV, the phone's operating system is Android, and the phone's currently displayed interface is the content shown in Figure 18.
As shown in Figure 20, the phone can, in response to a user operation, determine the projection content from the currently displayed interface (the hatched area in Figure 20).
After the user taps the "Cast" option among the operation options, the phone can create, through the projection service, a virtual display window (display) and the activity management (Activity) corresponding to that virtual display window.
The projection service can then pass the display image of the phone's currently displayed interface and the position information corresponding to the projection content to the Activity.
As shown in Figure 22, after obtaining the display image and the position information corresponding to the projection content, the Activity crops the display image according to that position information to obtain the cropped graphics data of the target region.
The Activity then passes the cropped graphics data to the virtual display window. The virtual display window passes the cropped graphics data to the window composition service (SurfaceFlinger), which composites the cropped graphics data to obtain the projection image.
After the projection image is obtained, the projection service can encode it according to the preset projection protocol to obtain encoded data, and transfer the encoded data to the smart TV over the projection connection.
The smart TV decodes the encoded data according to the preset projection protocol to obtain the projection image, and displays it on the smart TV's screen, completing this projection operation.
When the first device generates the projection image by this method, the target region is determined automatically from the position information of the projection content rather than framed manually by the user. The projection image therefore contains only the content of the region where the projection content is located, and nothing outside that region, thereby preventing content the user does not want to share from being projected to the second device.
In still other embodiments, the first device can first obtain the target layer where the selected projection content is located.
For example, assume the interface currently displayed by the first device is composited from layer a corresponding to application A, layer b corresponding to application B and layer c corresponding to application C, and the selected projection content is content of application B; the first device can then first obtain layer b.
After obtaining the target layer, the first device can obtain the position information of the selected projection content and crop the target layer according to that position information to obtain the region graphics data of the target region. The target region is the region where the projection content is located.
After obtaining the region graphics data, the first device can composite the region graphics data to obtain the projection image.
After obtaining the projection image, the first device can encode it according to the preset projection protocol to obtain encoded data, and transfer the encoded data to the second device over the projection connection.
After receiving the encoded data, the second device decodes it according to the preset projection protocol to obtain the projection image, and displays the projection image on its screen, completing the projection operation.
For example, assume the first device is a mobile phone, the second device is a smart TV, and the phone's operating system is Android.
Assume the interface currently displayed by the phone is the content shown in Figure 18. As shown in Figure 19, the phone's current display interface is composited from layer a, layer b and layer c.
As shown in Figure 20, the phone can, in response to a user operation, determine the projection content from the currently displayed interface (the hatched area in Figure 20).
After the user taps the "Cast" option among the operation options, the phone can create, through the projection service, a virtual display window (display) and the activity management (Activity) corresponding to that virtual display window.
The projection service can then pass the target layer corresponding to the projection content (i.e., layer b) and the position information corresponding to the projection content to the Activity.
As shown in Figure 23, after obtaining layer b and the position information corresponding to the projection content, the Activity crops layer b according to that position information to obtain the region graphics data of the target region.
As shown in Figure 24, the Activity passes the region graphics data to the virtual display window. The virtual display window passes the region graphics data to the window composition service (SurfaceFlinger), which composites the region graphics data to obtain the projection image.
As shown in Figure 25, after the projection image is obtained, the projection service can encode it according to the preset projection protocol to obtain encoded data, and transfer the encoded data to the smart TV over the projection connection.
The smart TV decodes the encoded data according to the preset projection protocol to obtain the projection image, and displays it on the smart TV's screen, completing this projection operation.
4. Projecting multiple times.
After the user has completed a projection operation, if a new projection operation is needed, the user can perform the content selection operation again, and the first device determines the new projection content from the user's content selection operation.
The first device can then, following the description in section 3, perform image cropping and composition operations on the new projection content to generate a new projection image.
Alternatively, the first device can generate new first content from the new projection content and determine the new projection image from the new first content.
In some possible implementations, when generating the new first content, the first device can replace, overwrite or clear the first content generated by the previous projection operation and process only the projection content selected this time.
In that case, the first device can lay out the projection content selected this time according to the second device's projection configuration information, obtaining the first content corresponding to this projection operation.
The first device then renders and composites the new first content to obtain the new projection image, and transfers it over the projection connection to the second device for display.
For example, referring to Figure 26, assume the first device is a mobile phone. When the user wants to project again and has selected new projection content through a content selection operation (the text and image in the hatched area in Figure 26), the phone can display operation options, which may include "Copy", "Share", "Select all", "New cast", "Merge cast" and the like.
When the user taps the "New cast" option, the phone can, in response to the user's operation, clear the first content generated by the previous projection operation.
As shown in Figures 27 and 28, the phone obtains the selected text through the text control and the selected image through the image control, and lays out the currently selected text and image according to the second device's projection configuration information, obtaining the first content corresponding to this projection.
The phone can then render the graphics data of the new first content, composite the rendered graphics data into the new projection image, and send the new projection image to the peer second device for display.
In other possible implementations, when generating the new first content, the first device may instead keep the first content of the previous projection rather than clearing it.
In that case, the first device can lay out both the first content of the previous projection operation and the projection content selected this time according to the second device's projection configuration information, obtaining the new first content.
Alternatively, since the first content of the previous projection operation has already been laid out according to the second device's projection configuration information, the first device can lay out only the projection content selected this time according to that configuration information, and combine the laid-out projection content with the first content generated by the previous projection operation to obtain the new first content.
After obtaining the new first content, the first device renders and composites it to obtain the new projection image, and transfers the new projection image over the projection connection to the second device for display.
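The two behaviours, clearing versus appending, might be sketched as a simple holder for the first content; the types here are assumptions of this sketch:

    // "New cast" clears the previous first content; "Merge cast" appends the
    // newly laid-out items at its tail.
    class FirstContentHolder {
        private val items = mutableListOf<CharSequence>()
        fun newCast(laidOut: List<CharSequence>) {
            items.clear()      // replace/overwrite/clear the previous first content
            items += laidOut
        }
        fun mergeCast(laidOut: List<CharSequence>) {
            items += laidOut   // previous first content kept, new content appended
        }
        fun snapshot(): List<CharSequence> = items.toList()
    }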
For example, referring to Figure 29, assume the first device is a mobile phone. When the user wants to project again and has selected new projection content through a content selection operation (the text in the hatched area in Figure 27), the phone can display operation options, which may include "Copy", "Share", "Select all", "New cast", "Merge cast" and the like.
When the user taps the "Merge cast" option, the phone can, in response to the user's operation, keep the first content of the previous projection instead of clearing it.
As shown in Figures 30 and 31, the phone obtains the selected text through the text control and lays out the text and image selected this time according to the second device's projection configuration information, obtaining the laid-out projection content.
As shown in Figure 32, the phone can merge the laid-out projection content with the previous first content, placing the laid-out projection content at the tail of the previous first content to obtain the new first content.
After obtaining the new first content, the phone renders the graphics data of the new first content and composites the rendered graphics data into the new projection image.
5. Ending the projection.
After completing projection operations, the user can perform an end-projection operation on the first device. When the first device detects the end-projection operation, it disconnects the projection connection with the second device and ends the projection.
When the second device detects that the projection connection has been disconnected, it can stop displaying the projection image, or it can continue displaying the projection image.
For example, referring to Figure 33, assume the first device is a mobile phone and the second device is a smart TV, with a projection connection established between them.
When the user wants to end the projection, the user can pull down the phone's control bar, which may include operation options such as "Wi-Fi", "Bluetooth", "Mobile data", "Mute" and "Wireless projection".
The user can then tap the "Wireless projection" option. As shown in Figure 34, when the phone detects the tap on "Wireless projection", it turns off the projection function, stops the projection service, and disconnects the projection connection with the second device.
When the smart TV detects that the projection connection has been disconnected, it can continue displaying the last received projection image, or it can stop displaying that image and show a standby interface.
Alternatively, in other possible implementations, the user can perform the end-projection operation on the second device. When the second device detects the end-projection operation, it disconnects the projection connection with the first device. After the connection is disconnected, the second device can stop displaying the projection image, or continue displaying it.
For example, assume the first device is a mobile phone and the second device is a smart TV, with a projection connection established between them and the smart TV equipped with a remote control.
When the user wants to end the projection, the user can press the end-projection button on the smart TV's remote control, and the remote control sends an end-projection signal to the smart TV.
On receiving the end-projection signal, the smart TV disconnects the projection connection with the phone and displays a standby interface.
When the phone detects that the projection connection has been disconnected, it can perform a preset prompt operation and display a pop-up window informing the user that the projection connection has been disconnected. Alternatively, the phone can perform the search operation again and, once the search completes, display the device list so that the user can select a new second device from it.
In other possible implementations, besides the user actively ending the projection, the first device and/or the second device can also be provided with a determination rule for ending the projection operation.
When the conditions of the determination rule are met, the first device or the second device can automatically end the projection operation and disconnect the projection connection.
The determination rule can be set according to actual needs. For example, in some embodiments, the rule may relate to the idle duration. The idle duration is the interval between the current time and a trigger time, the trigger time being the most recent time the first device sent a projection image or the most recent time the second device received a projection image. When the first device or the second device detects that the idle duration is greater than or equal to a preset duration threshold, it can automatically end the projection operation and disconnect the projection connection. In other embodiments, the rule may relate to the projection count, i.e., the number of projection images sent by the first device or received by the second device. When the first device or the second device detects that the projection count is greater than or equal to a preset count threshold, it can automatically end the projection operation and disconnect the projection connection.
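The idle-duration rule can be illustrated with a delayed task that is reset every time a projection image is sent or received; the timeout value and callback below are assumptions of this sketch:

    import android.os.Handler
    import android.os.Looper

    // If no projection image passes within timeoutMs, the connection is closed.
    class IdleDisconnector(private val timeoutMs: Long, private val disconnect: () -> Unit) {
        private val handler = Handler(Looper.getMainLooper())
        private val task = Runnable { disconnect() }
        fun onProjectionImage() {          // call whenever a projection image is sent/received
            handler.removeCallbacks(task)
            handler.postDelayed(task, timeoutMs)
        }
        fun cancel() = handler.removeCallbacks(task)
    }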
For ease of understanding, the above projection method is described in detail below with reference to a specific application scenario:
Referring to Figures 35 and 36, assume the first device is a mobile phone. At the initial moment, the phone has not established a projection connection with any other electronic device.
At a first moment, the user wants to project some text on the phone to a smart TV. The user can long-press the phone's screen.
In response to the user's long-press operation, the phone displays a content selection box and operation options. The operation options comprise the four options "Copy", "Share", "Select all" and "Cast".
After the phone displays the content selection box, the user drags the adjustment cursors on both sides of the box to adjust its coverage so that the content selection box covers the four characters "测试文字".
The user then taps the "Cast" option. In response to the tap, the phone determines the content selected by the content selection operation in the phone's current display interface (i.e., the first display interface) as the projection content.
The phone also checks whether it has already established a projection connection with another electronic device.
As shown in Figure 37, the phone detects that it has not established a projection connection with any other electronic device, so it performs a search operation and displays the first list. The first list is the device list of electronic devices found by the phone that can be projected to.
Assume the first list contains three discovered electronic devices: "Smart TV", "Laptop" and "Tablet". After viewing the device list, the user can perform a selection operation, choosing "Smart TV" from the first list as the second device.
The phone can then, in response to the user's selection operation on the first list, determine the second device from the discovered electronic devices, setting the smart TV as the second device.
As shown in Figure 38, after determining the second device, the phone sends a projection request to the smart TV. On receiving the request, the smart TV detects that the phone is a trusted device and approves the request, and the phone and the smart TV establish a projection connection.
As shown in Figure 39, after the projection connection is established, the phone obtains the smart TV's projection configuration information over the connection and lays out the projection content according to that configuration information, obtaining the first content.
After obtaining the first content, the phone renders the graphics data corresponding to the first content with the graphics processor, and composites the rendered graphics data with the window compositor, obtaining the projection image.
The phone then encodes the projection image according to the preset projection protocol to obtain encoded data, and sends the encoded data to the smart TV.
As shown in Figure 40, after receiving the encoded data, the smart TV decodes it according to the preset projection protocol to obtain the projection image, and displays the projection image on its own screen.
Referring to Figure 41, assume that while browsing a new display interface the user wants to share new content to the smart TV. The user can then perform the content selection operation again to determine the new projection content (i.e., the image selected in the hatched area in Figure 39).
The phone can display operation options in response to the user's content selection operation. The options provided by the phone comprise the five options "Copy", "Share", "Select all", "New cast" and "Merge cast".
When the user taps the "Merge cast" option, the phone takes the text and image currently framed by the content selection box as the new projection content. Since the phone detects that a projection connection with the smart TV has already been established, it skips the connection establishment procedure.
As shown in Figure 42, the phone lays out the new projection content according to the smart TV's projection configuration information, merges the laid-out projection content with the first content of the previous projection operation by appending it at the tail of that first content, and obtains the new first content.
The phone then renders the graphics data of the new first content with the graphics processor and composites the rendered graphics data with the window compositor, obtaining the new projection image.
As shown in Figure 43, the phone encodes the new projection image according to the preset projection protocol to obtain new encoded data, and sends the new encoded data to the smart TV.
After receiving the new encoded data, the smart TV decodes it according to the preset projection protocol to obtain the new projection image, replaces the previously received projection image with the new one, and displays the new projection image on its own screen.
In summary, in the screen projection method provided by the embodiments of this application, the user can select the projection content through a content selection operation; the first device, in response to the user's content selection operation, determines the projection content and generates the projection image from it. The projection image generated by the first device contains only the content selected by the content selection operation and none of the unselected content, so the user can project in a targeted manner, preventing content the user does not want projected from appearing in the projection image.
It should be understood that the magnitude of the sequence numbers of the steps in the above embodiments does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
Below, another screen projection method provided by the embodiments of this application is described in detail from the perspective of the first device. Referring to Figure 44, the projection method provided by this embodiment includes:
S4401. The first device determines an object selected by a content selection operation in a first display interface as the projection content, the first display interface being the interface currently displayed on the screen of the first device;
S4402. The first device processes the projection content to obtain a projection image;
S4403. The first device sends the projection image to a second device, the projection image being used for display on the screen of the second device.
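For orientation only, steps S4401 to S4403 can be read as the following minimal pipeline; Selection and CastContent are placeholder types of this sketch, not terms defined by this application:

    import android.graphics.Bitmap

    data class Selection(val objects: List<Any>)
    data class CastContent(val objects: List<Any>)

    interface CastMethod {
        fun selectContent(selection: Selection): CastContent // S4401
        fun renderToImage(content: CastContent): Bitmap      // S4402
        fun sendToSink(image: Bitmap)                        // S4403
    }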
Figure 45 is a schematic diagram of another electronic device provided by an embodiment of this application. The electronic device 4500 may include a processor 4510, an external memory interface 4520, an internal memory 4521, a universal serial bus (USB) interface 4530, a charging management module 4540, a power management module 4541, a battery 4542, antenna 1, antenna 2, a mobile communication module 4550, a wireless communication module 4560, an audio module 4570, a speaker 4570A, a receiver 4570B, a microphone 4570C, a headset jack 4570D, a sensor module 4580, buttons 4590, a motor 4591, an indicator 4592, a camera 4593, a display screen 4594, a subscriber identification module (SIM) card interface 4595, and the like. The sensor module 4580 may include a pressure sensor 4580A, a gyroscope sensor 4580B, a barometric pressure sensor 4580C, a magnetic sensor 4580D, an acceleration sensor 4580E, a distance sensor 4580F, a proximity light sensor 4580G, a fingerprint sensor 4580H, a temperature sensor 4580J, a touch sensor 4580K, an ambient light sensor 4580L, a bone conduction sensor 4580M, and the like.
It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device 4500. In other embodiments of this application, the electronic device 4500 may include more or fewer components than shown, or combine certain components, or split certain components, or have a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 4510 may include one or more processing units. For example, the processor 4510 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be independent devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 4510 for storing instructions and data. In some embodiments, the memory in the processor 4510 is a cache, which can hold instructions or data that the processor 4510 has just used or uses cyclically. If the processor 4510 needs to use those instructions or data again, they can be called directly from this memory. Repeated accesses are avoided and the waiting time of the processor 4510 is reduced, improving the efficiency of the system.
In some embodiments, the processor 4510 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
The I2C interface is a bidirectional synchronous serial bus comprising one serial data line (SDA) and one serial clock line (SCL). In some embodiments, the processor 4510 may contain multiple sets of I2C buses. The processor 4510 can be coupled to the touch sensor 4580K, a charger, a flash, the camera 4593 and so on through different I2C bus interfaces. For example, the processor 4510 can be coupled to the touch sensor 4580K through an I2C interface, so that the processor 4510 and the touch sensor 4580K communicate through the I2C bus interface, implementing the touch function of the electronic device 4500.
The I2S interface can be used for audio communication. In some embodiments, the processor 4510 may contain multiple sets of I2S buses. The processor 4510 can be coupled to the audio module 4570 through an I2S bus, implementing communication between the processor 4510 and the audio module 4570. In some embodiments, the audio module 4570 can pass audio signals to the wireless communication module 4560 through the I2S interface, implementing the function of answering calls through a Bluetooth headset.
The PCM interface can also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 4570 and the wireless communication module 4560 can be coupled through a PCM bus interface. In some embodiments, the audio module 4570 can also pass audio signals to the wireless communication module 4560 through the PCM interface, implementing the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial and parallel communication. In some embodiments, the UART interface is typically used to connect the processor 4510 and the wireless communication module 4560. For example, the processor 4510 communicates with the Bluetooth module in the wireless communication module 4560 through the UART interface, implementing the Bluetooth function. In some embodiments, the audio module 4570 can pass audio signals to the wireless communication module 4560 through the UART interface, implementing the function of playing music through a Bluetooth headset.
The MIPI interface can be used to connect the processor 4510 with peripheral devices such as the display screen 4594 and the camera 4593. The MIPI interfaces include a camera serial interface (CSI), a display serial interface (DSI), etc. In some embodiments, the processor 4510 and the camera 4593 communicate through the CSI interface, implementing the shooting function of the electronic device 4500. The processor 4510 and the display screen 4594 communicate through the DSI interface, implementing the display function of the electronic device 4500.
The GPIO interface can be configured by software. The GPIO interface can be configured as a control signal or as a data signal. In some embodiments, the GPIO interface can be used to connect the processor 4510 with the camera 4593, the display screen 4594, the wireless communication module 4560, the audio module 4570, the sensor module 4580, etc. The GPIO interface can also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
The USB interface 4530 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, etc. The USB interface 4530 can be used to connect a charger to charge the electronic device 4500, and can also be used to transfer data between the electronic device 4500 and peripheral devices. It can also be used to connect headphones and play audio through them. The interface can also be used to connect other electronic devices, such as AR devices.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of the present invention are only schematic illustrations and do not constitute a structural limitation on the electronic device 4500. In other embodiments of this application, the electronic device 4500 may also adopt interface connection manners different from those in the above embodiments, or a combination of multiple interface connection manners.
The charging management module 4540 is used to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 4540 can receive the charging input of a wired charger through the USB interface 4530. In some wireless charging embodiments, the charging management module 4540 can receive wireless charging input through a wireless charging coil of the electronic device 4500. While charging the battery 4542, the charging management module 4540 can also supply power to the electronic device through the power management module 4541.
The power management module 4541 is used to connect the battery 4542 and the charging management module 4540 with the processor 4510. The power management module 4541 receives input from the battery 4542 and/or the charging management module 4540 and supplies power to the processor 4510, the internal memory 4521, the display screen 4594, the camera 4593, the wireless communication module 4560, etc. The power management module 4541 can also be used to monitor parameters such as battery capacity, battery cycle count and battery state of health (leakage, impedance). In some other embodiments, the power management module 4541 may also be provided in the processor 4510. In still other embodiments, the power management module 4541 and the charging management module 4540 may also be provided in the same device.
The wireless communication function of the electronic device 4500 can be implemented through antenna 1, antenna 2, the mobile communication module 4550, the wireless communication module 4560, the modem processor, the baseband processor, etc.
Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 4500 can be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization. For example, antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antennas can be used in combination with tuning switches.
The mobile communication module 4550 can provide solutions for wireless communication, including 2G/3G/4G/5G, applied on the electronic device 4500. The mobile communication module 4550 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), etc. The mobile communication module 4550 can receive electromagnetic waves via antenna 1, filter, amplify and otherwise process the received electromagnetic waves, and pass them to the modem processor for demodulation. The mobile communication module 4550 can also amplify the signal modulated by the modem processor, which is then converted into electromagnetic waves and radiated out via antenna 1. In some embodiments, at least some functional modules of the mobile communication module 4550 may be provided in the processor 4510. In some embodiments, at least some functional modules of the mobile communication module 4550 and at least some modules of the processor 4510 may be provided in the same device.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate the low-frequency baseband signal to be sent into a medium/high-frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 4570A, the receiver 4570B, etc.), or displays images or videos through the display screen 4594. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 4510 and provided in the same device as the mobile communication module 4550 or other functional modules.
The wireless communication module 4560 can provide solutions for wireless communication applied on the electronic device 4500, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc. The wireless communication module 4560 may be one or more devices integrating at least one communication processing module. The wireless communication module 4560 receives electromagnetic waves via antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 4510. The wireless communication module 4560 can also receive signals to be sent from the processor 4510, frequency-modulate and amplify them, and they are converted into electromagnetic waves and radiated out via antenna 2.
In some embodiments, antenna 1 of the electronic device 4500 is coupled to the mobile communication module 4550 and antenna 2 is coupled to the wireless communication module 4560, so that the electronic device 4500 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 4500 implements the display function through the GPU, the display screen 4594, the application processor, etc. The GPU is a microprocessor for image processing, connecting the display screen 4594 and the application processor. The GPU is used to perform mathematical and geometric computation for graphics rendering. The processor 4510 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 4594 is used to display images, videos, etc. The display screen 4594 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light emitting diodes (QLED), etc. In some embodiments, the electronic device 4500 may include 1 or N display screens 4594, where N is a positive integer greater than 1.
The electronic device 4500 can implement the shooting function through the ISP, the camera 4593, the video codec, the GPU, the display screen 4594, the application processor, etc.
The ISP is used to process the data fed back by the camera 4593. For example, when taking a photo, the shutter opens, light is passed through the lens onto the camera's photosensitive element, the light signal is converted into an electrical signal, and the camera's photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on the noise, brightness and skin tone of the image. The ISP can also optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 4593.
The camera 4593 is used to capture still images or videos. An optical image of an object is generated through the lens and projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal and then passes the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 4500 may include 1 or N cameras 4593, where N is a positive integer greater than 1.
The digital signal processor is used to process digital signals; besides digital image signals, it can also process other digital signals. For example, when the electronic device 4500 selects a frequency point, the digital signal processor is used to perform a Fourier transform and the like on the frequency point energy.
The video codec is used to compress or decompress digital video. The electronic device 4500 can support one or more video codecs. In this way, the electronic device 4500 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG 2, MPEG 3 and MPEG 4.
The NPU is a neural-network (NN) computing processor; by drawing on the structure of biological neural networks, for example the transfer mode between neurons of the human brain, it processes input information quickly and can also learn continuously by itself. Applications such as intelligent cognition of the electronic device 4500 can be implemented through the NPU, for example image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 4520 can be used to connect an external memory card, such as a Micro SD card, to extend the storage capacity of the electronic device 4500. The external memory card communicates with the processor 4510 through the external memory interface 4520 to implement the data storage function, for example saving files such as music and videos on the external memory card.
The internal memory 4521 can be used to store computer-executable program code, which includes instructions. The internal memory 4521 may include a program storage area and a data storage area. The program storage area can store the operating system, applications required for at least one function (such as a sound playback function or an image playback function), etc. The data storage area can store data created during use of the electronic device 4500 (such as audio data and a phone book), etc. In addition, the internal memory 4521 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc. The processor 4510 performs the various functional applications and data processing of the electronic device 4500 by running the instructions stored in the internal memory 4521 and/or the instructions stored in the memory provided in the processor.
The electronic device 4500 can implement audio functions, such as music playback and recording, through the audio module 4570, the speaker 4570A, the receiver 4570B, the microphone 4570C, the headset jack 4570D, the application processor, etc.
The audio module 4570 is used to convert digital audio information into an analog audio signal output, and also to convert analog audio input into a digital audio signal. The audio module 4570 can also be used to encode and decode audio signals. In some embodiments, the audio module 4570 may be provided in the processor 4510, or some functional modules of the audio module 4570 may be provided in the processor 4510.
The speaker 4570A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals. The electronic device 4500 can play music or hands-free calls through the speaker 4570A.
The receiver 4570B, also called an "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 4500 answers a call or a voice message, the voice can be heard by holding the receiver 4570B close to the ear.
The microphone 4570C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 4570C to input the sound signal into the microphone 4570C. The electronic device 4500 may be provided with at least one microphone 4570C. In other embodiments, the electronic device 4500 may be provided with two microphones 4570C, which, in addition to collecting sound signals, can also implement a noise reduction function. In still other embodiments, the electronic device 4500 may be provided with three, four or more microphones 4570C to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, etc.
The headset jack 4570D is used to connect wired headsets. The headset jack 4570D may be the USB interface 4530, or a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 4580A is used to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, the pressure sensor 4580A may be provided on the display screen 4594. There are many kinds of pressure sensors 4580A, such as resistive pressure sensors, inductive pressure sensors and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates with conductive material. When a force acts on the pressure sensor 4580A, the capacitance between the electrodes changes, and the electronic device 4500 determines the intensity of the pressure from the change in capacitance. When a touch operation acts on the display screen 4594, the electronic device 4500 detects the intensity of the touch operation with the pressure sensor 4580A. The electronic device 4500 can also calculate the touch position from the detection signal of the pressure sensor 4580A. In some embodiments, touch operations that act on the same touch position but with different touch operation intensities can correspond to different operation instructions. For example, when a touch operation with an intensity smaller than a first pressure threshold acts on the SMS application icon, an instruction to view the SMS message is executed; when a touch operation with an intensity greater than or equal to the first pressure threshold acts on the SMS application icon, an instruction to create a new SMS message is executed.
The gyroscope sensor 4580B can be used to determine the motion attitude of the electronic device 4500. In some embodiments, the angular velocity of the electronic device 4500 around three axes (i.e., the x, y and z axes) can be determined by the gyroscope sensor 4580B. The gyroscope sensor 4580B can be used for image stabilization during shooting. Exemplarily, when the shutter is pressed, the gyroscope sensor 4580B detects the angle at which the electronic device 4500 shakes, calculates from that angle the distance the lens module needs to compensate, and lets the lens cancel out the shake of the electronic device 4500 through reverse motion, implementing image stabilization. The gyroscope sensor 4580B can also be used in navigation and motion-sensing game scenarios.
The barometric pressure sensor 4580C is used to measure air pressure. In some embodiments, the electronic device 4500 calculates altitude from the air pressure value measured by the barometric pressure sensor 4580C, assisting positioning and navigation.
The magnetic sensor 4580D includes a Hall sensor. The electronic device 4500 can detect the opening and closing of a flip leather case using the magnetic sensor 4580D. In some embodiments, when the electronic device 4500 is a flip phone, the electronic device 4500 can detect the opening and closing of the flip according to the magnetic sensor 4580D, and then set features such as automatic unlocking on flip-open according to the detected open/closed state of the case or of the flip.
The acceleration sensor 4580E can detect the magnitude of the acceleration of the electronic device 4500 in various directions (generally three axes). When the electronic device 4500 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the attitude of the electronic device, and is applied in landscape/portrait switching, pedometers and other applications.
The distance sensor 4580F is used to measure distance. The electronic device 4500 can measure distance by infrared or laser. In some embodiments, in a shooting scenario, the electronic device 4500 can use the distance sensor 4580F to measure distance to achieve fast focusing.
The proximity light sensor 4580G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 4500 emits infrared light outward through the light-emitting diode, and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 4500; when insufficient reflected light is detected, the electronic device 4500 can determine that there is no object near it. The electronic device 4500 can use the proximity light sensor 4580G to detect that the user is holding the electronic device 4500 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 4580G can also be used for automatic unlocking and screen locking in leather-case mode and pocket mode.
The ambient light sensor 4580L is used to sense ambient light brightness. The electronic device 4500 can adaptively adjust the brightness of the display screen 4594 according to the perceived ambient light brightness. The ambient light sensor 4580L can also be used to automatically adjust the white balance when taking photos. The ambient light sensor 4580L can also cooperate with the proximity light sensor 4580G to detect whether the electronic device 4500 is in a pocket, preventing accidental touches.
The fingerprint sensor 4580H is used to capture fingerprints. The electronic device 4500 can use the captured fingerprint characteristics to implement fingerprint unlocking, accessing application locks, fingerprint photographing, fingerprint call answering, etc.
The temperature sensor 4580J is used to detect temperature. In some embodiments, the electronic device 4500 executes a temperature handling policy using the temperature detected by the temperature sensor 4580J. For example, when the temperature reported by the temperature sensor 4580J exceeds a threshold, the electronic device 4500 reduces the performance of a processor located near the temperature sensor 4580J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 4500 heats the battery 4542 to avoid an abnormal shutdown of the electronic device 4500 caused by low temperature. In still other embodiments, when the temperature is below yet another threshold, the electronic device 4500 boosts the output voltage of the battery 4542 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 4580K is also called a "touch device". The touch sensor 4580K may be provided on the display screen 4594; the touch sensor 4580K and the display screen 4594 form a touchscreen, also called a "touch-controlled screen". The touch sensor 4580K is used to detect touch operations acting on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation can be provided through the display screen 4594. In other embodiments, the touch sensor 4580K may also be provided on the surface of the electronic device 4500, at a position different from that of the display screen 4594.
The bone conduction sensor 4580M can acquire vibration signals. In some embodiments, the bone conduction sensor 4580M can acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 4580M can also contact the human pulse and receive blood pressure beat signals. In some embodiments, the bone conduction sensor 4580M may also be provided in a headset, combined into a bone conduction headset. The audio module 4570 can parse out a voice signal based on the vibration signal of the vibrating bone mass of the vocal part acquired by the bone conduction sensor 4580M, implementing a voice function. The application processor can parse heart rate information based on the blood pressure beat signal acquired by the bone conduction sensor 4580M, implementing a heart rate detection function.
The buttons 4590 include a power button, volume buttons, etc. The buttons 4590 may be mechanical buttons or touch-sensitive buttons. The electronic device 4500 can receive button input and generate key signal input related to the user settings and function control of the electronic device 4500.
The motor 4591 can generate vibration prompts. The motor 4591 can be used for incoming-call vibration prompts as well as for touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playback) can correspond to different vibration feedback effects. The motor 4591 can also produce different vibration feedback effects for touch operations acting on different areas of the display screen 4594. Different application scenarios (for example: time reminders, receiving messages, alarm clocks, games, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effects can also be customized.
The indicator 4592 may be an indicator light, and can be used to indicate the charging status and battery level changes, and also to indicate messages, missed calls, notifications, etc.
The SIM card interface 4595 is used to connect SIM cards. A SIM card can be inserted into the SIM card interface 4595 or pulled out of the SIM card interface 4595 to achieve contact with and separation from the electronic device 4500. The electronic device 4500 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 4595 can support Nano SIM cards, Micro SIM cards, SIM cards, etc. Multiple cards can be inserted into the same SIM card interface 4595 simultaneously; the types of those cards may be the same or different. The SIM card interface 4595 is also compatible with different types of SIM cards and with external memory cards. The electronic device 4500 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 4500 adopts an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 4500 and cannot be separated from the electronic device 4500.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the functional units and modules described above is used as an example for illustration. In practical applications, the above functions can be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus can be divided into different functional units or modules to complete all or some of the functions described above. The functional units and modules in the embodiments may be integrated in one processing unit, or each unit may physically exist separately, or two or more units may be integrated in one unit; the integrated unit can be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from one another and are not used to limit the protection scope of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which will not be repeated here.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not detailed or recorded in one embodiment, reference may be made to the relevant descriptions of other embodiments.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. A skilled professional may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the apparatus/electronic device embodiments described above are merely schematic; for example, the division of the modules or units is only a division by logical function, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated in one processing unit, or each unit may physically exist separately, or two or more units may be integrated in one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of this application can also be completed by instructing the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable storage medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, etc. It should be noted that the contents contained in the computer-readable storage medium can be appropriately added or removed according to the requirements of legislation and patent practice within the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable storage media do not include electrical carrier signals and telecommunication signals.
Finally, it should be noted that the above are only specific implementations of this application, but the protection scope of this application is not limited thereto. Any change or substitution within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (23)

  1. A screen projection method, characterized by comprising:
    determining, by a first device, an object selected by a content selection operation in a first display interface as projection content, wherein the first display interface is the interface currently displayed on the screen of the first device;
    processing, by the first device, the projection content to obtain a projection image; and
    sending, by the first device, the projection image to a second device, wherein the projection image is used for display on the screen of the second device.
  2. The screen projection method according to claim 1, wherein the projection content is text and/or an image.
  3. The screen projection method according to claim 1, further comprising, after the first device determines the object selected by the content selection operation in the first display interface as the projection content:
    detecting, by the first device, whether a projection connection has been established;
    if the first device has not established a projection connection, performing, by the first device, a search operation and displaying a first list, wherein the first list is used to display the discovered electronic devices;
    determining, by the first device in response to a selection operation on the first list, the second device from among the discovered electronic devices; and
    establishing, by the first device, a projection connection with the second device.
  4. The screen projection method according to claim 1, wherein the processing, by the first device, of the projection content to obtain a projection image comprises:
    obtaining, by the first device, a target layer where the projection content is located; and
    compositing, by the first device, the target layer to obtain the projection image.
  5. The screen projection method according to claim 1, wherein the processing, by the first device, of the projection content to obtain a projection image comprises:
    obtaining, by the first device, a display image of the first display interface and position information of the projection content;
    cropping, by the first device, the display image according to the position information to obtain cropped graphics data of a target region, wherein the target region is the region where the projection content is located; and
    compositing, by the first device, the cropped graphics data to obtain the projection image.
  6. The screen projection method according to claim 1, wherein the processing, by the first device, of the projection content to obtain a projection image comprises:
    obtaining, by the first device, a target layer where the projection content is located and position information of the projection content;
    cropping, by the first device, the target layer according to the position information to obtain region graphics data of a target region, wherein the target region is the region where the projection content is located; and
    compositing, by the first device, the region graphics data to obtain the projection image.
  7. The screen projection method according to claim 1, wherein the processing, by the first device, of the projection content to obtain a projection image comprises:
    obtaining, by the first device, projection configuration information of the second device, and laying out the projection content according to the projection configuration information to obtain first content; and
    rendering, by the first device, graphics data corresponding to the first content to obtain the projection image.
  8. The screen projection method according to claim 7, wherein the obtaining, by the first device, of the projection configuration information of the second device and the laying out of the projection content according to the projection configuration information to obtain the first content comprise:
    obtaining, by the first device, the projection configuration information of the second device, and laying out the projection content according to the projection configuration information to obtain laid-out projection content; and
    setting, by the first device, the laid-out projection content as the first content.
  9. The screen projection method according to claim 7, wherein the obtaining, by the first device, of the projection configuration information of the second device and the laying out of the projection content according to the projection configuration information to obtain the first content comprise:
    obtaining, by the first device, the projection configuration information of the second device, and laying out the projection content according to the projection configuration information to obtain laid-out projection content; and
    combining, by the first device, the first content generated the previous time with the laid-out projection content to obtain new first content.
  10. The screen projection method according to any one of claims 7 to 9, wherein the projection configuration information comprises the screen resolution of the second device and/or the screen size of the second device.
  11. A screen projection apparatus, characterized by comprising:
    a content selection module, configured to determine an object selected by a content selection operation in a first display interface as projection content, wherein the first display interface is the interface currently displayed on the screen of a first device;
    an image generation module, configured to process the projection content to obtain a projection image; and
    an image sending module, configured to send the projection image to a second device, wherein the projection image is used for display on the screen of the second device.
  12. The screen projection apparatus according to claim 11, wherein the projection content is text and/or an image.
  13. The screen projection apparatus according to claim 11, further comprising:
    a connection detection module, configured to detect whether a projection connection has been established;
    a device search module, configured to perform a search operation if the first device has not established a projection connection, and to display a first list, wherein the first list is used to display the discovered electronic devices;
    a device selection module, configured to determine the second device from among the discovered electronic devices in response to a selection operation on the first list; and
    a connection establishment module, configured to establish a projection connection with the second device.
  14. The screen projection apparatus according to claim 11, wherein the image generation module comprises:
    a target layer submodule, configured to obtain a target layer where the projection content is located; and
    a layer composition submodule, configured to composite the target layer to obtain the projection image.
  15. The screen projection apparatus according to claim 11, wherein the image generation module comprises:
    a position obtaining submodule, configured to obtain a display image of the first display interface and position information of the projection content;
    a position cropping submodule, configured to crop the display image according to the position information to obtain cropped graphics data of a target region, wherein the target region is the region where the projection content is located; and
    a crop composition submodule, configured to composite the cropped graphics data to obtain the projection image.
  16. The screen projection apparatus according to claim 11, wherein the image generation module comprises:
    a target obtaining submodule, configured to obtain a target layer where the projection content is located and position information of the projection content;
    a region cropping submodule, configured to crop the target layer according to the position information to obtain region graphics data of a target region, wherein the target region is the region where the projection content is located; and
    an image composition submodule, configured to composite the region graphics data to obtain the projection image.
  17. The screen projection apparatus according to claim 11, wherein the image generation module comprises:
    a content layout submodule, configured to obtain projection configuration information of the second device and lay out the projection content according to the projection configuration information to obtain first content; and
    an image rendering submodule, configured to render graphics data corresponding to the first content to obtain the projection image.
  18. The screen projection apparatus according to claim 17, wherein the content layout submodule comprises:
    a first layout submodule, configured to obtain the projection configuration information of the second device and lay out the projection content according to the projection configuration information to obtain laid-out projection content; and
    a content setting submodule, configured to set the laid-out projection content as the first content.
  19. The screen projection apparatus according to claim 17, wherein the content layout submodule comprises:
    a second layout submodule, configured to obtain the projection configuration information of the second device and lay out the projection content according to the projection configuration information to obtain laid-out projection content; and
    a content merging submodule, configured to combine the first content generated the previous time with the laid-out projection content to obtain new first content.
  20. The screen projection apparatus according to any one of claims 17 to 19, wherein the projection configuration information comprises the screen resolution of the second device and/or the screen size of the second device.
  21. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the method according to any one of claims 1 to 10.
  22. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 10.
  23. A chip system, characterized in that the chip system comprises a memory and a processor, and the processor executes a computer program stored in the memory to implement the method according to any one of claims 1 to 10.
PCT/CN2021/129765 2020-11-13 2021-11-10 Screen projection method and apparatus, electronic device, and computer-readable storage medium WO2022100610A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011271351.8 2020-11-13
CN202011271351.8A CN114489533A (zh) Screen projection method and apparatus, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2022100610A1 true WO2022100610A1 (zh) 2022-05-19

Family

ID=81491289

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/129765 WO2022100610A1 (zh) Screen projection method and apparatus, electronic device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN114489533A (zh)
WO (1) WO2022100610A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116679895A (zh) * 2022-10-26 2023-09-01 Honor Device Co., Ltd. Collaborative service scheduling method, electronic device and collaboration system
CN117135396A (zh) * 2023-02-14 2023-11-28 Honor Device Co., Ltd. Screen projection method and related devices
WO2024078337A1 (zh) * 2022-10-09 2024-04-18 Huawei Technologies Co., Ltd. Display screen selection method and electronic device
CN116679895B (zh) * 2022-10-26 2024-06-07 Honor Device Co., Ltd. Collaborative service scheduling method, electronic device and collaboration system

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114911559A (zh) * 2022-05-16 2022-08-16 Shenzhen Baoze Technology Co., Ltd. Display picture correction method and apparatus based on an irregular picture placement strategy
CN117492672A (zh) * 2022-07-26 2024-02-02 Huawei Technologies Co., Ltd. Screen projection method and electronic device
CN115543241A (zh) * 2022-08-31 2022-12-30 Honor Device Co., Ltd. Device scanning method and apparatus for screen projection scenarios
CN115576516A (zh) * 2022-12-12 2023-01-06 Shenzhen Kaihong Digital Industry Development Co., Ltd. Image composition method, image composition system, electronic device and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012181473A (ja) * 2011-03-03 2012-09-20 Nikon Corp Image projection device
CN108958684A (zh) * 2018-06-22 2018-12-07 Vivo Mobile Communication Co., Ltd. Screen projection method and mobile terminal
CN109508162A (zh) * 2018-10-12 2019-03-22 Fujian Star-net eVideo Information System Co., Ltd. Screen projection display method, system and storage medium
CN111580765A (zh) * 2020-04-27 2020-08-25 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Screen projection method, screen projection apparatus, storage medium, projected-to device and projecting device

Also Published As

Publication number Publication date
CN114489533A (zh) 2022-05-13

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21891130

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21891130

Country of ref document: EP

Kind code of ref document: A1