WO2023169276A1 - Screen projection method, terminal device, and computer-readable storage medium - Google Patents

Screen projection method, terminal device, and computer-readable storage medium

Info

Publication number
WO2023169276A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
terminal device
encoded data
data
interface
Prior art date
Application number
PCT/CN2023/078992
Other languages
French (fr)
Chinese (zh)
Inventor
Li Jianzhao (李建钊)
Li Ziran (李自然)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2023169276A1 publication Critical patent/WO2023169276A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41: Structure of client; Structure of client peripherals
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/436: Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N 21/4363: Adapting the video stream to a specific local network, e.g. a Bluetooth® network
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing

Definitions

  • This application belongs to the field of terminal technology, and in particular relates to a screen projection method, a terminal device, and a computer-readable storage medium.
  • Screen sharing between terminal devices has become a common function in people's daily lives.
  • The main process of existing screencasting is as follows: the screencasting application on the sending end performs layer drawing and, after the drawing is completed, notifies the SurfaceFlinger component to synthesize the drawn layer data. After the synthesis is completed, the SurfaceFlinger component notifies the encoder to encode the synthesized image. After the encoding is completed, the encoder notifies the screencasting application to send the encoded video stream and other encoded data to the receiving end.
  • However, the existing screencasting process has problems such as large time delays, which affect the user experience.
  • Embodiments of the present application provide a screen projection method, a terminal device, and a computer-readable storage medium, which can solve the existing problem of large screen projection delay.
  • embodiments of the present application provide a screen projection system, including a first terminal device and a second terminal device;
  • the first terminal device is configured to perform layer drawing on the interface to be projected when detecting a screen casting instruction, and obtain layer data corresponding to the interface to be projected;
  • the first terminal device is also configured to perform image synthesis according to the first part of the layer data to obtain a first image
  • the first terminal device is further configured to perform encoding according to the first image, obtain first encoded data, and send the first encoded data to the second terminal device;
  • the first terminal device is also configured to perform image synthesis according to the second part of the layer data, while encoding according to the first image or while sending the first encoded data to the second terminal device, to obtain a second image, and to perform encoding according to the second image to obtain second encoded data;
  • the first terminal device is also configured to send the second encoded data to the second terminal device;
  • the second terminal device is used to obtain the first encoded data and the second encoded data, and to decode the first encoded data and the second encoded data to obtain the first image and the second image;
  • the second terminal device is further configured to obtain the interface to be projected based on the first image and the second image, and display the interface to be projected.
  • the first terminal device, when detecting a screen projection instruction, can draw the layer of the interface to be projected, obtain the layer data corresponding to the interface to be projected, and perform image synthesis based on the first part of the layer data to obtain the first image.
  • the first terminal device may perform encoding according to the first image, obtain the first encoded data, and send the first encoded data to the second terminal device.
  • the first terminal device can also continue to perform image synthesis according to the second part of the layer data to obtain the second image, perform encoding according to the second image to obtain the second encoded data, and send the second encoded data to the second terminal device.
  • When the second terminal device receives the first encoded data and the second encoded data sent by the first terminal device, it can decode the first encoded data and the second encoded data to obtain the first image and the second image, and obtain the interface to be projected according to the first image and the second image. That is, the first terminal device can perform synthesis, encoding, and sending for the first part and the second part separately, and the synthesis, encoding, and sending for the second part can be executed in parallel with the encoding and sending of the first image corresponding to the first part. The image synthesis, encoding, and sending processes during screencasting are thereby executed in parallel, which can effectively reduce the screencasting delay and improve the user's screencasting experience.
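As a minimal sketch (not the patent's actual implementation), this parallel pipeline can be modeled in Python with a worker thread feeding a queue. `synthesize`, `encode`, and the `sent` list are illustrative stand-ins for the image synthesis module, the encoder, and the transmission to the second terminal device:

```python
import threading
import queue

def synthesize(part):
    # Stand-in for the image synthesis module producing an image from layer data.
    return f"image({part})"

def encode(image):
    # Stand-in for the encoder producing encoded data from a synthesized image.
    return f"encoded({image})"

def pipelined_cast(layer_parts):
    """Synthesize the next part while the previous one is encoded and sent."""
    to_encode = queue.Queue()
    sent = []                                  # stands in for the network send

    def synth_worker():
        for part in layer_parts:
            to_encode.put(synthesize(part))    # runs ahead of encode/send
        to_encode.put(None)                    # sentinel: no more images

    t = threading.Thread(target=synth_worker)
    t.start()
    while (image := to_encode.get()) is not None:
        sent.append(encode(image))             # overlaps with the next synthesis
    t.join()
    return sent

print(pipelined_cast(["part1", "part2"]))
# ['encoded(image(part1))', 'encoded(image(part2))']
```

The queue decouples the synthesis stage from the encode/send stage, so part 2 is synthesized while part 1 is still being encoded and sent, which is the overlap the embodiment describes.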
  • the first terminal device includes an image synthesis module, an encoder, and a first application, where the first application is the application corresponding to the interface to be projected.
  • the first terminal device is configured to perform image synthesis according to the first part through the image synthesis module to obtain the first image.
  • the first terminal device is configured to perform encoding according to the first image through the encoder to obtain the first encoded data.
  • the first terminal device is configured to send the first encoded data to the second terminal device through the first application.
  • the image synthesis module is specifically configured to perform image synthesis based on the second part to obtain the second image when the encoder performs encoding according to the first image.
  • the image synthesis module is specifically configured to perform image synthesis according to the second part, when the first application sends the first encoded data to the second terminal device, to obtain the second image.
  • the encoder is specifically configured to perform encoding according to the second image, when the first application sends the first encoded data to the second terminal device, to obtain the second encoded data.
  • the image synthesis module is further configured to perform image synthesis according to the first part of the layer data in response to the layer data to obtain the first image.
  • the first terminal device is further configured to determine the first division information corresponding to the first image, and to perform encoding according to the first image and the first division information to obtain the first encoded data, where the first division information includes the division method corresponding to the layer data, the image serial number corresponding to the first image, and the total number of images corresponding to the layer data.
  • the first terminal device is also used to determine the third division information corresponding to the layer data, where the third division information includes the division method corresponding to the layer data, the total number of images, and the image sending method, and to send the third division information to the second terminal device.
  • the second terminal device includes a decoder.
  • the decoder is specifically configured to decode the first encoded data to obtain the first image and the first division information corresponding to the first image, where the first division information includes the division method corresponding to the layer data, the image serial number corresponding to the first image, and the total number of images corresponding to the layer data; and to decode the second encoded data to obtain the second image and the second division information corresponding to the second image, where the second division information includes the division method corresponding to the layer data, the image serial number corresponding to the second image, and the total number of images corresponding to the layer data.
  • the second terminal device is specifically configured to splice the first image and the second image according to the first division information and the second division information to obtain the interface to be projected.
  • the second terminal device is further configured to obtain the third division information sent by the first terminal device.
  • the third division information includes the division method corresponding to the layer data, the total number of images, and the image sending method; the first image and the second image are spliced according to the third division information to obtain the interface to be projected.
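The splicing step on the receiving end can be sketched as follows. This is an illustrative example only: it assumes a horizontal top/bottom division, and the field names and the list-of-rows image representation are invented for the sketch, not taken from the patent.

```python
def splice(images_by_serial, division_info):
    """Reassemble the projected interface from decoded sub-images.

    `images_by_serial` maps image serial number -> decoded sub-image,
    represented here as a list of pixel rows.
    """
    ordered = [images_by_serial[i]
               for i in range(1, division_info["total_images"] + 1)]
    if division_info["division_method"] == "horizontal":
        # Top/bottom halves: stack the rows in serial-number order.
        return [row for image in ordered for row in image]
    raise ValueError("unsupported division method")

top = [[1, 1], [2, 2]]       # serial 1: upper half of the frame
bottom = [[3, 3], [4, 4]]    # serial 2: lower half of the frame
info = {"division_method": "horizontal", "total_images": 2}
print(splice({1: top, 2: bottom}, info))
# [[1, 1], [2, 2], [3, 3], [4, 4]]
```

The serial numbers in the division information are what let the receiver restore the correct order even though the sub-images arrive as separate encoded streams.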
  • Embodiments of the present application provide a screen projection method, applied to a first terminal device.
  • the method may include:
  • image synthesis is performed according to the second part of the layer data to obtain a second image, and the second image is encoded to obtain second encoded data;
  • the layer data corresponding to the interface to be projected can at least include a first part and a second part.
  • The first terminal device can perform image synthesis, encoding, and sending for the first part and the second part respectively, and the image synthesis, encoding, and sending for the second part can be executed in parallel with the encoding and sending of the first image corresponding to the first part. Image synthesis, encoding, and sending during screencasting are thus executed in parallel, which effectively reduces the screencasting delay and improves the user's screencasting experience.
  • the image synthesis based on the first part of the layer data to obtain the first image may include:
  • the image synthesis module performs image synthesis according to the first part to obtain the first image
  • the encoding according to the first image to obtain the first encoded data includes:
  • the encoder performs encoding according to the first image to obtain the first encoded data
  • the sending of the first encoded data to the second terminal device includes:
  • the first encoded data is sent to the second terminal device through the first application.
  • When the encoder performs encoding according to the first image, the image synthesis module performs image synthesis according to the second part to obtain the second image.
  • the first terminal device can perform image synthesis through the image synthesis module, encode the synthesized first image or second image through the encoder, and send the encoded first encoded data or second encoded data to the second terminal device through the first application. Therefore, while the encoder encodes according to the first image, the image synthesis module can continue to perform image synthesis according to the second part of the layer data, so that the encoder's encoding of the first image and the image synthesis module's synthesis according to the second part can be carried out simultaneously, reducing the screen projection delay of the first terminal device and improving the user experience.
  • Performing image synthesis according to the second part of the layer data, while encoding according to the first image or while sending the first encoded data to the second terminal device, to obtain the second image may include:
  • When the first application sends the first encoded data to the second terminal device, the image synthesis module performs image synthesis according to the second part to obtain the second image.
  • when the first application sends the first encoded data to the second terminal device, the image synthesis module can continue to perform image synthesis based on the second part of the layer data, so that the process of the first application sending the first encoded data and the process of the image synthesis module performing image synthesis according to the second part can proceed simultaneously, reducing the screen projection delay of the first terminal device and improving the user experience.
  • encoding according to the second image to obtain second encoded data may include:
  • When the first application sends the first encoded data to the second terminal device, the encoder performs encoding according to the second image to obtain the second encoded data.
  • when the first application sends the first encoded data to the second terminal device, the encoder can continue to encode according to the second image, so that the process of the first application sending the first encoded data and the encoder's encoding according to the second image can be performed at the same time, reducing the screen projection delay of the first terminal device.
  • performing image synthesis based on the first part of the layer data to obtain the first image may include:
  • In response to the layer data, the image synthesis module performs image synthesis according to the first part of the layer data to obtain the first image.
  • the first terminal device can obtain the current display status of the first terminal device in real time.
  • When the display state is the screen-off state, it indicates that the first terminal device does not need to display the interface to be projected simultaneously. Therefore, after obtaining the layer data, the image synthesis module can directly perform image synthesis based on the first part of the layer data without waiting for the Vsync signal, which effectively reduces the waiting time of the image synthesis module and the screen projection delay.
  • encoding according to the first image to obtain the first encoded data may include:
  • Determine the first division information corresponding to the first image, and perform encoding according to the first image and the first division information to obtain the first encoded data, where the first division information includes the division method corresponding to the layer data.
  • In order to enable the second terminal device to correctly obtain the interface to be projected after decoding the first image and the second image, and to prevent the interface from being displayed abnormally on the second terminal device, the first terminal device can add the corresponding division information when encoding according to the first image or the second image through the encoder. Therefore, when the second terminal device decodes the first image and the second image, it can obtain the first division information corresponding to the first image and the second division information corresponding to the second image, so that it can accurately obtain the interface to be projected according to the first division information and the second division information.
  • the method may also include:
  • the third division information includes the division method, the total number of images, and the image transmission method corresponding to the layer data;
  • After the first terminal device determines the division method, it can send the corresponding division information to the second terminal device separately. For example, before performing screencasting, the first terminal device may first send the division information to the second terminal device. The encoder then no longer needs to add division information to each image separately, which reduces the information added during the encoding process and thereby improves the encoding speed.
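One way to picture this "send the division information once" optimization is as a small metadata message exchanged before casting begins. The field names and the JSON encoding below are assumptions made for illustration, not the patent's actual wire format:

```python
import json

# Hypothetical pre-session metadata (the "third division information").
third_division_info = {
    "division_method": "horizontal",  # how each frame's layer data is split
    "total_images": 2,                # sub-images per frame
    "send_mode": "sequential",        # how the sub-images are transmitted
}
payload = json.dumps(third_division_info).encode("utf-8")

# The receiver parses the metadata once and reuses it for every later frame,
# so the encoder never has to embed division info in individual images.
received = json.loads(payload.decode("utf-8"))
assert received == third_division_info
```

Sending this once per session trades a tiny setup message for less per-frame metadata, which is where the claimed encoding-speed improvement comes from.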
  • embodiments of the present application provide a screen projection method, applied to a second terminal device.
  • the method may include:
  • the first encoded data is the encoded data corresponding to the first part of the layer data, the second encoded data is the encoded data corresponding to the second part of the layer data, and the layer data is the layer data corresponding to the interface to be projected on the first terminal device;
  • the interface to be projected is obtained, and the interface to be projected is displayed.
  • decoding the first encoded data and the second encoded data respectively to obtain the first image and the second image may include:
  • the first encoded data is decoded to obtain the first image and the first division information corresponding to the first image, where the first division information includes the division method corresponding to the layer data, the image serial number corresponding to the first image, and the total number of images corresponding to the layer data; the second encoded data is decoded to obtain the second image and the second division information corresponding to the second image, where the second division information includes the division method corresponding to the layer data, the image serial number corresponding to the second image, and the total number of images corresponding to the layer data.
  • obtaining the interface to be projected based on the first image and the second image may include:
  • the first image and the second image are spliced according to the first division information and the second division information to obtain the interface to be projected.
  • the method may further include:
  • the third division information includes the division method corresponding to the layer data, the total number of images, and the image transmission method;
  • Obtaining the interface to be projected based on the first image and the second image includes:
  • the first image and the second image are spliced according to the third division information to obtain the interface to be projected.
  • Embodiments of the present application provide a screen projection device, applied to a first terminal device.
  • the device may include:
  • A layer drawing module, configured to draw the layer of the interface to be projected when a screen projection instruction is detected, and to obtain the layer data corresponding to the interface to be projected;
  • An image synthesis module configured to perform image synthesis according to the first part of the layer data to obtain the first image
  • An encoding module configured to encode according to the first image to obtain first encoded data
  • a sending module configured to send the first encoded data to the second terminal device
  • An image synthesis module, also configured to perform image synthesis based on the second part of the layer data, while encoding according to the first image or while sending the first encoded data to the second terminal device, to obtain the second image;
  • An encoding module also configured to perform encoding according to the second image to obtain second encoded data
  • the sending module is also configured to send the second encoded data to the second terminal device.
  • the image synthesis module is specifically configured to perform image synthesis based on the second part to obtain the second image when the encoding module encodes according to the first image.
  • the image synthesis module is specifically configured to perform image synthesis according to the second part, when the sending module sends the first encoded data to the second terminal device, to obtain the second image.
  • the encoding module is specifically configured to perform encoding according to the second image, when the sending module sends the first encoded data to the second terminal device, to obtain the second encoded data.
  • the image synthesis module is further configured to perform image synthesis according to the first part of the layer data in response to the layer data to obtain the first image.
  • the encoding module is also used to determine the first division information corresponding to the first image, and to perform encoding according to the first image and the first division information to obtain the first encoded data, where the first division information includes the division method corresponding to the layer data, the image serial number corresponding to the first image, and the total number of images corresponding to the layer data.
  • the device may further include:
  • a division information determination module configured to determine the third division information corresponding to the layer data, where the third division information includes the division method, the total number of images, and the image transmission method corresponding to the layer data;
  • a division information sending module configured to send the third division information to the second terminal device.
  • Embodiments of the present application provide a screen projection device, applied to a second terminal device.
  • the device may include:
  • An encoded data acquisition module, configured to obtain the first encoded data and the second encoded data respectively sent by the first terminal device, where the first encoded data is the encoded data corresponding to the first part of the layer data, the second encoded data is the encoded data corresponding to the second part of the layer data, and the layer data is the layer data corresponding to the interface to be projected on the first terminal device;
  • a decoding module configured to decode the first encoded data and the second encoded data respectively to obtain a first image and a second image
  • An interface display module is configured to obtain the interface to be projected based on the first image and the second image, and display the interface to be projected.
  • the decoding module is specifically configured to decode the first encoded data to obtain the first image and the first division information corresponding to the first image, where the first division information includes the division method corresponding to the layer data, the image serial number corresponding to the first image, and the total number of images corresponding to the layer data; and to decode the second encoded data to obtain the second image and the second division information corresponding to the second image, where the second division information includes the division method corresponding to the layer data, the image serial number corresponding to the second image, and the total number of images corresponding to the layer data.
  • the interface display module is specifically configured to splice the first image and the second image according to the first division information and the second division information to obtain the interface to be projected.
  • the device may further include:
  • a division information acquisition module configured to obtain third division information sent by the first terminal device, where the third division information includes the division method corresponding to the layer data, the total number of images, and the image transmission method;
  • the interface display module is also configured to splice the first image and the second image according to the third division information to obtain the interface to be projected.
  • Embodiments of the present application provide a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the terminal device implements the screen projection method described in any one of the above second aspects, or implements the screen projection method described in any one of the above third aspects.
  • embodiments of the present application provide a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program which, when executed by a computer, causes the computer to implement the screen projection method described in any one of the above second aspects.
  • embodiments of the present application provide a computer program product.
  • When the computer program product is run on a terminal device, the terminal device is caused to execute the screen projection method described in any one of the above second aspects, or the screen projection method described in the above third aspect.
  • Figure 1 is a schematic structural diagram of a terminal device to which the screen projection method provided by an embodiment of the present application is applicable;
  • Figure 2 is a schematic diagram of the software architecture applicable to the screen projection method provided by an embodiment of the present application
  • Figure 3 is a flow chart of a screen projection method
  • Figure 4 is a flow chart of a screen projection method provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of the application scenario of the division method provided by the embodiment of the present application.
  • Figure 6 is a flow chart of a screen projection method provided by another embodiment of the present application.
  • Figure 7 is a flow chart of a screen projection method provided by another embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a screen projection method provided by an embodiment of the present application.
  • The steps involved in the screencasting method provided in the embodiments of this application are only examples; not all steps must be performed, and not all information or content in a message is required. During use, steps and content can be added or removed as needed.
  • The same steps, or steps and messages with the same function, in different embodiments of the present application can be cross-referenced.
  • Screen sharing between terminal devices has become a common function in people's daily lives.
  • Content or media files (such as photo galleries, music, and videos) displayed on small-screen terminal devices such as mobile phones or tablets can be projected onto large screens such as TVs and smart screens through wireless projection to improve the viewing effect.
  • A phone can also be connected to a laptop or tablet, or a tablet to a laptop, through multi-screen collaboration; after the connection, the content on the phone can be cast to the laptop or tablet, or the content on the tablet cast to the laptop, for simultaneous display, realizing cross-device resource sharing and collaborative operations.
  • The general process of screencasting is as follows: the screencasting application on the sending end performs layer drawing and, after the drawing is completed, notifies the SurfaceFlinger component to synthesize the drawn layer data. After the synthesis is completed, the SurfaceFlinger component notifies the encoder to encode the synthesized image. After the encoding is completed, the encoder notifies the screencasting application to send the encoded video stream and other encoded data to the receiving end. When the receiving end receives the encoded data, it can decode and display it.
  • the synthesis process of the layer data, the encoding process of the image, and the sending process of the encoded data all take a long time, and the three are executed sequentially, resulting in a large screen casting delay that affects the user experience.
  • embodiments of the present application provide a screen casting method, a terminal device, and a computer-readable storage medium.
  • the first application of the first terminal device can perform layer drawing on the interface to be projected, obtain layer data corresponding to the interface to be projected, and send the layer data to the image synthesis module of the first terminal device.
  • the image synthesis module may divide the layer data into at least two parts, and the at least two parts may include a first part and a second part. Subsequently, the image synthesis module may perform image synthesis according to the first part, obtain the first image, and send the first image to the encoder of the first terminal device.
  • the encoder may perform encoding according to the first image, obtain the first encoded data, and send the first encoded data to the first application. After receiving the first encoded data, the first application may send the first encoded data to the second terminal device through the transmission module.
  • the image synthesis module can continue to perform image synthesis according to the second part to obtain the second image, and can continue to send the second image to the encoder.
  • the encoder may also continue to encode according to the second image, obtain the second encoded data, and send the second encoded data to the first application.
  • the first application can send the second encoded data to the second terminal device through the transmission module.
  • when the second terminal device receives the first encoded data and the second encoded data sent by the first terminal device, it can decode the first encoded data and the second encoded data to obtain the first image and the second image, and splice and display the first image and the second image.
  • the layer data can be divided into at least a first part and a second part, and image synthesis, encoding, and sending can be performed for the first part and the second part respectively. The image synthesis, encoding, and sending performed according to the second part can be executed in parallel with the encoding and sending performed on the first image corresponding to the first part, so that the synthesis process performed by the image synthesis module, the encoding process performed by the encoder, and the sending process performed by the first application are executed in parallel. This can effectively reduce the screen casting delay and improve the user's screen casting experience, with strong ease of use and practicality.
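  • the parallel pipeline described above can be sketched in ordinary Python. This is an illustrative model only, not the claimed implementation: `compose`, `encode`, and the queue-based hand-off are hypothetical stand-ins for the image synthesis module, the encoder, and the transmission module.

```python
import threading
import queue

def compose(part):
    # stand-in for the image synthesis module (e.g. SurfaceFlinger): tag the layer data
    return f"img({part})"

def encode(image):
    # stand-in for the video encoder (e.g. H.264/H.265)
    return f"enc({image})"

def pipelined_cast(layer_data, sent):
    """Split layer data into two parts and pipeline compose -> encode -> send."""
    to_encode, to_send = queue.Queue(), queue.Queue()

    def encoder():
        for _ in range(2):
            to_send.put(encode(to_encode.get()))

    def sender():
        for _ in range(2):
            sent.append(to_send.get())  # stand-in for network transmission

    threads = [threading.Thread(target=encoder), threading.Thread(target=sender)]
    for t in threads:
        t.start()
    # The image synthesis module composes part 1, hands it off, then composes
    # part 2: encoding/sending of part 1 overlaps with composing part 2.
    half = len(layer_data) // 2
    to_encode.put(compose(layer_data[:half]))
    to_encode.put(compose(layer_data[half:]))
    for t in threads:
        t.join()

sent = []
pipelined_cast(["L0", "L1", "L2", "L3"], sent)
print(sent)  # parts arrive in order: first half, then second half
```

  • because the hand-off is queue-based, encoding and sending of the first part overlap with synthesizing the second part, which is the overlap the embodiment uses to cut the end-to-end delay.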
  • both the first terminal device and the second terminal device may be a mobile phone, a tablet computer, a wearable device, a vehicle-mounted device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a desktop computer, or another terminal device with a display screen.
  • FIG. 1 shows a schematic structural diagram of a terminal device 100.
  • the terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, an antenna 1, an antenna 2, a mobile communication module 140, a wireless communication module 150, a sensor module 160, buttons 170, a display screen 180, etc.
  • the sensor module 160 may include a pressure sensor 160A, a gyroscope sensor 160B, a magnetic sensor 160C, an acceleration sensor 160D, a touch sensor 160E, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the terminal device 100.
  • the terminal device 100 may include more or fewer components than shown in the figures, or some components may be combined, or some components may be split, or the components may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have been recently used or recycled by processor 110 . If the processor 110 needs to use the instructions or data again, it can be called directly from the memory. Repeated access is avoided and the waiting time of the processor 110 is reduced, thus improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • Interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 110 may include multiple sets of I2C buses.
  • Processor 110 may couple touch sensor 160E, etc. through a different I2C bus interface.
  • the processor 110 can be coupled to the touch sensor 160E through an I2C interface, so that the processor 110 and the touch sensor 160E communicate through the I2C bus interface to implement the touch function of the terminal device 100 .
  • the MIPI interface can be used to connect the processor 110 and peripheral devices such as the display screen 180 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 and the display screen 180 communicate through a DSI interface to implement the display function of the terminal device 100 .
  • the USB interface 130 is an interface that complies with the USB standard specification, and may be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to transmit data between the terminal device 100 and peripheral devices.
  • the interface connection relationships between the modules illustrated in the embodiments of the present application are only schematic illustrations and do not constitute a structural limitation on the terminal device 100 .
  • the terminal device 100 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the wireless communication function of the terminal device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 140, the wireless communication module 150, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in terminal device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • the mobile communication module 140 can provide wireless communication solutions including 2G/3G/4G/5G applied on the terminal device 100.
  • the mobile communication module 140 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 140 can receive electromagnetic waves through the antenna 1, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 140 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 140 may be disposed in the processor 110 .
  • at least part of the functional modules of the mobile communication module 140 and at least part of the modules of the processor 110 may be provided in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor displays images or videos through display screen 180 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110 and may be provided in the same device as the mobile communication module 140 or other functional modules.
  • the wireless communication module 150 may provide wireless communication solutions applied on the terminal device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network or Wi-Fi peer-to-peer (Wi-Fi P2P) direct connection), Bluetooth (BT), ultra-wideband (UWB), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 150 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 150 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 150 can also receive the signal to be sent from the processor 110, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the terminal device 100 is coupled to the mobile communication module 140, and the antenna 2 is coupled to the wireless communication module 150, so that the terminal device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or the satellite based augmentation systems (SBAS).
  • the terminal device 100 implements display functions through a GPU, a display screen 180, an application processor, and the like.
  • the GPU is an image processing microprocessor and is connected to the display screen 180 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 180 is used to display images, videos, etc.
  • Display 180 includes a display panel.
  • the display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the terminal device 100 may include 1 or N display screens 180, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the terminal device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital video.
  • the terminal device 100 may support one or more video codecs. In this way, the terminal device 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural network (NN) computing processor.
  • the NPU can realize intelligent cognitive applications of the terminal device 100, such as image recognition, face recognition, speech recognition, text understanding, etc.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the internal memory 121 may include a program storage area and a data storage area. The program storage area can store an operating system, at least one application program required by a function (such as an image playback function), and the like.
  • the data storage area may store data created during the use of the terminal device 100, and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc.
  • the processor 110 executes various functional applications and data processing of the terminal device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the pressure sensor 160A is used to sense pressure signals and can convert the pressure signals into electrical signals.
  • the pressure sensor 160A may be disposed on the display screen 180 .
  • there are many types of pressure sensors 160A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors.
  • a capacitive pressure sensor may include at least two parallel plates of conductive material.
  • the terminal device 100 determines the intensity of the pressure based on the change in capacitance.
  • the terminal device 100 detects the intensity of the touch operation according to the pressure sensor 160A.
  • the terminal device 100 may also calculate the touched position based on the detection signal of the pressure sensor 160A.
  • touch operations acting on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold is applied to the short message application icon, an instruction to create a new short message is executed.
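  • the threshold behavior described above can be illustrated with a minimal sketch. The threshold value and function name here are hypothetical; the actual first pressure threshold is device-specific.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # hypothetical normalized threshold

def dispatch_touch(pressure):
    """Map touch intensity on the short message icon to an instruction (illustrative)."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"          # light press: view the message
    return "create_new_short_message"        # pressure >= threshold: create a new one

print(dispatch_touch(0.2))  # view_short_message
print(dispatch_touch(0.8))  # create_new_short_message
```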
  • the gyro sensor 160B may be used to determine the motion posture of the terminal device 100 .
  • the angular velocity of the terminal device 100 around three axes (i.e., the x, y, and z axes) can be determined through the gyro sensor 160B.
  • the gyro sensor 160B can be used for navigation and somatosensory game scenes.
  • Magnetic sensor 160C includes a Hall sensor.
  • the terminal device 100 can detect the opening and closing of the flip holster using the magnetic sensor 160C.
  • the terminal device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 160C, and then set features such as automatic unlocking upon flipping open based on the detected opening/closing state of the holster or the flip cover.
  • the acceleration sensor 160D can detect the acceleration of the terminal device 100 in various directions (generally three axes). When the terminal device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the terminal device, used in horizontal and vertical screen switching, pedometer and other applications.
  • Touch sensor 160E is also called a "touch device”.
  • the touch sensor 160E can be disposed on the display screen 180.
  • the touch sensor 160E and the display screen 180 form a touch screen, which is also called a "touch screen”.
  • the touch sensor 160E is used to detect a touch operation on or near the touch sensor 160E.
  • Touch sensor 160E may pass the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation may be provided through the display screen 180 .
  • the touch sensor 160E may also be disposed on the surface of the terminal device 100 in a position different from that of the display screen 180 .
  • the buttons 170 include a power button, a volume button, etc.
  • the button 170 may be a mechanical button or a touch button.
  • the terminal device 100 may receive key input and generate key signal input related to user settings and function control of the terminal device 100 .
  • the software system of the terminal device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of this application takes the Android system with a layered architecture as an example to illustrate the software structure of the terminal device 100 .
  • Figure 2 is a software structure block diagram of the terminal device 100 according to the embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has clear roles and division of labor.
  • the layers communicate through software interfaces.
  • the Android system is divided into four layers, from top to bottom: application layer, application framework layer, Android runtime and system libraries, and kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include camera, gallery, calendar, calling, map, navigation, WLAN, Bluetooth, music, video, short message and other applications.
  • the application framework layer provides an application programming interface (API) and programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, content provider, view system, phone manager, resource manager, notification manager, etc.
  • a window manager is used to manage window programs.
  • the window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make this data accessible to applications.
  • Said data can include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, etc.
  • a view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide communication functions of the terminal device 100 .
  • for example, call status management (including connected, hung up, etc.).
  • the resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager allows applications to display notification information in the status bar, which can be used to convey notification-type messages and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager may also present notifications that appear in the status bar at the top of the system in the form of charts or scroll-bar text, such as notifications for applications running in the background, or notifications that appear on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is played, the terminal device vibrates, or the indicator light flashes.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: one part is the function modules that the Java language needs to call, and the other part is the core libraries of Android.
  • the application layer and application framework layer run in virtual machines.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform object life cycle management, stack management, thread management, security and exception management, and garbage collection and other functions.
  • System libraries can include multiple functional modules. For example: surface manager (surface manager), media libraries (Media Libraries), 3D graphics processing libraries (for example: OpenGL ES), 2D graphics engines (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, H.265, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition, and layer processing.
  • 2D Graphics Engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the screen projection method can be applied to the first terminal device to project the interface to be projected in the first terminal device to the second terminal device for synchronous display.
  • the interface to be projected may refer to an interface corresponding to the first application of the first terminal device.
  • the interface may be an interface currently being displayed on the first terminal device, or may be an interface that is about to be displayed on the first terminal device.
  • the first application of the first terminal device when detecting a screen casting instruction, can perform layer drawing on the interface to be projected to obtain layer data corresponding to the interface to be projected.
  • the projection instruction is used to instruct the projection interface corresponding to the first application to be projected to the second terminal device for synchronous display.
  • the image synthesis module of the first terminal device (such as the SurfaceFlinger component of the first terminal device, etc.) can obtain the layer data and perform image synthesis on the layer data to obtain image A (i.e., the image sent to the second terminal device for synchronous display) and image B (i.e., the image displayed by the first terminal device itself).
  • the SurfaceFlinger component can send image A to the encoder of the first terminal device.
  • the encoder can encode the image A to obtain encoded data such as a video stream, for example, a video stream in a format such as H.264 or H.265. Then, the first application can obtain the encoded data such as the video stream encoded by the encoder, and send the encoded data such as the video stream to the second terminal device through the transmission module of the first terminal device.
  • the second terminal device can receive the encoded data such as the video stream sent by the first terminal device, decode the encoded data such as the video stream, obtain a decoded image, and display the decoded image.
  • to reduce the waiting time for synthesizing image A, the SurfaceFlinger component can first synthesize the layer data to obtain image A, and then synthesize the layer data to obtain image B.
  • however, the synthesis process performed by the SurfaceFlinger component (i.e., the process of synthesizing image A), the encoding process performed by the encoder (i.e., the process of encoding image A), and the sending process performed by the first application (i.e., the process of sending the encoded data corresponding to image A to the second terminal device) are generally executed sequentially. That is, the SurfaceFlinger component first synthesizes all the layer data to obtain a complete image A; then the encoder encodes the complete image A to obtain encoded data such as a video stream; finally, the first application sends the encoded data such as the video stream to the second terminal device.
  • the time for the SurfaceFlinger component to synthesize image A, the time for the encoder to encode image A, and the time for the first application to send encoded data such as video streams are all relatively long.
  • the time for the SurfaceFlinger component to synthesize image A can reach 8ms.
  • the time it takes for the encoder to encode image A can reach 10 ms, and the time it takes for the first application to send the encoded data, such as the video stream corresponding to image A, can reach 10 ms. Therefore, when the synthesis process performed by the SurfaceFlinger component, the encoding process performed by the encoder, and the sending process performed by the first application are executed sequentially, a large screen casting delay results, affecting the user experience.
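  • using the example figures above (8 ms synthesis, 10 ms encoding, 10 ms sending), the cost of strictly sequential execution and the benefit of pipelining two parts can be estimated with a short calculation. It assumes, purely for illustration, that each half of the image takes half the time at every stage.

```python
compose_ms, encode_ms, send_ms = 8, 10, 10  # example stage times from the text

# Sequential execution: the three stages run back to back on the full image A.
sequential = compose_ms + encode_ms + send_ms

# Two-part pipeline: track when each stage finishes for each half
# (classic pipeline recurrence: a stage starts when both the stage is free
# and the half has left the previous stage).
stages = [compose_ms / 2, encode_ms / 2, send_ms / 2]
done = [0.0] * len(stages)
for _ in range(2):  # two halves flow through the pipeline
    avail = 0.0
    for i, t in enumerate(stages):
        start = max(done[i], avail)
        done[i] = start + t
        avail = done[i]
pipelined = done[-1]

print(sequential)  # 28  (ms, sequential end-to-end latency)
print(pipelined)   # 19.0 (ms, latency with the two-part pipeline)
```

  • under these assumptions the end-to-end latency drops from 28 ms to 19 ms, since encoding and sending of the first half overlap with later stages of the pipeline.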
  • in addition, the image synthesis cycle of the SurfaceFlinger component needs to match the refresh frequency of the first terminal device. That is, when synthesizing layer data, the SurfaceFlinger component also needs to wait for the Vsync signal triggered by the hardware composer (Hardware Composer, HWC); only when the SurfaceFlinger component receives the Vsync signal does it start synthesizing the layer data, and image A and image B are synthesized within one Vsync signal period. The HWC generally triggers the Vsync signal periodically. As shown in Figure 3, when the refresh frequency of the first terminal device is 60 Hz, the triggering period of the Vsync signal is generally 16 ms.
  • that is, the HWC generally triggers the Vsync signal every 16 ms to notify the SurfaceFlinger component to perform the synthesis process, which can cause the SurfaceFlinger component to wait a long time when compositing layer data (for example, the longest waiting time can reach 16 ms), resulting in a large screen casting delay and affecting the user experience.
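  • the 16 ms figure follows directly from the refresh rate: the Vsync period is the reciprocal of the refresh frequency, and it also bounds the worst-case wait before synthesis can start. A quick check:

```python
def vsync_period_ms(refresh_hz):
    # time between consecutive Vsync signals at a given refresh rate
    return 1000.0 / refresh_hz

period = vsync_period_ms(60)
print(f"{period:.1f}")  # 16.7 -- commonly rounded to 16 ms, as in the text
# A composition request that just misses a Vsync waits up to one full period:
print(f"worst-case wait ~{period:.1f} ms")
```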
  • embodiments of the present application provide a screen casting method to effectively reduce screen casting delay and improve user experience.
  • FIG. 4 shows a flow chart of a screen projection method provided by an embodiment of the present application.
  • the screen projection method can be applied to the first terminal device to project the interface to be projected in the first terminal device to the second terminal device for synchronous display.
  • when detecting the screen casting instruction, the first application of the first terminal device can perform layer drawing on the interface to be projected, obtain the layer data corresponding to the interface to be projected, and send the layer data to the image synthesis module of the first terminal device.
  • the image synthesis module may divide the layer data into at least two parts, and the at least two parts may include a first part and a second part. Subsequently, the image synthesis module may perform image synthesis according to the first part, obtain the first image, and send the first image to the encoder of the first terminal device.
  • the encoder may perform encoding according to the first image, obtain the first encoded data, and send the first encoded data to the first application. After receiving the first encoded data, the first application may send the first encoded data to the second terminal device through the transmission module.
  • the image synthesis module can also continue to perform image synthesis according to the second part to obtain the second image, and can continue to send the second image to the encoder.
  • the encoder can continue to encode according to the second image, obtain the second encoded data, and send the second encoded data to the first application.
  • the first application can send the second encoded data to the second terminal device through the transmission module.
  • when the second terminal device receives the first encoded data and the second encoded data sent by the first terminal device, it can decode the first encoded data and the second encoded data to obtain the first image and the second image, and can splice the first image and the second image and display the spliced result.
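The divide-synthesize-encode-send flow described above can be sketched as three worker stages connected by queues; while the encoder works on the first part's image, the synthesizer can already produce the second part's image, and sending overlaps with both. The stage names, the two-part split, and the string payloads are illustrative assumptions, not the actual SurfaceFlinger or encoder interfaces.

```python
import threading
import queue

def run_pipeline(parts):
    encode_q, send_q, sent = queue.Queue(), queue.Queue(), []

    def synthesizer():
        for part in parts:                 # image synthesis per part
            encode_q.put(f"image({part})")
        encode_q.put(None)                 # end-of-stream marker

    def encoder():
        while (img := encode_q.get()) is not None:
            send_q.put(f"encoded({img})")  # encode while the next part is synthesized
        send_q.put(None)

    def sender():
        while (data := send_q.get()) is not None:
            sent.append(data)              # send while the next image is encoded

    workers = [threading.Thread(target=f) for f in (synthesizer, encoder, sender)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return sent

result = run_pipeline(["first part", "second part"])
print(result)
```

Because each queue is FIFO, the receiver obtains the encoded parts in the original order even though the three stages run concurrently.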
  • when the image synthesis module performs image synthesis according to the first part to obtain the first image, it may directly perform image synthesis on all the layer data of the first part to obtain the first image; alternatively, it may perform image synthesis on part of the layer data in the first part to obtain the first image. For example, the difference between the first part of the current frame and the first part of the previous frame may be determined, and the layer data corresponding to that difference may be used for image synthesis to obtain the first image.
  • the current frame refers to the interface to be projected that is currently to be sent to the second terminal device
  • the previous frame refers to the interface to be projected that is sent to the second terminal device before the current frame is sent.
  • when the second terminal device decodes and obtains the first image corresponding to the current frame, it can perform restoration processing based on the first image corresponding to the previous frame to obtain the complete first image.
  • when the image synthesis module performs image synthesis based on the second part to obtain the second image, it may directly perform image synthesis on all the layer data in the second part to obtain the second image; alternatively, it may perform image synthesis on part of the layer data in the second part to obtain the second image.
  • when the encoder performs encoding according to the first image to obtain the first encoded data, it may directly encode all the data of the first image to obtain the first encoded data; alternatively, part of the data in the first image may be encoded to obtain the first encoded data. For example, the difference between the first image corresponding to the current frame and the first image corresponding to the previous frame may be determined, and the data corresponding to the difference may be encoded to obtain the first encoded data.
  • when the second terminal device obtains the first encoded data corresponding to the current frame (that is, the first encoded data corresponding to the difference), it can decode it in combination with the first encoded data corresponding to the previous frame to obtain the complete first image.
  • when the encoder performs encoding according to the second image to obtain the second encoded data, it may directly encode all the data of the second image to obtain the second encoded data; alternatively, part of the data in the second image may be encoded to obtain the second encoded data.
  • the specific manner in which the second terminal device combines the first image corresponding to the previous frame to restore the first image corresponding to the current frame, or combines the first encoded data corresponding to the previous frame to decode the complete first image of the current frame, can be determined by technicians based on actual scenarios, and the embodiments of the present application impose no restrictions on this.
  • the layer data can be divided into at least a first part and a second part, and image synthesis, encoding, and sending can be performed based on the first part and the second part respectively. The synthesis, encoding, and sending processes performed according to the second part can be executed in parallel with the encoding and sending processes performed according to the first image corresponding to the first part, so that the synthesis process executed by the image synthesis module, the encoding process executed by the encoder, and the sending process executed by the first application run in parallel, which can effectively reduce the screen casting delay and improve the user's screen casting experience.
  • the above-mentioned screen projection instruction can be used to instruct the first terminal device to project the screen interface to be projected corresponding to the first application to the second terminal device for synchronous display.
  • the first application can be any application in the first terminal device, that is, the first terminal device can project the interface of any application to the second terminal device for synchronous display.
  • the screen projection instruction can be generated by triggering by the user, or can be generated by default by the first terminal device.
  • when the user needs to project the interface currently displayed on the first terminal device to the second terminal device for display, the user can touch the screen casting button in the first terminal device.
  • when the first terminal device detects that the screen casting button is touched, it may generate a screen casting instruction to instruct the first terminal device to perform a screen casting operation.
  • when the user needs to project the interface currently displayed on the first terminal device to the second terminal device for display, the user can bring the first preset area of the first terminal device into contact with the second preset area of the second terminal device.
  • when the first terminal device detects the touch operation, it can generate a screen casting instruction to instruct the first terminal device to perform a screen casting operation.
  • the first preset area and the second preset area can be specifically set according to the actual situation.
  • the first preset area can be set to the area corresponding to the NFC chip in the first terminal device
  • the second preset area can be set to the area corresponding to the NFC chip in the second terminal device.
  • the user can set a time on the first terminal device to automatically project the screen to the second terminal device (for example, the time can be set to 21:00 of the day).
  • the first terminal device may actively generate a screen casting instruction to instruct the first terminal device to perform a screen casting operation.
  • the transmission module may be a wired communication module, a mobile communication module, or a wireless communication module; that is, the first application can use wired communication methods such as USB, mobile communication methods such as 2G/3G/4G/5G/6G, or wireless communication methods such as Bluetooth, Wi-Fi, Wi-Fi p2p, and UWB to send the first encoded data and the second encoded data to the second terminal device.
  • the image synthesis module may be the SurfaceFlinger component of the first terminal device.
  • layer data refers to data corresponding to one or more layers of the interface to be projected.
  • Dividing the layer data into the first part and the second part means dividing the data corresponding to each layer into data A and data B, uniformly determining data A corresponding to all layers as the first part, and uniformly determining data B corresponding to all layers as the second part, thereby dividing the interface to be projected into a first area (i.e., the image area corresponding to the first part) and a second area (i.e., the image area corresponding to the second part).
  • the first part and the second part do not overlap, that is, the first area and the second area do not overlap.
  • any division method can be used to divide the layer data, and the specific division method can be set by technicians according to the actual scenario.
  • the division method may be equal division or non-equal division.
  • Equal division means that the image size of the first region corresponding to the first part is the same as the image size of the second region corresponding to the second part.
  • Non-equal division means that the image size of the first region corresponding to the first part is different from the image size of the second region corresponding to the second part.
  • the image size of the first area may be greater than the image size of the second area, or the image size of the second area may be greater than the image size of the first area.
  • the following takes the dividing method as equal division as an example for illustrative explanation.
  • FIG. 5 shows a schematic diagram of the application scenario of the division method provided by the embodiment of the present application.
  • the first terminal device can be a mobile phone
  • the interface to be projected can be a photo album interface.
  • the album interface can include multiple images and related action buttons (such as photos, albums, moments, and discoveries).
  • This application scenario exemplifies the division method by dividing the layer data and obtaining the corresponding areas of the first part and the second part in the interface to be projected.
  • the SurfaceFlinger component can divide the layer data through horizontal division, that is, the data corresponding to each layer can be divided into data A corresponding to the upper side and data B corresponding to the lower side. , thereby dividing the interface to be projected into an upper first area 501 (ie, the area above the dotted line) and a lower second area 502 (ie, the area below the dotted line).
  • the SurfaceFlinger component can divide the layer data through vertical division, that is, the data corresponding to each layer can be divided into data A corresponding to the left and data B corresponding to the right. , thereby dividing the interface to be projected into a first area 501 on the left (ie, the area to the left of the dotted line) and a second area 502 on the right (ie, the area to the right of the dotted line).
  • the SurfaceFlinger component can divide the layer data by diagonal division, that is, the data corresponding to each layer can be divided into data A corresponding to the upper left side and data B corresponding to the lower right side, thereby dividing the interface to be projected into a first area 501 on the upper left side (i.e., the area on the upper left side of the dotted line) and a second area 502 on the lower right side (i.e., the area on the lower right side of the dotted line).
  • alternatively, the SurfaceFlinger component can divide the layer data by diagonal division such that the data corresponding to each layer is divided into data A corresponding to the lower left side and data B corresponding to the upper right side, thereby dividing the interface to be projected into a first area 501 on the lower left side (that is, the area on the lower left side of the dotted line) and a second area 502 on the upper right side (that is, the area on the upper right side of the dotted line).
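As a sketch, the horizontal and vertical division modes above can be expressed as rectangle arithmetic over the interface size. The function name and the (left, top, right, bottom) convention are assumptions for illustration; diagonal division would need non-rectangular regions and is omitted here.

```python
# Divide an interface of size (width, height) into two non-overlapping areas.
# Rectangles are (left, top, right, bottom) in pixels.

def divide(width, height, mode):
    if mode == "horizontal":  # first area above the line, second area below it
        return (0, 0, width, height // 2), (0, height // 2, width, height)
    if mode == "vertical":    # first area left of the line, second to its right
        return (0, 0, width // 2, height), (width // 2, 0, width, height)
    raise ValueError(f"unsupported division mode: {mode}")

first_area, second_area = divide(1080, 2340, "horizontal")
print(first_area, second_area)
```

With equal division as shown, both areas have the same size; a non-equal split would simply move the dividing line away from the midpoint.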
  • For example, the time required for the SurfaceFlinger component to synthesize according to the first part and the time required to synthesize according to the second part may both be 4 ms; the time required for the encoder to encode according to the first image and the time required to encode according to the second image may both be 5 ms; and the time required for the first application to send the first encoded data to the second terminal device and the time required to send the second encoded data to the second terminal device may both be 5 ms.
  • the process of the SurfaceFlinger component synthesizing according to the second part and the process of the encoder encoding according to the first image can be executed in parallel; the process of the encoder encoding according to the second image and the process of the first application sending the first encoded data to the second terminal device through the transmission module can also be executed in parallel. That is, in the embodiment of the present application, the total time T1 required from the start of layer data synthesis to the completion of sending all encoded data is (4+5+5+5) ms.
  • By contrast, the total time T0 required for the screen projection method shown in Figure 3 to complete this process is (8+10+10) ms. Obviously, T1 < T0.
  • That is, the time required by the screen projection method provided by the embodiment of the present application is less than the time required by the screen projection method shown in Figure 3; for example, it can be 9 ms shorter. In other words, the screencasting method provided by the embodiment of the present application can effectively reduce the screencasting delay and improve the user's screencasting experience.
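The timing comparison above can be checked with a small pipeline-makespan calculation; the stage durations are the example values from the text, and the helper function is an illustrative sketch.

```python
def pipeline_makespan(stage_times_ms, parts):
    """Finish time of the last part flowing through sequential pipeline stages,
    where each stage can process only one part at a time."""
    finish = [0] * len(stage_times_ms)        # per-stage finish times
    for _ in range(parts):
        t = 0
        for i, duration in enumerate(stage_times_ms):
            t = max(t, finish[i]) + duration  # wait until the stage is free
            finish[i] = t
    return finish[-1]

t0 = pipeline_makespan([8, 10, 10], parts=1)  # whole frame: synthesize, encode, send
t1 = pipeline_makespan([4, 5, 5], parts=2)    # two halves, stages overlap
print(t0, t1, t0 - t1)   # 28 19 9
```

This reproduces the text's figures: T0 = 28 ms for the sequential method of Figure 3, T1 = 19 ms for the two-part pipeline, a 9 ms reduction.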
  • The above example in which the SurfaceFlinger component divides the layer data into two parts (the first part and the second part) is only an illustrative explanation and should not be understood as a limitation of the embodiment of the present application; in the embodiment of the present application, the SurfaceFlinger component can also divide layer data into three or more parts.
  • the number of divisions corresponding to the layer data can be specifically set by technicians according to actual scenarios, and the embodiments of the present application do not impose any restrictions on this.
  • technicians can set the number of divisions corresponding to the layer data based on the central processing unit (Central Processing Unit, CPU) performance and/or scheduling efficiency of the first terminal device.
  • a first terminal device with a good CPU performance has relatively strong data processing capabilities and transmission capabilities, while a first terminal device with a poor CPU performance has relatively poor data processing capabilities and transmission capabilities. Therefore, for the first terminal device with better CPU performance, a larger number of divisions can be set; for the first terminal device with poor CPU performance, a smaller number of divisions can be set.
  • a first terminal device with good scheduling efficiency has relatively strong data processing and transmission capabilities, while a first terminal device with poor scheduling efficiency has relatively weak data processing and transmission capabilities. Therefore, for a first terminal device with good scheduling efficiency, a larger number of divisions can be set; for a first terminal device with poor scheduling efficiency, a smaller number of divisions can be set.
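A hypothetical heuristic following the guidance above might pick the division count from the weaker of the two capabilities; the scores, thresholds, and tiering below are invented for illustration and are not values from this application.

```python
# Stronger devices (CPU performance, scheduling efficiency) get more divisions;
# the weakest capability limits how much parallelism pays off.

def division_count(cpu_score, scheduling_score):
    capability = min(cpu_score, scheduling_score)
    if capability >= 80:
        return 4
    if capability >= 50:
        return 3
    return 2

print(division_count(90, 85), division_count(60, 70), division_count(40, 90))
```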
  • the SurfaceFlinger component can first synthesize according to the first part to obtain the first image (for example, the image corresponding to the first area), and can send the first image to the encoder for encoding processing to obtain the first encoded data.
  • Then, the SurfaceFlinger component can continue to synthesize based on the second part to obtain the second image (for example, the image corresponding to the second area), and continue to send the second image to the encoder for encoding processing to obtain the second encoded data.
  • the SurfaceFlinger component can synthesize based on the first part or the second part in any way, and the encoder can also use any encoding method to encode based on the first image or the second image; that is, the synthesis method of the SurfaceFlinger component and the encoding method of the encoder can be set by technicians according to the actual scenario.
  • the format of the first image is the same as the format of the second image.
  • both the first image and the second image can be images in YUV format, or both the first image and the second image can be images in RGB format, and so on. That is, the SurfaceFlinger component can synthesize an image in YUV format based on the first part and the second part respectively, or the SurfaceFlinger component can synthesize an image in RGB format based on the first part and the second part respectively.
  • both the first encoded data and the second encoded data can be video streams in H.264 format, or both the first encoded data and the second encoded data can be video streams in H.265 format, etc.
  • in order to enable the second terminal device to correctly splice the first image and the second image after decoding them, and to avoid an out-of-order problem when the interface to be projected is displayed on the second terminal device, the first terminal device can add corresponding division information when encoding according to the first image or the second image through the encoder. Therefore, after the second terminal device decodes the first image and the second image, it can accurately splice the first image and the second image according to the division information.
  • the division information is used to describe the division method corresponding to the layer data (that is, the division method corresponding to the interface to be projected) and the position information of the image (i.e., the first image or the second image, etc.) in the interface to be projected.
  • the division information may include division mode, image serial number, total number of images, etc.
  • the image serial number is used to describe the position information of each image in the interface to be projected.
  • the total number of images is used to describe how many images (or regions) the interface to be projected is divided into. For example, when the interface to be projected is divided into two images, the total number of images can be 2. For example, when the interface to be projected is divided into three images, the total number of images can be 3. For example, when the interface to be projected is not divided, the total number of images can be 1.
  • when encoding based on the first image, the encoder can add the division information corresponding to the first image. Therefore, the first encoded data corresponding to the first image may not only include the first image, but also include the field content "horizontal division" corresponding to the division method, the field content "1" corresponding to the image serial number, and the field content "3" corresponding to the total number of images. Similarly, when encoding according to the second image, the encoder can add the division information corresponding to the second image.
  • Therefore, the second encoded data corresponding to the second image may not only include the second image, but also include the field content "horizontal division" corresponding to the division method, the field content "2" corresponding to the image serial number, and the field content "3" corresponding to the total number of images.
  • Similarly, when encoding according to the third image, the encoder may add the division information corresponding to the third image. Therefore, the third encoded data corresponding to the third image may not only include the third image, but also include the field content "horizontal division" corresponding to the division method, the field content "3" corresponding to the image serial number, and the field content "3" corresponding to the total number of images.
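The per-image division information described above could be modeled as a small record paired with each encoded payload; the field names and the pairing shown below are illustrative assumptions (a real encoder would carry such data in stream metadata).

```python
from dataclasses import dataclass

@dataclass
class DivisionInfo:
    mode: str      # e.g. "horizontal division"
    serial: int    # position of this image within the interface
    total: int     # how many images the interface is divided into

# Three encoded payloads, each carrying its division information.
packets = [
    ("first encoded data", DivisionInfo("horizontal division", 1, 3)),
    ("second encoded data", DivisionInfo("horizontal division", 2, 3)),
    ("third encoded data", DivisionInfo("horizontal division", 3, 3)),
]
for payload, info in packets:
    print(payload, "->", info.mode, info.serial, "of", info.total)
```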
  • After acquiring the first encoded data, the second terminal device can decode the first encoded data to obtain the first image, the division method (i.e., horizontal division), the image serial number corresponding to the first image (i.e., 1), and the total number of images (i.e., 3). Similarly, after obtaining the second encoded data, the second terminal device can decode the second encoded data to obtain the second image, the division method (i.e., horizontal division), the image serial number corresponding to the second image (i.e., 2), and the total number of images (i.e., 3).
  • Similarly, the second terminal device can decode the third encoded data to obtain the third image, the division method (i.e., horizontal division), the image serial number corresponding to the third image (i.e., 3), and the total number of images (i.e., 3).
  • based on the division method and the serial number of each image, the second terminal device can determine that the first image is the upper-side image of the interface to be projected, the second image is the middle image of the interface to be projected, and the third image is the lower-side image of the interface to be projected. Therefore, the second terminal device can splice the first image, the second image, and the third image according to the positional relationship from top to bottom, and display the spliced image.
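The receiver-side splicing for horizontal division can be sketched as sorting decoded images by serial number and stacking them top to bottom; representing images as lists of pixel rows is an assumption for illustration.

```python
def splice_horizontal(images_with_serial):
    """Splice decoded images top-to-bottom: lower serial numbers sit higher."""
    ordered = sorted(images_with_serial, key=lambda pair: pair[0])
    rows = []
    for _, image in ordered:
        rows.extend(image)   # append this image's pixel rows below the previous ones
    return rows

# Images may arrive out of order; the serial numbers restore the layout.
decoded = [(2, [["middle"]]), (1, [["top"]]), (3, [["bottom"]])]
print(splice_horizontal(decoded))
```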
  • the division information corresponding to the division method can be sent to the second terminal device separately.
  • the first terminal device may first send the division information to the second terminal device. Therefore, when encoding according to each image, the encoder no longer needs to add division information to each image separately, so as to reduce the addition of information during the encoding process, thereby improving the encoding speed.
  • the division information may include division method, image transmission method, total number of images, etc.
  • the image sending method is used to describe the order in which the first terminal device sends each image to represent the image serial number corresponding to each image, thereby identifying the position information of each image in the interface to be projected.
  • the image sending method may be the order from left to right, that is, the first terminal device may first send the first encoded data corresponding to the first image to the second terminal device, and then send the second encoded data corresponding to the second image to the second terminal device.
  • the second terminal device can obtain the first encoded data and the second encoded data in sequence.
  • based on the previously obtained division information and the acquisition order of the encoded data, the second terminal device can determine that the first image corresponding to the first encoded data is the left-side image of the interface to be projected, and that the second image corresponding to the second encoded data is the right-side image of the interface to be projected. Therefore, the second terminal device can splice the first image and the second image according to the positional relationship from left to right, and display the spliced image.
  • the image sending method can also be set by default according to the dividing method.
  • the division information sent by the first terminal device to the second terminal device in advance may only include the division method and the total number of images.
  • For example, when dividing by horizontal division, the image sending mode may be set to the top-to-bottom order by default; that is, the first terminal device may send the encoded data corresponding to each image to the second terminal device in top-to-bottom order by default. For example, when dividing by vertical division, the image sending mode may be set to the left-to-right order by default; that is, the first terminal device may send the encoded data corresponding to each image to the second terminal device in left-to-right order by default.
  • after the second terminal device obtains the encoded data corresponding to each image in sequence, it can determine the position of each image in the interface to be projected according to the division method and the default image sending method, splice the decoded images according to that position information, and display the spliced image.
  • the image sending method may default to the order from left to right, that is, the first terminal device may send the first encoded data corresponding to the first image, the second encoded data corresponding to the second image, and the third encoded data corresponding to the third image to the second terminal device in sequence from left to right. That is to say, the second terminal device can obtain the first encoded data, the second encoded data, and the third encoded data in sequence.
  • Accordingly, the second terminal device can determine that the first image corresponding to the first encoded data is the left-side image of the interface to be projected, the second image corresponding to the second encoded data is the middle image of the interface to be projected, and the third image corresponding to the third encoded data is the right-side image of the interface to be projected, and based on this, splice the first image, the second image, and the third image and display the spliced result.
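When the division information is sent once in advance, the receiver can infer each image's serial number purely from arrival order, as sketched below; the dictionary fields and payload names are illustrative assumptions.

```python
# Division info delivered up front, before any encoded data.
division_info = {"mode": "vertical division",
                 "send_order": "left-to-right",
                 "total": 3}

# Encoded data then arrives in the agreed order; position = arrival index.
arrivals = ["first encoded data", "second encoded data", "third encoded data"]
serials = {payload: i for i, payload in enumerate(arrivals, start=1)}
print(serials)
```

This is the trade-off the text describes: the per-image metadata disappears from the stream, at the cost of relying on in-order delivery.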
  • the SurfaceFlinger component can synthesize the layer data only once, and the synthesized image can be used directly for display on the first terminal device and directly for display on the second terminal device, so as to reduce the number of image synthesis operations performed by the SurfaceFlinger component and reduce the waste of resources such as CPU, memory, and power of the first terminal device.
  • However, the image synthesis cycle of the SurfaceFlinger component needs to remain consistent with the refresh frequency of the first terminal device to avoid display abnormalities on the first terminal device. That is, after the SurfaceFlinger component obtains the layer data, it generally needs to wait for the Vsync signal; the layer data can be synthesized only when the Vsync signal is detected. As a result, when the SurfaceFlinger component synthesizes layer data, it may need to wait for a long time (for example, the longest waiting time can reach 16 ms), which will also cause a large screen projection delay and affect the user experience.
  • FIG. 6 shows a flow chart of a screen projection method provided by another embodiment of the present application.
  • the screen projection method can be applied to the first terminal device to project the interface to be projected in the first terminal device to the second terminal device for synchronous display.
  • when detecting the screencasting instruction, the first application of the first terminal device can perform layer drawing on the interface to be projected, obtain the layer data corresponding to the interface to be projected, and send the layer data to the SurfaceFlinger component of the first terminal device.
  • the first terminal device can determine the current display status of the first terminal device.
  • when the current display state is the screen-off state, the first terminal device may instruct the SurfaceFlinger component not to wait for the Vsync signal. Therefore, after the SurfaceFlinger component obtains the layer data, it can directly perform image synthesis based on the layer data to obtain the image without waiting for the Vsync signal. Subsequently, the SurfaceFlinger component can send the image to the encoder of the first terminal device.
  • after the encoder obtains the image, it can perform encoding according to the image, obtain encoded data, and send the encoded data to the first application. After receiving the encoded data, the first application can send the encoded data to the second terminal device through the transmission module. When the second terminal device receives the encoded data sent by the first terminal device, it can decode the encoded data and display the decoded image.
  • the first terminal device can obtain the current display status of the first terminal device in real time.
  • the display status can include screen-off status and screen-on status.
  • when the display state is the screen-off state, it indicates that the first terminal device does not need to synchronously display the interface to be projected.
  • In this case, the SurfaceFlinger component only needs to synthesize the image displayed on the second terminal device, and does not need to synthesize the image displayed on the first terminal device.
  • when the display state of the first terminal device is the screen-off state, after the SurfaceFlinger component obtains the layer data, it can directly perform image synthesis based on the layer data without waiting for the Vsync signal, thereby effectively reducing the waiting time of the SurfaceFlinger component and reducing the screencasting delay.
  • the first terminal device may determine whether the first application corresponding to the interface to be cast is the target application.
  • the target applications are applications without frame rate control, such as some game applications and wallpaper applications.
  • Frame rate refers to the frequency with which images in frames appear continuously on the display interface.
  • the target application can be specifically set by technicians based on actual scenarios.
  • when the first application is not the target application, the first terminal device may detect the current display state of the first terminal device; when the current display state is the screen-off state, the first terminal device can instruct the SurfaceFlinger component to perform image synthesis in a timely manner based on the obtained layer data without waiting for the Vsync signal.
  • when the first application is the target application, because such target applications without frame rate control generally draw layers faster, if image synthesis is not paced by the Vsync signal, the SurfaceFlinger component may not have time to process the layer data, resulting in interface display issues such as frame skipping. At this time, the SurfaceFlinger component needs to wait for the Vsync signal and perform image synthesis on the layer data when the Vsync signal is received.
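The two conditions above (display state, and whether the casting app is a frame-rate-uncontrolled target application) can be combined into a single decision; the function and parameter names below are invented for illustration.

```python
def must_wait_for_vsync(display_state, is_target_app):
    """Return True when SurfaceFlinger should keep pacing synthesis by Vsync."""
    if is_target_app:              # uncontrolled frame rate: Vsync prevents frame skipping
        return True
    return display_state != "off"  # screen on: stay in step with the local display

print(must_wait_for_vsync("off", False),  # screen off, normal app: synthesize immediately
      must_wait_for_vsync("on", False),
      must_wait_for_vsync("off", True))
```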
  • FIG. 7 shows a flow chart of a screen projection method provided by another embodiment of the present application.
  • the screen projection method can be applied to the first terminal device to project the interface to be projected in the first terminal device to the second terminal device for synchronous display.
  • when detecting the screencasting instruction, the first application of the first terminal device can perform layer drawing on the interface to be projected, obtain the layer data corresponding to the interface to be projected, and can send the layer data to the SurfaceFlinger component of the first terminal device. At the same time, the first terminal device may determine the current display status of the first terminal device.
  • when the current display state is the screen-off state, the first terminal device can instruct the SurfaceFlinger component not to wait for the Vsync signal. Therefore, after the SurfaceFlinger component obtains the layer data, it can directly divide the layer data into at least the first part and the second part, and can directly perform image synthesis based on the first part to obtain the first image without waiting for the Vsync signal, thereby reducing the waiting time of the SurfaceFlinger component and reducing the screen projection delay. Subsequently, the SurfaceFlinger component may send the first image to the encoder of the first terminal device.
  • After acquiring the first image, the encoder can perform encoding according to the first image to obtain the first encoded data, and send the first encoded data to the first application. After receiving the first encoded data, the first application may send the first encoded data to the second terminal device through the transmission module.
  • At the same time, the image synthesis module can also continue to perform image synthesis according to the second part to obtain the second image, and can continue to send the second image to the encoder.
  • the encoder may also continue to encode according to the second image to obtain second encoded data, and may send the second encoded data to the first application.
  • the first application can send the second encoded data to the second terminal device through the transmission module.
  • When the second terminal device receives the first encoded data and the second encoded data sent by the first terminal device, it can decode the first encoded data and the second encoded data to obtain the first image and the second image, and splice and display the first image and the second image.
  • In this way, when the current display state of the first terminal device is the screen-off state, the first terminal device can execute the synthesis process performed by the SurfaceFlinger component, the encoding process performed by the encoder, and the sending process performed by the first application in parallel, and can instruct the SurfaceFlinger component not to wait for the Vsync signal, thereby reducing the screen casting delay and improving the user's screen casting experience.
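  • The Vsync-wait decision described above can be sketched as follows. This is a hypothetical Python simplification: the names `should_wait_for_vsync`, `SCREEN_OFF`, and `is_target_app`, and the precedence of the screen-off check over the target-application check, are illustrative assumptions rather than the actual SurfaceFlinger logic.

```python
# Hypothetical sketch of the Vsync-wait decision described above.

SCREEN_OFF = "screen_off"
SCREEN_ON = "screen_on"

def should_wait_for_vsync(display_state, is_target_app):
    """Decide whether the SurfaceFlinger component should wait for the Vsync signal.

    When the local display is off (pure screen projection), synthesis can start
    as soon as layer data arrives; when a frame-rate-uncontrolled target
    application is drawing on a lit screen, synthesis stays paced by Vsync to
    avoid frame skipping.
    """
    if display_state == SCREEN_OFF:
        return False          # synthesize immediately, no Vsync wait
    if is_target_app:
        return True           # fast drawers must be paced by Vsync
    return True               # default: keep the normal Vsync-driven path
```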
  • FIG. 8 shows a schematic flow chart of a screen projection method provided by an embodiment of the present application. As shown in Figure 8, the method may include:
  • the first terminal device detects the screen projection command.
  • the projection instruction is used to instruct the first terminal device to project the to-be-projected interface corresponding to the first application to the second terminal device for synchronous display.
  • the first application may be any application in the first terminal device.
  • the screen projection instruction may be generated by triggering by the user, or may be generated by default by the first terminal device.
  • the first application of the first terminal device sends the layer data corresponding to the interface to be projected to the SurfaceFlinger component of the first terminal device.
  • the first application of the first terminal device can perform layer drawing on the interface to be projected, obtain the layer data corresponding to the interface to be projected, and send the layer data to the SurfaceFlinger component of the first terminal device.
  • the interface to be projected may refer to an interface corresponding to the first application.
  • the interface may be an interface currently being displayed on the first terminal device, or may be an interface that is about to be displayed on the first terminal device.
  • the first application can use any method to draw layers on the interface to be projected to obtain layer data, and the embodiments of the present application do not impose any restrictions on this.
  • the SurfaceFlinger component performs image synthesis based on the first part of the layer data to obtain the first image.
  • the layer data includes at least the first part and the second part.
  • layer data refers to data corresponding to one or more layers of the interface to be projected.
  • the SurfaceFlinger component can divide the layer data into at least a first part and a second part. Dividing the layer data into the first part and the second part means dividing the data corresponding to each layer, so that the data corresponding to each layer is divided into data A and data B; the data A corresponding to all layers is collectively determined as the first part, and the data B corresponding to all layers is collectively determined as the second part.
  • The first part and the second part do not overlap; that is, the first area and the second area do not overlap.
  • The SurfaceFlinger component can use any division method to divide the layer data, and the specific division method can be set by technicians according to the actual scenario. For details, please refer to the foregoing description of the division method, which will not be repeated here.
  • the layer data can be divided with reference to the division method shown in FIG. 5 .
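  • As a rough illustration of the division just described, the following Python sketch splits every layer the same way (here, a simple top/bottom split of pixel rows, analogous to the method referenced as FIG. 5). The data structures and function names are hypothetical simplifications, not the actual layer-buffer representation.

```python
# Illustrative division of layer data into two non-overlapping parts.

def split_layer(pixel_rows):
    """Split one layer's rows into non-overlapping data A (top) and data B (bottom)."""
    mid = len(pixel_rows) // 2
    return pixel_rows[:mid], pixel_rows[mid:]

def divide_layer_data(layers):
    """Divide every layer the same way: all data A forms the first part,
    all data B forms the second part."""
    first_part, second_part = [], []
    for rows in layers:
        a, b = split_layer(rows)
        first_part.append(a)
        second_part.append(b)
    return first_part, second_part

layers = [[[1] * 4 for _ in range(4)],   # layer 0: 4 rows of pixels
          [[2] * 4 for _ in range(4)]]   # layer 1: 4 rows of pixels
first, second = divide_layer_data(layers)
```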
  • the SurfaceFlinger component can composite the first part into an image in YUV format, or it can composite the first part into an image in RGB format.
  • the encoder of the first terminal device performs encoding according to the first image to obtain the first encoded data.
  • After the SurfaceFlinger component performs image synthesis based on the first part and obtains the first image, the first image can be sent to the encoder of the first terminal device, so that the encoder can encode the first image to obtain the first encoded data.
  • the encoder may encode the first image into a video stream in H.264 format, or encode the first image into a video stream in H.265 format.
  • the first application sends the first encoded data to the second terminal device.
  • the encoder performs encoding according to the first image, and after obtaining the first encoded data, may send the first encoded data to the first application, so that the first application sends the first encoded data to the second terminal device.
  • the first application may send the first encoded data to the second terminal device through the transmission module.
  • the transmission module may be a wired communication module, a mobile communication module, or a wireless communication module. That is, the first application can send the first encoded data to the second terminal device through a wired communication method such as USB, a mobile communication method such as 2G/3G/4G/5G/6G, or a wireless communication method such as Bluetooth, Wi-Fi, Wi-Fi p2p, or UWB.
  • the SurfaceFlinger component performs image synthesis according to the second part to obtain the second image.
  • the process of image synthesis by the SurfaceFlinger component based on the second part is similar to the process of image synthesis by the SurfaceFlinger component based on the first part.
  • the second image may be an image in YUV format, or may be an image in RGB format.
  • After the SurfaceFlinger component performs image synthesis based on the first part and obtains the first image, it can continue to perform image synthesis based on the second part. At the same time, the SurfaceFlinger component can send the first image to the encoder for encoding. That is, the image synthesis process based on the second part and the encoding process of the encoder based on the first image can be executed in parallel to reduce the screen projection delay.
  • the encoder performs encoding according to the second image to obtain second encoded data.
  • After the SurfaceFlinger component performs image synthesis according to the second part and obtains the second image, the second image can be sent to the encoder, so that the encoder can encode the second image to obtain the second encoded data.
  • the encoding process by the encoder based on the second image is similar to the encoding process by the encoder based on the first image. For details, reference may be made to the aforementioned encoding process based on the first image, which will not be described again here.
  • the second encoded data may be a video stream in H.264 format, or may be a video stream in H.265 format.
  • the first application sends the second encoded data to the second terminal device.
  • the encoder performs encoding according to the second image, and after obtaining the second encoded data, the second encoded data can be sent to the first application, so that the first application sends the second encoded data to the second terminal device.
  • the first application may send the second encoded data to the second terminal device through the transmission module.
  • the transmission module may be a wired communication module, a mobile communication module, or a wireless communication module. That is, the first application can send the second encoded data to the second terminal device through a wired communication method such as USB, a mobile communication method such as 2G/3G/4G/5G/6G, or a wireless communication method such as Bluetooth, Wi-Fi, Wi-Fi p2p, or UWB.
  • In this way, the SurfaceFlinger component can perform image synthesis based on the first part and the second part of the layer data respectively, the encoder can perform encoding according to the first image and the second image respectively, and the first application can send the first encoded data and the second encoded data respectively, so as to achieve parallel execution of synthesis, encoding, and sending, thereby reducing the screen projection delay.
  • For the process in which the SurfaceFlinger component performs image synthesis according to the first part and the second part respectively, the process in which the encoder performs encoding according to the first image and the second image respectively, and the process in which the first application sends the first encoded data and the second encoded data respectively, reference may be made to the foregoing description, which will not be repeated here. The synthesis, encoding, and sending processes may be performed in parallel in the manner shown in FIG. 4.
  • When the encoder encodes according to the first image and/or when the first application sends the first encoded data, the SurfaceFlinger component can continue to perform image synthesis according to the second part to obtain the second image, and can continue to send the second image to the encoder. After receiving the second image, the encoder can continue to encode according to the second image to obtain the second encoded data, and can send the second encoded data to the first application. After receiving the second encoded data, the first application can send the second encoded data to the second terminal device through the transmission module.
  • That is, the process of image synthesis by the SurfaceFlinger component based on the second part can be executed in parallel with the process of encoding by the encoder based on the first image, or with the process of the first application sending the first encoded data; likewise, the process of the encoder encoding according to the second image can be executed in parallel with the process of the first application sending the first encoded data, so as to effectively reduce the screen projection delay and improve the user experience.
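  • The parallelism described above can be sketched as a three-stage pipeline. The Python sketch below is a minimal illustration using threads and queues; `run_pipeline` and the stage callbacks are hypothetical stand-ins for the SurfaceFlinger component, the encoder, and the first application's transmission module, not their real interfaces.

```python
# Minimal synthesis -> encode -> send pipeline: while part 1 is being
# encoded or sent, part 2 can already be synthesized, and so on.
import queue
import threading

def run_pipeline(parts, synthesize, encode, send):
    to_encode, to_send = queue.Queue(), queue.Queue()

    def synth_stage():                     # plays the SurfaceFlinger role
        for part in parts:                 # part 1, then part 2, ...
            to_encode.put(synthesize(part))
        to_encode.put(None)                # end-of-stream marker

    def encode_stage():                    # plays the encoder role
        while (image := to_encode.get()) is not None:
            to_send.put(encode(image))
        to_send.put(None)

    def send_stage():                      # plays the first application role
        while (data := to_send.get()) is not None:
            send(data)

    stages = [threading.Thread(target=f)
              for f in (synth_stage, encode_stage, send_stage)]
    for t in stages:
        t.start()
    for t in stages:
        t.join()

sent = []
run_pipeline(["part1", "part2"],
             synthesize=lambda p: f"image({p})",
             encode=lambda i: f"encoded({i})",
             send=sent.append)
```

Because each stage hands its result to the next through a FIFO queue, the encoded data for part 1 is sent while part 2 is still moving through the earlier stages, which is the overlap that reduces the end-to-end delay.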
  • the second terminal device decodes the first encoded data and the second encoded data, obtains the first image and the second image, and displays the first image and the second image after splicing.
  • When the encoder performs encoding according to the first image or the second image, corresponding division information may be added. Therefore, after the second terminal device decodes the first encoded data and the second encoded data to obtain the first image and the second image, it can accurately splice the first image and the second image according to the division information, and display the spliced image.
  • the division information is used to describe the division method corresponding to the layer data and the position information of the image (ie, the first image or the second image, etc.) in the interface to be projected.
  • the division information may include division mode, image serial number, total number of images, etc.
  • the image serial number is used to describe the position information of each image in the interface to be projected.
  • the total number of images is used to describe how many images (or regions) the interface to be projected is divided into.
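  • The division information and the receiver-side splicing it enables can be sketched as follows. The field names (`mode`, `serial`, `total`) and the list-based "images" are illustrative assumptions, not the actual bitstream format.

```python
# Hypothetical division information carried with each encoded image, and
# the receiver-side splicing driven by it.

DIVISION_TOP_BOTTOM = "top_bottom"

def make_division_info(mode, serial, total):
    """Division info: division mode, image serial number, total number of images."""
    return {"mode": mode, "serial": serial, "total": total}

def splice(images_with_info):
    """Reassemble the projected interface by ordering received images by serial."""
    total = images_with_info[0][1]["total"]
    assert len(images_with_info) == total, "wait until every sub-image arrives"
    ordered = sorted(images_with_info, key=lambda item: item[1]["serial"])
    return [img for img, _ in ordered]

# Sub-images may arrive out of order; the serial numbers restore their positions.
frames = [("second-image-rows", make_division_info(DIVISION_TOP_BOTTOM, 1, 2)),
          ("first-image-rows", make_division_info(DIVISION_TOP_BOTTOM, 0, 2))]
interface = splice(frames)
```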
  • the first terminal device can obtain the current display status of the first terminal device in real time.
  • When the current display state is the screen-off state, the first terminal device may instruct the SurfaceFlinger component not to wait for the Vsync signal. Therefore, after obtaining the layer data, the SurfaceFlinger component can directly perform image synthesis based on the first part, and after completing the image synthesis of the first part, directly perform image synthesis based on the second part, without waiting to receive the Vsync signal before starting image synthesis, thereby reducing the waiting time of the SurfaceFlinger component and the screen projection delay.
  • That is, when the current display state of the first terminal device is the screen-off state, the first terminal device can project the interface to be projected according to the screen casting method shown in Figure 7. For specific content, please refer to the description of the embodiment corresponding to Figure 7, which will not be repeated here.
  • In the embodiments of the present application, the layer data can be divided into at least a first part and a second part, and synthesis, encoding, and transmission can be performed according to the first part and the second part respectively. The synthesis, encoding, transmission, and other processing performed according to the second part can be executed in parallel with the encoding, transmission, and other processing performed according to the first image corresponding to the first part, so that the synthesis process performed by the image synthesis module, the encoding process performed by the encoder, and the sending process performed by the first application are executed in parallel, thereby effectively reducing the screen casting delay and improving the user's screen casting experience.
  • The sequence number of each step in the above embodiments does not imply the order of execution. The execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • Embodiments of the present application also provide a screen projection device, and each module of the device can correspond to each step of the screen projection method. The device can be divided into different functional units or modules according to its internal structure to complete all or part of the functions described above.
  • Each functional unit and module in the embodiment can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • The above-mentioned integrated unit can be implemented in the form of hardware, or in the form of software functional units.
  • the specific names of each functional unit and module are only for the convenience of distinguishing each other and are not used to limit the scope of protection of the present application.
  • For the specific working processes of the units and modules in the above system please refer to the corresponding processes in the foregoing method embodiments, and will not be described again here.
  • An embodiment of the present application also provides a terminal device, which includes at least one memory, at least one processor, and a computer program stored in the at least one memory and executable on the at least one processor. When the processor executes the computer program, the terminal device implements the steps in any of the above method embodiments.
  • the structure of the terminal device may be as shown in Figure 1.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program.
  • When the computer program is executed by a computer, it causes the computer to implement the steps in any of the above method embodiments.
  • Embodiments of the present application also provide a computer program product. When the computer program product runs on a terminal device, the terminal device implements the steps in any of the above method embodiments.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • this application can implement all or part of the processes in the methods of the above embodiments by instructing relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium.
  • When the computer program is executed by a processor, the steps of each of the above method embodiments may be implemented.
  • the computer program includes computer program code, which may be in the form of source code, object code, executable file or some intermediate form.
  • The computer-readable storage medium may at least include: any entity or device capable of carrying the computer program code to a device/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example, a U disk, a mobile hard disk, a magnetic disk, or a CD. In some cases, the computer-readable storage medium may not include electrical carrier signals and telecommunications signals.
  • the disclosed apparatus/terminal equipment and methods can be implemented in other ways.
  • the device/terminal equipment embodiments described above are only illustrative.
  • The division of modules or units is only a logical function division. In actual implementation, there may be other division methods; for example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not implemented.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present application is applicable to the technical field of terminals, and particularly relates to a screen projection method, a terminal device, and a computer-readable storage medium. In the method, layer data corresponding to an interface to be subjected to screen projection may at least comprise a first part and a second part, a first terminal device can perform processing such as image synthesis, encoding and sending respectively according to the first part and the second part, and the processing such as image synthesis, encoding and sending performed according to the second part can be concurrently executed with the processing such as encoding and sending performed according to a first image corresponding to the first part, so as to achieve concurrent execution of image synthesis, encoding and sending during screen projection, thereby effectively reducing the time delay of screen projection, and improving user experience of screen projection.

Description

Screen projection method, terminal device, and computer-readable storage medium
This application claims priority to the Chinese patent application No. 202210254483.2, entitled "Screen Projection Method, Terminal Device and Computer-Readable Storage Medium", filed with the China National Intellectual Property Administration on March 11, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
This application belongs to the field of terminal technology, and in particular relates to a screen projection method, a terminal device, and a computer-readable storage medium.
Background
Screen projection and sharing between terminal devices has become a common function in people's daily lives. The main flow of existing screen projection is as follows: the screen casting application on the sending end performs layer drawing and, after the drawing is completed, notifies the SurfaceFlinger component to synthesize the drawn layer data. After the synthesis is completed, the SurfaceFlinger component notifies the encoder to encode the synthesized image. After the encoding is completed, the encoder notifies the screen casting application to send the encoded data, such as the resulting video stream, to the receiving end. However, the existing screen projection process suffers from problems such as large delay, which affects the user experience.
Summary of the Invention
Embodiments of the present application provide a screen projection method, a terminal device, and a computer-readable storage medium, which can solve the problem of large delay in existing screen projection.
第一方面,本申请实施例提供了一种投屏***,包括第一终端设备和第二终端设备;In a first aspect, embodiments of the present application provide a screen projection system, including a first terminal device and a second terminal device;
所述第一终端设备,用于在检测到投屏指令时,对待投屏界面进行图层绘制,得到所述待投屏界面对应的图层数据;The first terminal device is configured to perform layer drawing on the interface to be projected when detecting a screen casting instruction, and obtain layer data corresponding to the interface to be projected;
所述第一终端设备,还用于根据所述图层数据的第一部分进行图像合成,得到第一图像;The first terminal device is also configured to perform image synthesis according to the first part of the layer data to obtain a first image;
所述第一终端设备,还用于根据所述第一图像进行编码,得到第一编码数据,并向所述第二终端设备发送所述第一编码数据;The first terminal device is further configured to perform encoding according to the first image, obtain first encoded data, and send the first encoded data to the second terminal device;
所述第一终端设备,还用于在根据所述第一图像进行编码时,或者在向所述第二终端设备发送所述第一编码数据时,根据所述图层数据的第二部分进行图像合成,得到第二图像,并根据所述第二图像进行编码,得到第二编码数据;The first terminal device is also configured to perform encoding according to the second part of the layer data when encoding according to the first image, or when sending the first encoding data to the second terminal device. Image synthesis to obtain a second image, and encoding according to the second image to obtain second encoded data;
所述第一终端设备,还用于向所述第二终端设备发送所述第二编码数据;The first terminal device is also configured to send the second encoded data to the second terminal device;
所述第二终端设备,用于获取所述第一编码数据和所述第二编码数据,并对所述第一编码数据和所述第二编码数据进行解码,得到所述第一图像和所述第二图像;The second terminal device is used to obtain the first encoded data and the second encoded data, and decode the first encoded data and the second encoded data to obtain the first image and the second encoded data. Describing the second image;
所述第二终端设备,还用于根据所述第一图像和所述第二图像,得到所述待投屏界面,并对所述待投屏界面进行显示。The second terminal device is further configured to obtain the interface to be projected based on the first image and the second image, and display the interface to be projected.
通过上述的投屏***,在检测到投屏指令时,第一终端设备可以对待投屏界面进行图层绘制,得到待投屏界面对应的图层数据,并根据图层数据的第一部分进行图像合成,得到第一图像。第一终端设备可以根据第一图像进行编码,得到第一编码数据,并向第二终端设备发送第一编码数据。在根据第一图像进行编码或在向第二终端设备发送第一编码数据时,第一终端设备还可以继续根据图层数据的第二部分进行图像合成,得到第二图像,根据第二图像进行编码,得到第二编码数据,并向第二终端设备发送第二编码数据。第二终端设备接收到第一终端设备发送的第一编码数据和第二编码数据时,可以对第一编码数据和第二编码数据进行解码,得到第一图像和第二图像,并根据第一图像和第二图像得到待投屏界面。即第一终端设备可以分别根据第一部分和第二部分进行合成、编码和发送等处理,而且根据第二部分进行的合成、编码、发送等处理过程与根据第一部分对应的第一图像进行的编码、发送等处理过程可以并行执行,以实现投屏时,图像合成过程、编码过程和发送过程三者的并行执行,从而可以有效降低投屏的时延,提升用 户的投屏体验。Through the above-mentioned screen projection system, when detecting a screen projection instruction, the first terminal device can draw the layer of the interface to be projected, obtain the layer data corresponding to the interface to be projected, and perform image processing based on the first part of the layer data. Synthesize to get the first image. The first terminal device may perform encoding according to the first image, obtain the first encoded data, and send the first encoded data to the second terminal device. When encoding according to the first image or sending the first encoded data to the second terminal device, the first terminal device can also continue to perform image synthesis according to the second part of the layer data to obtain the second image, and perform the processing according to the second image. Encoding, obtaining second encoded data, and sending the second encoded data to the second terminal device. When the second terminal device receives the first encoded data and the second encoded data sent by the first terminal device, it can decode the first encoded data and the second encoded data to obtain the first image and the second image, and obtain the first image and the second image according to the first The image and the second image are used to obtain the interface to be projected onto the screen. 
That is, the first terminal device can perform synthesis, encoding, and transmission according to the first part and the second part respectively, and the synthesis, encoding, transmission, and other processing according to the second part are the same as the encoding according to the first image corresponding to the first part. , sending and other processing processes can be executed in parallel to achieve parallel execution of the image synthesis process, encoding process and sending process during screencasting, which can effectively reduce the screencasting delay and improve user experience. users’ screencasting experience.
Exemplarily, the first terminal device includes an image synthesis module, an encoder, and a first application, where the first application is the application corresponding to the interface to be projected.
The first terminal device is configured to perform image synthesis according to the first part through the image synthesis module to obtain the first image.
The first terminal device is configured to perform encoding according to the first image through the encoder to obtain the first encoded data.
The first terminal device is configured to send the first encoded data to the second terminal device through the first application.
In a possible implementation, the image synthesis module is specifically configured to perform image synthesis according to the second part to obtain the second image while the encoder is encoding according to the first image.
In another possible implementation, the image synthesis module is specifically configured to perform image synthesis according to the second part to obtain the second image while the first application is sending the first encoded data to the second terminal device.
In another possible implementation, the encoder is specifically configured to perform encoding according to the second image to obtain the second encoded data while the first application is sending the first encoded data to the second terminal device.
Exemplarily, the image synthesis module is further configured to, in response to the layer data, perform image synthesis according to the first part of the layer data to obtain the first image.
In a possible implementation, the first terminal device is further configured to determine first division information corresponding to the first image, and perform encoding according to the first image and the first division information to obtain the first encoded data, where the first division information includes the division method corresponding to the layer data, the image serial number corresponding to the first image, and the total number of images corresponding to the layer data.
In another possible implementation, the first terminal device is further configured to determine third division information corresponding to the layer data, where the third division information includes the division method, the total number of images, and the image sending method corresponding to the layer data, and to send the third division information to the second terminal device.
Exemplarily, the second terminal device includes a decoder. The decoder is specifically configured to decode the first encoded data to obtain the first image and first division information corresponding to the first image, where the first division information includes the division method corresponding to the layer data, the image serial number corresponding to the first image, and the total number of images corresponding to the layer data; and to decode the second encoded data to obtain the second image and second division information corresponding to the second image, where the second division information includes the division method corresponding to the layer data, the image serial number corresponding to the second image, and the total number of images corresponding to the layer data.
In one example, the second terminal device is specifically configured to splice the first image and the second image according to the first division information and the second division information to obtain the interface to be projected.
In another example, the second terminal device is further configured to obtain third division information sent by the first terminal device, where the third division information includes the division method, the total number of images, and the image sending method corresponding to the layer data, and to splice the first image and the second image according to the third division information to obtain the interface to be projected.
In a second aspect, an embodiment of this application provides a screen projection method applied to a first terminal device. The method may include:
when a screen projection instruction is detected, performing layer drawing on an interface to be projected, to obtain layer data corresponding to the interface to be projected;
performing image synthesis according to a first part of the layer data to obtain a first image;
performing encoding according to the first image to obtain first encoded data;
sending the first encoded data to a second terminal device;
while encoding is performed according to the first image, or while the first encoded data is sent to the second terminal device, performing image synthesis according to a second part of the layer data to obtain a second image, and performing encoding according to the second image to obtain second encoded data;
sending the second encoded data to the second terminal device.
In the above screen projection method, the layer data corresponding to the interface to be projected may include at least a first part and a second part, and the first terminal device may perform image synthesis, encoding, and sending separately for the first part and for the second part. Moreover, the image synthesis, encoding, and sending performed for the second part may run in parallel with the encoding and sending performed for the first image corresponding to the first part, so that image synthesis, encoding, and sending are executed in parallel during screen projection. This effectively reduces the screen projection latency and improves the user's screen projection experience.
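The parallel execution described above behaves like a small three-stage pipeline. The following is a minimal illustrative sketch only, not the patented implementation: the layer data is split into two parts, and while part 1 is still being encoded or sent, part 2 can already be composed. All names (`compose`, `encode`, `send`, `pipelined_cast`) are hypothetical placeholders.

```python
import threading
import queue

def compose(part):        # stands in for image synthesis of one part of the layer data
    return f"image({part})"

def encode(image):        # stands in for encoding one composed image
    return f"encoded({image})"

def send(data, sent):     # stands in for sending one piece of encoded data
    sent.append(data)

def pipelined_cast(parts):
    """Run compose, encode, and send as overlapping pipeline stages."""
    q_img, q_enc, sent = queue.Queue(), queue.Queue(), []

    def composer():
        for p in parts:
            q_img.put(compose(p))   # part 2 is composed while part 1 is encoded/sent
        q_img.put(None)

    def encoder():
        while (img := q_img.get()) is not None:
            q_enc.put(encode(img))
        q_enc.put(None)

    def sender():
        while (data := q_enc.get()) is not None:
            send(data, sent)

    threads = [threading.Thread(target=t) for t in (composer, encoder, sender)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sent

print(pipelined_cast(["part1", "part2"]))
```

Because each stage runs in its own thread and hands work downstream through a queue, the encoded parts still arrive at the receiver in order even though the stages overlap in time.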
Exemplarily, the performing image synthesis according to the first part of the layer data to obtain the first image may include:
performing, by an image synthesis module, image synthesis according to the first part to obtain the first image.
The performing encoding according to the first image to obtain the first encoded data includes:
performing, by an encoder, encoding according to the first image to obtain the first encoded data.
The sending the first encoded data to the second terminal device includes:
sending, by a first application, the first encoded data to the second terminal device.
In a possible implementation, the performing image synthesis according to the second part of the layer data to obtain the second image, while encoding is performed according to the first image or while the first encoded data is sent to the second terminal device, includes:
while the encoder performs encoding according to the first image, performing, by the image synthesis module, image synthesis according to the second part to obtain the second image.
In the screen projection method provided by this implementation, the first terminal device may perform image synthesis through the image synthesis module, perform encoding according to the synthesized first image or second image through the encoder, and send the resulting first encoded data or second encoded data to the second terminal device through the first application. Therefore, while the encoder encodes according to the first image, the image synthesis module can continue image synthesis according to the second part of the layer data, so that the encoder's encoding of the first image and the image synthesis module's synthesis of the second part proceed at the same time, reducing the screen projection latency of the first terminal device and improving the user experience.
In another possible implementation, the performing image synthesis according to the second part of the layer data to obtain the second image, while encoding is performed according to the first image or while the first encoded data is sent to the second terminal device, may include:
while the first application sends the first encoded data to the second terminal device, performing, by the image synthesis module, image synthesis according to the second part to obtain the second image.
In the screen projection method provided by this implementation, while the first application sends the first encoded data to the second terminal device, the image synthesis module can continue image synthesis according to the second part of the layer data, so that the sending of the first encoded data by the first application and the image synthesis according to the second part by the image synthesis module proceed at the same time, reducing the screen projection latency of the first terminal device and improving the user experience.
In another possible implementation, the performing encoding according to the second image to obtain the second encoded data may include:
while the first application sends the first encoded data to the second terminal device, performing, by the encoder, encoding according to the second image to obtain the second encoded data.
In the screen projection method provided by this implementation, while the first application sends the first encoded data to the second terminal device, the encoder can continue encoding according to the second image, so that the sending of the first encoded data by the first application and the encoding according to the second image by the encoder proceed at the same time, reducing the screen projection latency of the first terminal device.
In one example, the performing image synthesis according to the first part of the layer data to obtain the first image may include:
in response to the layer data, performing, by the image synthesis module, image synthesis according to the first part of the layer data to obtain the first image.
In the screen projection method provided by this implementation, the first terminal device can obtain its current display state in real time during screen projection. When the display state is the screen-off state, the first terminal device does not need to display the interface to be projected synchronously. Therefore, after obtaining the layer data, the image synthesis module can directly perform image synthesis according to the first part of the layer data without waiting for a Vsync signal, which effectively reduces the waiting time of the image synthesis module and reduces the screen projection latency.
In a possible implementation, the performing encoding according to the first image to obtain the first encoded data may include:
determining first division information corresponding to the first image, and performing encoding according to the first image and the first division information to obtain the first encoded data, where the first division information includes the division method corresponding to the layer data, the image serial number corresponding to the first image, and the total number of images corresponding to the layer data.
In the screen projection method provided by this implementation, to enable the second terminal device to correctly obtain the interface to be projected after decoding the first image and the second image, and to avoid out-of-order display of the interface to be projected on the second terminal device, the first terminal device may add the corresponding division information when performing encoding according to the first image or the second image through the encoder. Therefore, when the second terminal device decodes the first image and the second image, it can obtain the first division information corresponding to the first image and the second division information corresponding to the second image, and can thus accurately obtain the interface to be projected according to the first division information and the second division information.
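The division information carried with each image can be pictured as a small metadata record, mirroring the three fields named above: division method, image serial number, and total number of images. The sketch below is a hypothetical illustration (field and function names are invented, and the real encoded representation is not specified here) of how a receiver could use it to restore display order:

```python
from dataclasses import dataclass

@dataclass
class DivisionInfo:
    division_method: str   # e.g. "horizontal" (illustrative value)
    image_index: int       # serial number of this image within the frame
    total_images: int      # total number of images the layer data was split into

def reassemble(chunks):
    """Sort received (DivisionInfo, image) pairs back into display order."""
    ordered = sorted(chunks, key=lambda c: c[0].image_index)
    # sanity check: every chunk agrees on how many images the frame has
    assert all(c[0].total_images == len(ordered) for c in ordered)
    return [img for _, img in ordered]

# Chunks may be decoded out of order; the division info restores the order.
chunks = [(DivisionInfo("horizontal", 2, 2), "img2"),
          (DivisionInfo("horizontal", 1, 2), "img1")]
print(reassemble(chunks))   # ['img1', 'img2']
```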
In another possible implementation, the method may further include:
determining third division information corresponding to the layer data, where the third division information includes the division method, the total number of images, and the image sending method corresponding to the layer data;
sending the third division information to the second terminal device.
In the screen projection method provided by this implementation, after determining the division method, the first terminal device may send the division information corresponding to the division method to the second terminal device separately. For example, before screen projection starts, the first terminal device may first send the division information to the second terminal device. Therefore, when encoding the images, the encoder does not need to add the division information to each image individually, which reduces the information added during the encoding process and thus improves the encoding speed.
In a third aspect, an embodiment of this application provides a screen projection method applied to a second terminal device. The method may include:
obtaining first encoded data and second encoded data respectively sent by a first terminal device, where the first encoded data is encoded data corresponding to a first part of layer data, the second encoded data is encoded data corresponding to a second part of the layer data, and the layer data is layer data corresponding to an interface to be projected of the first terminal device;
decoding the first encoded data and the second encoded data respectively to obtain a first image and a second image;
obtaining the interface to be projected according to the first image and the second image, and displaying the interface to be projected.
Exemplarily, the decoding the first encoded data and the second encoded data respectively to obtain the first image and the second image may include:
decoding the first encoded data to obtain the first image and first division information corresponding to the first image, where the first division information includes the division method corresponding to the layer data, the image serial number corresponding to the first image, and the total number of images corresponding to the layer data;
decoding the second encoded data to obtain the second image and second division information corresponding to the second image, where the second division information includes the division method corresponding to the layer data, the image serial number corresponding to the second image, and the total number of images corresponding to the layer data.
In one example, the obtaining the interface to be projected according to the first image and the second image may include:
splicing the first image and the second image according to the first division information and the second division information to obtain the interface to be projected.
In another example, the method may further include:
obtaining third division information sent by the first terminal device, where the third division information includes the division method corresponding to the layer data, the total number of images, and the image sending method;
and the obtaining the interface to be projected according to the first image and the second image includes:
splicing the first image and the second image according to the third division information to obtain the interface to be projected.
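As a rough illustration of the splicing step, assume the layer data was divided horizontally into strips; the receiver then concatenates the decoded strips row-wise in serial-number order to rebuild the full interface. This is a simplified sketch under that assumption, with invented names, not the patented implementation:

```python
def splice(strips):
    """Concatenate horizontally divided strips (each a dict with a serial
    number and a list of pixel rows) to rebuild the full interface image."""
    ordered = sorted(strips, key=lambda s: s["index"])
    full = []
    for strip in ordered:
        full.extend(strip["rows"])     # stack strip 1 above strip 2, and so on
    return full

# Two decoded strips of a 4-row frame, received out of order.
first = {"index": 1, "rows": [[1, 1], [2, 2]]}
second = {"index": 2, "rows": [[3, 3], [4, 4]]}
print(splice([second, first]))   # [[1, 1], [2, 2], [3, 3], [4, 4]]
```

The same idea extends to other division methods (for example, vertical strips would be concatenated column-wise instead).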
In a fourth aspect, an embodiment of this application provides a screen projection apparatus applied to a first terminal device. The apparatus may include:
a layer drawing module, configured to perform layer drawing on an interface to be projected when a screen projection instruction is detected, to obtain layer data corresponding to the interface to be projected;
an image synthesis module, configured to perform image synthesis according to a first part of the layer data to obtain a first image;
an encoding module, configured to perform encoding according to the first image to obtain first encoded data;
a sending module, configured to send the first encoded data to a second terminal device;
the image synthesis module is further configured to perform image synthesis according to a second part of the layer data to obtain a second image while encoding is performed according to the first image or while the first encoded data is sent to the second terminal device;
the encoding module is further configured to perform encoding according to the second image to obtain second encoded data;
the sending module is further configured to send the second encoded data to the second terminal device.
In a possible implementation, the image synthesis module is specifically configured to perform image synthesis according to the second part to obtain the second image while the encoding module performs encoding according to the first image.
In another possible implementation, the image synthesis module is specifically configured to perform image synthesis according to the second part to obtain the second image while the sending module sends the first encoded data to the second terminal device.
In another possible implementation, the encoding module is specifically configured to perform encoding according to the second image to obtain the second encoded data while the sending module sends the first encoded data to the second terminal device.
In one example, the image synthesis module is further configured to, in response to the layer data, perform image synthesis according to the first part of the layer data to obtain the first image.
In a possible implementation, the encoding module is further configured to determine first division information corresponding to the first image and perform encoding according to the first image and the first division information to obtain the first encoded data, where the first division information includes the division method corresponding to the layer data, the image serial number corresponding to the first image, and the total number of images corresponding to the layer data.
In another possible implementation, the apparatus may further include:
a division information determination module, configured to determine third division information corresponding to the layer data, where the third division information includes the division method, the total number of images, and the image sending method corresponding to the layer data;
a division information sending module, configured to send the third division information to the second terminal device.
In a fifth aspect, an embodiment of this application provides a screen projection apparatus applied to a second terminal device. The apparatus may include:
an encoded data obtaining module, configured to obtain first encoded data and second encoded data respectively sent by a first terminal device, where the first encoded data is encoded data corresponding to a first part of layer data, the second encoded data is encoded data corresponding to a second part of the layer data, and the layer data is layer data corresponding to an interface to be projected of the first terminal device;
a decoding module, configured to decode the first encoded data and the second encoded data respectively to obtain a first image and a second image;
an interface display module, configured to obtain the interface to be projected according to the first image and the second image, and to display the interface to be projected.
Exemplarily, the decoding module is specifically configured to: decode the first encoded data to obtain the first image and first division information corresponding to the first image, where the first division information includes the division method corresponding to the layer data, the image serial number corresponding to the first image, and the total number of images corresponding to the layer data;
and decode the second encoded data to obtain the second image and second division information corresponding to the second image, where the second division information includes the division method corresponding to the layer data, the image serial number corresponding to the second image, and the total number of images corresponding to the layer data.
In one example, the interface display module is specifically configured to splice the first image and the second image according to the first division information and the second division information to obtain the interface to be projected.
In another example, the apparatus may further include:
a division information obtaining module, configured to obtain third division information sent by the first terminal device, where the third division information includes the division method corresponding to the layer data, the total number of images, and the image sending method;
and the interface display module is further configured to splice the first image and the second image according to the third division information to obtain the interface to be projected.
In a sixth aspect, an embodiment of this application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the computer program, the terminal device implements the screen projection method according to any one of the second aspect above, or implements the screen projection method according to any one of the third aspect above.
In a seventh aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program. When the computer program is executed by a computer, the computer implements the screen projection method according to any one of the second aspect above, or implements the screen projection method according to any one of the third aspect above.
In an eighth aspect, an embodiment of this application provides a computer program product. When the computer program product runs on a terminal device, the terminal device executes the screen projection method according to any one of the second aspect above, or executes the screen projection method according to any one of the third aspect above.
It can be understood that, for the beneficial effects of the third to eighth aspects above, reference may be made to the relevant descriptions in the first and second aspects above; details are not repeated here.
Description of the drawings
Figure 1 is a schematic structural diagram of a terminal device to which the screen projection method provided in an embodiment of this application is applicable;
Figure 2 is a schematic diagram of a software architecture to which the screen projection method provided in an embodiment of this application is applicable;
Figure 3 is a block flow diagram of a screen projection method;
Figure 4 is a block flow diagram of a screen projection method provided in an embodiment of this application;
Figure 5 is a schematic diagram of an application scenario of the division methods provided in embodiments of this application;
Figure 6 is a block flow diagram of a screen projection method provided in another embodiment of this application;
Figure 7 is a block flow diagram of a screen projection method provided in another embodiment of this application;
Figure 8 is a schematic flowchart of a screen projection method provided in an embodiment of this application.
Detailed description of embodiments
It should be understood that, when used in the specification of this application and the appended claims, the term "include" indicates the presence of the described features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or sets thereof.
It should also be understood that the term "and/or" used in the specification of this application and the appended claims refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.
In addition, in the descriptions of the specification of this application and the appended claims, the terms "first", "second", "third", and the like are used only to distinguish descriptions and are not to be understood as indicating or implying relative importance.
Reference in this specification to "an embodiment", "some embodiments", or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of this application. Therefore, the phrases "in an embodiment", "in some embodiments", "in some other embodiments", "in other embodiments", and the like appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "include", "comprise", "have", and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
In addition, "a plurality of" mentioned in the embodiments of this application should be interpreted as two or more.
The steps involved in the screen projection method provided in the embodiments of this application are only examples; not all of the steps are mandatory, and not all of the content in each piece of information or each message is mandatory, and they may be added or omitted as needed during use. The same step, or steps or messages with the same function, in the embodiments of this application may be cross-referenced between different embodiments.
The service scenarios described in the embodiments of this application are intended to explain the technical solutions of the embodiments of this application more clearly and do not constitute a limitation on the technical solutions provided by the embodiments of this application. A person of ordinary skill in the art knows that, with the evolution of network architectures and the emergence of new service scenarios, the technical solutions provided by the embodiments of this application are equally applicable to similar technical problems.
Screen projection sharing between terminal devices has become a common function in people's daily lives. For example, content or media files (such as a gallery, music, or videos) displayed on a small-screen terminal device such as a mobile phone or a tablet computer can be projected onto a large screen such as a television or a smart screen through wireless projection to improve the viewing experience. As another example, a mobile phone can be connected to a notebook computer or a tablet computer through multi-screen collaboration, or a tablet computer can be connected to a notebook computer through multi-screen collaboration; after the connection, the content on the mobile phone can be projected to the notebook computer or the tablet computer, or the content on the tablet computer can be projected to the notebook computer for synchronous display, implementing cross-device resource sharing and collaborative operation.
The general screen projection procedure is as follows: the screen projection application at the sending end performs layer drawing and, after the drawing is completed, notifies the SurfaceFlinger component to synthesize the drawn layer data. After the synthesis is completed, the SurfaceFlinger component notifies the encoder to encode the synthesized image. After the encoding is completed, the encoder notifies the screen projection application to send the encoded data, such as a video stream, to the receiving end. When the receiving end receives the encoded data, it can decode and display it. However, in general screen projection, the layer data synthesis process, the image encoding process, and the encoded data sending process all take a long time and are executed sequentially, resulting in a large screen projection delay and degrading the user experience.
To solve the above problems, embodiments of this application provide a screen projection method, a terminal device, and a computer-readable storage medium. In this method, when a screen projection instruction is detected, a first application on a first terminal device can perform layer drawing for the interface to be projected, obtain the layer data corresponding to that interface, and send the layer data to an image synthesis module of the first terminal device. The image synthesis module can divide the layer data into at least two parts, which may include a first part and a second part. The image synthesis module can then perform image synthesis based on the first part to obtain a first image, and send the first image to an encoder of the first terminal device. The encoder can encode the first image to obtain first encoded data and send the first encoded data to the first application. After receiving the first encoded data, the first application can send it to a second terminal device through a transmission module.
While the encoder is encoding the first image, or while the first application is sending the first encoded data, the image synthesis module can continue to perform image synthesis based on the second part to obtain a second image, and continue to send the second image to the encoder. The encoder can likewise continue to encode the second image to obtain second encoded data and send the second encoded data to the first application. After receiving the second encoded data, the first application can send it to the second terminal device through the transmission module.
When the second terminal device receives the first encoded data and the second encoded data sent by the first terminal device, it can decode them to obtain the first image and the second image, stitch the two images together, and display the result.
That is, in the embodiments of this application, the layer data can be divided into at least a first part and a second part, and image synthesis, encoding, and transmission can be performed separately for each part. The synthesis, encoding, and transmission performed for the second part can run in parallel with the encoding and transmission performed for the first image corresponding to the first part. During screen projection, the synthesis performed by the image synthesis module, the encoding performed by the encoder, and the transmission performed by the first application therefore execute in parallel, which effectively reduces projection latency, improves the user's screen projection experience, and offers strong usability and practicality.
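The split-and-pipeline scheme described above can be sketched as a minimal producer/consumer simulation. This is a language-agnostic illustration and not the Android implementation: `compose`, `encode`, and the string payloads are placeholder stand-ins for the image synthesis module, the encoder, and the encoded data.

```python
import threading
from queue import Queue

# Placeholder stage functions standing in for layer composition and
# encoding; the names and string transforms are purely illustrative.
def compose(part):
    return f"img({part})"

def encode(image):
    return f"enc({image})"

def cast(parts):
    """Run composition, encoding, and sending as three concurrent
    stages, so that while part i is being encoded or sent, part i+1
    can already be composed."""
    to_encode, to_send = Queue(), Queue()
    EOS = object()  # end-of-stream marker

    def composer():
        for part in parts:
            to_encode.put(compose(part))
        to_encode.put(EOS)

    def encoder():
        while (image := to_encode.get()) is not EOS:
            to_send.put(encode(image))
        to_send.put(EOS)

    t1 = threading.Thread(target=composer)
    t2 = threading.Thread(target=encoder)
    t1.start()
    t2.start()

    # The "first application" drains the send queue in arrival order.
    sent = []
    while (data := to_send.get()) is not EOS:
        sent.append(data)
    t1.join()
    t2.join()
    return sent

print(cast(["first_part", "second_part"]))
# → ['enc(img(first_part))', 'enc(img(second_part))']
```

Because each stage communicates through a queue, order is preserved even though the three stages overlap in time, which mirrors how the second part can be composed while the first part is still being encoded or sent.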
In the embodiments of this application, the first terminal device and the second terminal device may each be any terminal device with a display screen, such as a mobile phone, tablet computer, wearable device, in-vehicle device, laptop, ultra-mobile personal computer (UMPC), netbook, personal digital assistant (PDA), or desktop computer. The embodiments of this application place no restriction on the specific type of terminal device.
The terminal device involved in the embodiments of this application is introduced first. Refer to FIG. 1, which shows a schematic structural diagram of a terminal device 100.
The terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, an antenna 1, an antenna 2, a mobile communication module 140, a wireless communication module 150, a sensor module 160, buttons 170, a display screen 180, and the like. The sensor module 160 may include a pressure sensor 160A, a gyroscope sensor 160B, a magnetic sensor 160C, an acceleration sensor 160D, a touch sensor 160E, and the like.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the terminal device 100. In other embodiments of this application, the terminal device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). The different processing units may be independent devices or may be integrated into one or more processors.
The controller can generate operation control signals based on instruction opcodes and timing signals, completing the control of instruction fetch and execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use those instructions or data again, it can fetch them directly from this memory, avoiding repeated accesses and reducing the waiting time of the processor 110, thereby improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces, such as an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple groups of I2C buses. The processor 110 may be coupled to the touch sensor 160E and other components through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 160E through an I2C interface, so that the processor 110 communicates with the touch sensor 160E through the I2C bus interface to implement the touch function of the terminal device 100.
The MIPI interface can be used to connect the processor 110 to peripheral devices such as the display screen 180. MIPI interfaces include the camera serial interface (CSI), the display serial interface (DSI), and the like. In some embodiments, the processor 110 and the display screen 180 communicate through a DSI interface to implement the display function of the terminal device 100.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 can be used to transfer data between the terminal device 100 and peripheral devices.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment are only schematic and do not constitute a structural limitation on the terminal device 100. In other embodiments of this application, the terminal device 100 may adopt interface connection methods different from those in the above embodiments, or a combination of multiple interface connection methods.
The wireless communication function of the terminal device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 140, the wireless communication module 150, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the terminal device 100 can be used to cover one or more communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization. For example, the antenna 1 can be multiplexed as a diversity antenna for a wireless local area network. In other embodiments, the antennas may be used in combination with tuning switches.
The mobile communication module 140 can provide wireless communication solutions applied on the terminal device 100, including 2G/3G/4G/5G. The mobile communication module 140 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like. The mobile communication module 140 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 140 can also amplify signals modulated by the modem processor and convert them into electromagnetic waves radiated through the antenna 1. In some embodiments, at least some of the functional modules of the mobile communication module 140 may be provided in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 140 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate a low-frequency baseband signal to be transmitted into a medium-to-high-frequency signal. The demodulator is used to demodulate a received electromagnetic wave signal into a low-frequency baseband signal, and then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor, which displays images or video through the display screen 180. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and provided in the same device as the mobile communication module 140 or other functional modules.
The wireless communication module 150 can provide wireless communication solutions applied on the terminal device 100, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network or Wi-Fi peer-to-peer (Wi-Fi p2p)), Bluetooth (BT), ultra-wide band (UWB), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR). The wireless communication module 150 may be one or more devices integrating at least one communication processing module. The wireless communication module 150 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 150 can also receive signals to be transmitted from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves radiated through the antenna 2.
In some embodiments, the antenna 1 of the terminal device 100 is coupled to the mobile communication module 140 and the antenna 2 is coupled to the wireless communication module 150, so that the terminal device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
The terminal device 100 implements its display function through the GPU, the display screen 180, the application processor, and the like. The GPU is a microprocessor for image processing that connects the display screen 180 and the application processor. The GPU performs mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 180 is used to display images, video, and the like. The display screen 180 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the terminal device 100 may include one or N display screens 180, where N is a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the terminal device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy.
The video codec is used to compress or decompress digital video. The terminal device 100 may support one or more video codecs, so that the terminal device 100 can play or record video in multiple encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transmission mode between neurons in the human brain, it quickly processes input information and can also continuously learn on its own. The NPU can implement intelligent cognition applications of the terminal device 100, such as image recognition, face recognition, speech recognition, and text understanding.
The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example saving files such as music and video on the external memory card.
The internal memory 121 can be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area can store the operating system and the applications required for at least one function (such as an image playback function). The data storage area can store data created during use of the terminal device 100, and the like. In addition, the internal memory 121 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or universal flash storage (UFS). The processor 110 executes the various functional applications and data processing of the terminal device 100 by running the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor.
The pressure sensor 160A is used to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, the pressure sensor 160A may be provided on the display screen 180. There are many types of pressure sensors 160A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates of conductive material. When a force acts on the pressure sensor 160A, the capacitance between the electrodes changes, and the terminal device 100 determines the intensity of the pressure based on the change in capacitance. When a touch operation acts on the display screen 180, the terminal device 100 detects the intensity of the touch operation through the pressure sensor 160A. The terminal device 100 can also calculate the touch position based on the detection signal of the pressure sensor 160A. In some embodiments, touch operations that act on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
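The threshold-based dispatch in that example can be sketched as follows; the threshold value, icon name, and instruction names are all hypothetical placeholders, not values from the embodiment:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # hypothetical normalized intensity threshold

def dispatch_touch(icon, intensity):
    """Map a touch on the short-message icon to an operation
    instruction based on pressure intensity, as described above."""
    if icon != "sms":
        return "ignore"           # other icons: out of scope here
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_message"     # light press: view the message
    return "new_message"          # firm press: create a new message

print(dispatch_touch("sms", 0.3))  # → view_message
print(dispatch_touch("sms", 0.8))  # → new_message
```

The same touch position thus yields different instructions purely from the measured intensity, which is the behavior the pressure sensor 160A enables.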
The gyroscope sensor 160B can be used to determine the motion posture of the terminal device 100. In some embodiments, the angular velocities of the terminal device 100 around three axes (i.e., the x, y, and z axes) can be determined through the gyroscope sensor 160B. The gyroscope sensor 160B can be used for navigation and somatosensory gaming scenarios.
The magnetic sensor 160C includes a Hall effect sensor. The terminal device 100 can use the magnetic sensor 160C to detect the opening and closing of a flip holster. In some embodiments, when the terminal device 100 is a flip phone, the terminal device 100 can detect the opening and closing of the flip cover through the magnetic sensor 160C, and then set features such as automatic unlocking on flip-open based on the detected open/closed state of the holster or flip cover.
The acceleration sensor 160D can detect the magnitude of the acceleration of the terminal device 100 in various directions (generally along three axes). When the terminal device 100 is stationary, it can detect the magnitude and direction of gravity. It can also be used to identify the posture of the terminal device, and is applied to landscape/portrait switching, pedometers, and other applications.
The touch sensor 160E is also called a "touch device". The touch sensor 160E can be provided on the display screen 180; the touch sensor 160E and the display screen 180 form a touchscreen, also called a "touch panel". The touch sensor 160E is used to detect touch operations acting on or near it. The touch sensor 160E can pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation can be provided through the display screen 180. In other embodiments, the touch sensor 160E may also be provided on the surface of the terminal device 100 at a position different from that of the display screen 180.
The buttons 170 include a power button, volume buttons, and the like. The buttons 170 may be mechanical buttons or touch-sensitive buttons. The terminal device 100 can receive button input and generate key signal input related to user settings and function control of the terminal device 100.
The software system of the terminal device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of this application take the Android system with a layered architecture as an example to illustrate the software structure of the terminal device 100.
FIG. 2 is a block diagram of the software structure of the terminal device 100 according to an embodiment of this application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in FIG. 2, the application packages may include applications such as Camera, Gallery, Calendar, Phone, Maps, Navigation, WLAN, Bluetooth, Music, Video, and Messages.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, content providers, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the display screen size, determine whether there is a status bar, lock the screen, capture the screen, and so on.
Content providers are used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, the phone book, and so on.
The view system includes visual controls, such as controls that display text and controls that display pictures. The view system can be used to build applications. A display interface may be composed of one or more views. For example, a display interface that includes a short message notification icon may include a view that displays text and a view that displays a picture.
The phone manager is used to provide the communication functions of the terminal device 100, for example call state management (including connecting, hanging up, and so on).
The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar. It can be used to convey informational messages that disappear automatically after a short stay without user interaction, for example to notify that a download is complete or to provide message reminders. The notification manager can also present notifications that appear in the status bar at the top of the system in the form of a chart or scrolling text, such as notifications from applications running in the background, as well as notifications that appear on the screen in the form of dialog windows. For example, text information may be shown in the status bar, a prompt tone may sound, the terminal device may vibrate, or an indicator light may flash.
The Android runtime includes the core libraries and the virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。The core library contains two parts: one is the functional functions that need to be called by the Java language, and the other is the core library of Android.
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java 文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。The application layer and application framework layer run in virtual machines. The virtual machine combines the Java application layer and the application framework layer The file executes as a binary file. The virtual machine is used to perform object life cycle management, stack management, thread management, security and exception management, and garbage collection and other functions.
***库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。System libraries can include multiple functional modules. For example: surface manager (surface manager), media libraries (Media Libraries), 3D graphics processing libraries (for example: OpenGL ES), 2D graphics engines (for example: SGL), etc.
表面管理器用于对显示子***进行管理,并且为多个应用程序提供了2D和3D图层的融合。The surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,H.265,MP3,AAC,AMR,JPG,PNG等。The media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc. The media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, H.265, MP3, AAC, AMR, JPG, PNG, etc.
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。The 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition, and layer processing.
2D图形引擎是2D绘图的绘图引擎。2D Graphics Engine is a drawing engine for 2D drawing.
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。The kernel layer is the layer between hardware and software. The kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
下面以Android***为例对投屏方法的一般流程进行示例性说明。请参阅图3,图3示出了一种投屏方法的流程框图。该投屏方法可以应用于第一终端设备,以将第一终端设备中的待投屏界面投屏至第二终端设备进行同步显示。其中,待投屏界面可以是指第一终端设备的第一应用对应的界面,该界面可以是第一终端设备中正在显示的界面,也可以是即将在第一终端设备中进行显示的界面。The following takes the Android system as an example to illustrate the general process of the screencasting method. Please refer to Figure 3, which shows a flow chart of a screen casting method. The screen projection method can be applied to the first terminal device to project the interface to be projected in the first terminal device to the second terminal device for synchronous display. The interface to be projected may refer to an interface corresponding to the first application of the first terminal device. The interface may be an interface currently being displayed on the first terminal device, or may be an interface that is about to be displayed on the first terminal device.
如图3所示,在检测到投屏指令时,第一终端设备的第一应用可以对待投屏界面进行图层绘制,得到待投屏界面对应的图层数据。其中,投屏指令用于指示将第一应用对应的待投屏界面投屏至第二终端设备进行同步显示。随后,第一终端设备的图像合成模块(例如第一终端设备的SurfaceFlinger组件等)可以获取图层数据,并对图层数据进行图像合成,得到图像A(即发送给第二终端设备进行同步显示的图像)和图像B(即第一终端设备自己进行显示的图像)。在得到图像A时,SurfaceFlinger组件可以将图像A发送给第一终端设备的编码器。编码器可以对图像A进行编码,得到视频流等编码数据,例如得到H.264或H.265等格式的视频流。然后,第一应用可以获取编码器编码的视频流等编码数据,并通过第一终端设备的传输模块将视频流等编码数据发送给第二终端设备。第二终端设备可以接收第一终端设备发送的视频流等编码数据,对视频流等编码数据进行解码,得到解码后的图像,并对解码后的图像进行显示。As shown in Figure 3, upon detecting a screen projection instruction, the first application of the first terminal device can perform layer drawing for the interface to be projected to obtain the corresponding layer data. The screen projection instruction is used to instruct that the interface corresponding to the first application be projected to the second terminal device for synchronous display. Subsequently, the image synthesis module of the first terminal device (for example, its SurfaceFlinger component) can obtain the layer data and perform image synthesis on it to obtain image A (the image sent to the second terminal device for synchronous display) and image B (the image displayed by the first terminal device itself). When image A is obtained, the SurfaceFlinger component can send it to the encoder of the first terminal device. The encoder can encode image A to obtain encoded data such as a video stream, for example a video stream in H.264 or H.265 format. The first application can then obtain the encoded data produced by the encoder and send it to the second terminal device through the transmission module of the first terminal device. The second terminal device can receive the encoded data sent by the first terminal device, decode it to obtain the decoded image, and display that image.
由图3可知,虽然SurfaceFlinger组件可以先对图层数据进行合成,得到图像A,然后再对图层数据进行合成,得到图像B,以减少合成图像A所需的等待时间,但SurfaceFlinger组件执行的合成过程(即合成图像A的过程)、编码器执行的编码过程(即对图像A进行编码的过程)和第一应用执行的发送过程(即将图像A对应的编码数据发送给第二终端设备的过程)一般是顺序执行的。即先由SurfaceFlinger组件对所有图层数据进行合成,得到完整的图像A。然后,再由编码器对完整的图像A进行编码,得到视频流等编码数据。最后,由第一应用将视频流等编码数据发送给第二终端设备。As can be seen from Figure 3, although the SurfaceFlinger component can first synthesize the layer data to obtain image A and then synthesize it again to obtain image B, thereby reducing the waiting time required to synthesize image A, the synthesis process performed by the SurfaceFlinger component (synthesizing image A), the encoding process performed by the encoder (encoding image A), and the sending process performed by the first application (sending the encoded data corresponding to image A to the second terminal device) are generally executed sequentially. That is, the SurfaceFlinger component first synthesizes all layer data to obtain the complete image A; the encoder then encodes the complete image A to obtain encoded data such as a video stream; finally, the first application sends the encoded data to the second terminal device.
其中,SurfaceFlinger组件合成得到图像A的时间、编码器对图像A进行编码的时间以及第一应用对视频流等编码数据进行发送的时间都比较长,例如SurfaceFlinger组件合成得到图像A的时间可以达到8ms,编码器对图像A进行编码的时间可以达到10ms,第一应用对图像A对应的视频流等编码数据进行发送的时间可以达到10ms,因此,在SurfaceFlinger组件执行的合成过程、编码器执行的编码过程和第一应用执行的发送过程顺序执行时,会导致投屏的时延较大,影响用户体验。The time for the SurfaceFlinger component to synthesize image A, the time for the encoder to encode image A, and the time for the first application to send the encoded data such as the video stream are all relatively long. For example, synthesizing image A can take up to 8 ms, encoding image A up to 10 ms, and sending the corresponding encoded data up to 10 ms. Therefore, when the synthesis process performed by the SurfaceFlinger component, the encoding process performed by the encoder, and the sending process performed by the first application are executed sequentially, the screen projection latency becomes large and the user experience suffers.
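The sequential timing described above can be sketched with a small calculation. The stage durations (8 ms compose, 10 ms encode, 10 ms send) are the illustrative figures from this description, not measurements:

```python
# Sequential screen-casting pipeline: compose, encode, and send run back to
# back, so the per-frame latency is simply the sum of the three stage times.
# The durations below are the illustrative values from the description above.
COMPOSE_MS = 8    # SurfaceFlinger synthesizes image A
ENCODE_MS = 10    # encoder turns image A into a video stream
SEND_MS = 10      # first application transmits the encoded data

def sequential_latency(compose_ms, encode_ms, send_ms):
    """Per-frame latency when the three stages execute one after another."""
    return compose_ms + encode_ms + send_ms

print(sequential_latency(COMPOSE_MS, ENCODE_MS, SEND_MS))  # prints 28
```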
另外,为避免造成第一终端设备的显示异常,SurfaceFlinger组件进行图像合成的周期需与第一终端设备的刷新频率保持一样,即SurfaceFlinger组件在对图层数据进行合成时,还需要等待硬件合成模块(Hardware Composer,HWC)触发的Vsync信号,即SurfaceFlinger组件要在接收到Vsync信号时,才开始对图层数据进行合成,并在一个Vsync信号周期内合成图像A和图像B。而HWC一般是周期性触发Vsync信号,如图3所示,在第一终端设备的刷新频率为60Hz时,Vsync信号的触发周期一般为16ms。也就是说,HWC一般间隔16ms触发一次Vsync信号,以通过Vsync信号通知SurfaceFlinger组件执行合成过程,导致SurfaceFlinger组件在对图层数据进行合成时,需要等待较长的时间(例如最长的等待时间可以达到16ms),导致投屏的时延较大,影响用户体验。In addition, to avoid display anomalies on the first terminal device, the image synthesis cycle of the SurfaceFlinger component needs to match the refresh rate of the first terminal device. That is, when synthesizing layer data, the SurfaceFlinger component must also wait for the Vsync signal triggered by the hardware composer (Hardware Composer, HWC): it starts synthesizing the layer data only upon receiving a Vsync signal, and synthesizes image A and image B within one Vsync period. The HWC generally triggers the Vsync signal periodically. As shown in Figure 3, when the refresh rate of the first terminal device is 60 Hz, the Vsync period is generally 16 ms. In other words, the HWC triggers a Vsync signal roughly every 16 ms to notify the SurfaceFlinger component to perform the synthesis process, so the SurfaceFlinger component may need to wait a long time before it can synthesize the layer data (the longest wait can reach 16 ms), which increases the screen projection latency and degrades the user experience.
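The worst-case composition wait equals one Vsync interval, which follows directly from the refresh rate. A minimal sketch (the 60 Hz figure is the example used above):

```python
def vsync_period_ms(refresh_hz):
    """Vsync interval implied by the panel refresh rate, in milliseconds."""
    return 1000.0 / refresh_hz

# At 60 Hz the HWC fires Vsync roughly every 16.7 ms; SurfaceFlinger may
# therefore idle for up to one full period before it can start composing.
worst_case_wait_ms = vsync_period_ms(60)
```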
由上述描述可知,一般投屏流程的时延较大,无法满足用户的投屏需求,造成用户的投屏体验较差。基于此,本申请实施例提供了一种投屏方法,以有效降低投屏时延,提升用户体验。It can be seen from the above description that the general screencasting process has a large delay and cannot meet the user's screencasting needs, resulting in a poor screencasting experience for the user. Based on this, embodiments of the present application provide a screen casting method to effectively reduce screen casting delay and improve user experience.
下面结合附图和具体应用场景对本申请实施例提供的投屏方法进行详细说明。The screen projection method provided by the embodiment of the present application will be described in detail below with reference to the accompanying drawings and specific application scenarios.
请参阅图4,图4示出了本申请一实施例提供的投屏方法的流程框图。其中,该投屏方法可以应用于第一终端设备,以将第一终端设备中的待投屏界面投屏至第二终端设备进行同步显示。Please refer to Figure 4, which shows a flow chart of a screen projection method provided by an embodiment of this application. The screen projection method can be applied to the first terminal device to project the interface to be projected in the first terminal device to the second terminal device for synchronous display.
如图4所示,在检测到投屏指令时,第一终端设备的第一应用可以对待投屏界面进行图层绘制,得到待投屏界面对应的图层数据,并可以向第一终端设备的图像合成模块发送图层数据。图像合成模块可以将图层数据划分为至少两个部分,该至少两个部分可以包括第一部分和第二部分。随后,图像合成模块可以根据第一部分进行图像合成,得到第一图像,并向第一终端设备的编码器发送第一图像。编码器可以根据第一图像进行编码,得到第一编码数据,并向第一应用发送第一编码数据。第一应用接收到第一编码数据后,可以通过传输模块向第二终端设备发送第一编码数据。As shown in Figure 4, upon detecting a screen projection instruction, the first application of the first terminal device can perform layer drawing for the interface to be projected, obtain the corresponding layer data, and send the layer data to the image synthesis module of the first terminal device. The image synthesis module can divide the layer data into at least two parts, which may include a first part and a second part. The image synthesis module can then perform image synthesis based on the first part to obtain a first image and send the first image to the encoder of the first terminal device. The encoder can encode the first image to obtain first encoded data and send it to the first application. After receiving the first encoded data, the first application can send it to the second terminal device through the transmission module.
其中,在编码器根据第一图像进行编码和/或在第一应用对第一编码数据进行发送时,图像合成模块还可以继续根据第二部分进行图像合成,得到第二图像,并可以继续向编码器发送第二图像。编码器接收到第二图像后,可以继续根据第二图像进行编码,得到第二编码数据,并向第一应用发送第二编码数据。第一应用接收到第二编码数据后,可以通过传输模块向第二终端设备发送第二编码数据。While the encoder is encoding the first image and/or the first application is sending the first encoded data, the image synthesis module can continue to perform image synthesis based on the second part to obtain a second image and send it to the encoder. After receiving the second image, the encoder can encode it to obtain second encoded data and send that data to the first application. After receiving the second encoded data, the first application can send it to the second terminal device through the transmission module.
第二终端设备接收到第一终端设备发送的第一编码数据和第二编码数据时,可以对第一编码数据和第二编码数据进行解码,得到第一图像和第二图像,并对第一图像和第二图像进行拼接后显示。Upon receiving the first encoded data and the second encoded data sent by the first terminal device, the second terminal device can decode them to obtain the first image and the second image, splice the two images together, and display the result.
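The receiver-side splicing step can be illustrated with a toy example. Assuming a horizontal division, each decoded image is a list of pixel rows and the second image is simply appended below the first (the function name is illustrative, not from the application):

```python
def splice_rows(first_image, second_image):
    """Recombine two decoded halves of a horizontally divided frame.

    Each argument is a list of pixel rows; the first image is the upper
    half and the second the lower half, so concatenation restores the frame.
    """
    return first_image + second_image

upper = [[10, 10], [20, 20]]   # decoded first image
lower = [[30, 30], [40, 40]]   # decoded second image
frame = splice_rows(upper, lower)
```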
示例性的,图像合成模块根据第一部分进行图像合成,得到第一图像,可以是直接对第一部分的所有图层数据进行图像合成,得到第一图像。或者,可以是对第一部分中的部分图层数据进行图像合成,得到第一图像,例如,可以根据当前帧的第一部分和前一帧的第一部分,确定区别部位,并对区别部位对应的图层数据进行图像合成,得到第一图像。其中,当前帧是指当前待发送给第二终端设备的待投屏界面,前一帧是指在发送当前帧之前,发送给第二终端设备的待投屏界面。因此,第二终端设备解码得到当前帧对应的第一图像时,可以根据前一帧对应的第一图像进行还原处理,得到完整的第一图像。类似的,图像合成模块根据第二部分进行图像合成,得到第二图像,可以是直接对第二部分的所有图层数据进行图像合成,得到第二图像;或者,可以是对第二部分中的部分图层数据进行图像合成,得到第二图像。Exemplarily, when the image synthesis module performs image synthesis based on the first part to obtain the first image, it may directly synthesize all of the layer data in the first part. Alternatively, it may synthesize only some of the layer data in the first part: for example, it may determine the regions that differ between the first part of the current frame and the first part of the previous frame, and synthesize only the layer data corresponding to those regions. Here, the current frame refers to the to-be-projected interface that is about to be sent to the second terminal device, and the previous frame refers to the to-be-projected interface sent to the second terminal device before the current frame. Accordingly, when the second terminal device decodes the first image corresponding to the current frame, it can perform restoration based on the first image corresponding to the previous frame to obtain the complete first image. Similarly, when the image synthesis module obtains the second image based on the second part, it may synthesize all of the layer data in the second part, or only some of it.
示例性的,编码器根据第一图像进行编码,得到第一编码数据,可以是直接对第一图像的所有数据进行编码,得到第一编码数据。或者,可以是对第一图像中的部分数据进行编码,得到第一编码数据,例如,可以根据当前帧对应的第一图像与前一帧对应的第一图像,确定当前帧对应的第一图像与前一帧对应的第一图像之间的区别部分,并对区别部分对应的数据进行编码,得到第一编码数据。因此,第二终端设备得到当前帧对应的第一编码数据(即区别部分对应的第一编码数据)时,可以结合前一帧对应的第一编码数据来进行解码,得到完整的第一编码数据。类似的,编码器根据第二图像进行编码,得到第二编码数据,可以是直接对第二图像的所有数据进行编码,得到第二编码数据;或者,可以是对第二图像中的部分数据进行编码,得到第二编码数据。Exemplarily, when the encoder encodes based on the first image to obtain the first encoded data, it may directly encode all of the data of the first image. Alternatively, it may encode only part of the data of the first image: for example, it may determine the difference between the first image corresponding to the current frame and the first image corresponding to the previous frame, and encode only the data corresponding to that difference to obtain the first encoded data. Accordingly, when the second terminal device obtains the first encoded data of the current frame (that is, the first encoded data corresponding to the difference region), it can decode it in combination with the first encoded data corresponding to the previous frame to obtain the complete first encoded data. Similarly, when the encoder encodes based on the second image to obtain the second encoded data, it may directly encode all of the data of the second image, or encode only part of it.
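The "encode only the difference" idea can be sketched as follows. The snippet finds the span of rows that changed between the previous and current first image; only those rows would need to be encoded (a simplified illustration, not the application's actual algorithm):

```python
def changed_row_span(prev_rows, curr_rows):
    """Return the half-open (start, end) row range that differs between two
    equally sized frames, or None when the frames are identical.

    Rows outside this range are unchanged and need not be re-encoded."""
    changed = [i for i, (p, c) in enumerate(zip(prev_rows, curr_rows)) if p != c]
    if not changed:
        return None
    return (changed[0], changed[-1] + 1)

prev = [[0, 0], [1, 1], [2, 2]]
curr = [[0, 0], [9, 9], [2, 2]]      # only the middle row changed
span = changed_row_span(prev, curr)  # (1, 2): encode row 1 only
```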
需要说明的是,电子设备结合前一帧对应的第一图像来对当前帧对应的第一图像进行还原的方式和结合前一帧对应的第一编码数据来对当前帧对应的第一编码数据进行解码,得到完整的第一编码数据的方式可以由技术人员根据实际场景具体确定,本申请实施例对此不作任何限制。It should be noted that the electronic device combines the first image corresponding to the previous frame to restore the first image corresponding to the current frame and combines the first encoded data corresponding to the previous frame to restore the first encoded data corresponding to the current frame. The method of decoding to obtain the complete first encoded data can be specifically determined by technicians based on actual scenarios, and the embodiments of the present application do not impose any restrictions on this.
本申请实施例中,可以将图层数据至少划分为第一部分和第二部分,并可以分别根据第一部分和第二部分进行图像合成、编码和发送等处理,而且根据第二部分进行的图像合成、编码、发送等处理过程与根据第一部分对应的第一图像进行的编码、发送等处理过程可以并行执行,以实现投屏时,图像合成模块执行的合成过程、编码器执行的编码过程和第一应用执行的发送过程三者的并行执行,从而可以有效降低投屏的时延,提升用户的投屏体验。In the embodiments of this application, the layer data can be divided into at least a first part and a second part, and image synthesis, encoding, and sending can be performed separately for each part. Moreover, the synthesis, encoding, and sending performed for the second part can run in parallel with the encoding and sending performed for the first image corresponding to the first part. In this way, during screen projection the synthesis process performed by the image synthesis module, the encoding process performed by the encoder, and the sending process performed by the first application can all execute in parallel, which effectively reduces the screen projection latency and improves the user's screen projection experience.
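The parallelism described here is a classic software pipeline: while chunk 1 is being encoded, chunk 2 is being composed, and so on. A minimal scheduler sketch (the timing values are the illustrative ones used in this description):

```python
def pipeline_finish_ms(chunks):
    """Finish time of a three-stage compose/encode/send pipeline.

    chunks is a list of (compose, encode, send) durations in ms. Each stage
    processes one chunk at a time; a chunk enters a stage only after the
    previous stage has produced it and the stage itself is free.
    """
    compose_free = encode_free = send_free = 0
    for compose, encode, send in chunks:
        compose_free += compose                           # compose is serial
        encode_free = max(compose_free, encode_free) + encode
        send_free = max(encode_free, send_free) + send
    return send_free

# Two equal halves versus one whole frame (values from the description):
split = pipeline_finish_ms([(4, 5, 5), (4, 5, 5)])   # 19 ms
whole = pipeline_finish_ms([(8, 10, 10)])            # 28 ms
```

The overlap of stages is what buys the latency reduction: with the split, encoding of part 1 happens while part 2 is still being composed.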
其中,上述所述的投屏指令可以用于指示第一终端设备将第一应用对应的待投屏界面投屏至第二终端设备进行同步显示。第一应用可以为第一终端设备中的任一应用,即第一终端设备可以将任一应用的界面投屏至第二终端设备进行同步显示。示例性的,投屏指令可以由用户触发生成,也可以由第一终端设备默认生成。Wherein, the above-mentioned screen projection instruction can be used to instruct the first terminal device to project the screen interface to be projected corresponding to the first application to the second terminal device for synchronous display. The first application can be any application in the first terminal device, that is, the first terminal device can project the interface of any application to the second terminal device for synchronous display. For example, the screen projection instruction can be generated by triggering by the user, or can be generated by default by the first terminal device.
例如,当用户需要将第一终端设备当前显示的界面投屏至第二终端设备进行显示时,用户可以触摸第一终端设备中的投屏按钮。第一终端设备检测到投屏按钮被触摸时,可以生成投屏指令,以指示第一终端设备进行投屏操作。For example, when the user needs to project the interface currently displayed on the first terminal device to the second terminal device for display, the user can touch the projection button in the first terminal device. When the first terminal device detects that the screen casting button is touched, it may generate a screen casting instruction to instruct the first terminal device to perform a screen casting operation.
例如,当用户需要将第一终端设备当前显示的界面投屏至第二终端设备进行显示时,用户可以将第一终端设备的第一预设区域触碰第二终端设备的第二预设区域。第一终端设备检测到该触碰的操作时,可以生成投屏指令,以指示第一终端设备进行投屏操作。应理解,第一预设区域和第二预设区域可以根据实际情况具体设置,例如,可以将第一预设区域设置为第一终端设备中NFC芯片所对应的区域,可以将第二预设区域设置为第二终端设备中NFC芯片所对应的区域。For example, when the user needs to project the interface currently displayed on the first terminal device to the second terminal device for display, the user can bring a first preset area of the first terminal device into contact with a second preset area of the second terminal device. When the first terminal device detects this touch operation, it can generate a screen projection instruction to instruct the first terminal device to perform the screen projection operation. It should be understood that the first preset area and the second preset area can be set according to the actual situation; for example, the first preset area can be set to the area corresponding to the NFC chip in the first terminal device, and the second preset area to the area corresponding to the NFC chip in the second terminal device.
例如,用户可以在第一终端设备设置一个向第二终端设备自动投屏的时间(如可以将该时间设置为当天的21:00)。当到达该时间时,第一终端设备可以主动生成投屏指令,以指示第一终端设备执行投屏操作。For example, the user can set a time on the first terminal device to automatically project the screen to the second terminal device (for example, the time can be set to 21:00 of the day). When the time is reached, the first terminal device may actively generate a screen casting instruction to instruct the first terminal device to perform a screen casting operation.
示例性的,传输模块可以为有线通信模块,或者可以为移动通信模块,或者可以为无线通信模块。即第一应用可以通过USB等有线通信方式,或者可以通过2G/3G/4G/5G/6G等移动通信方式,或者可以通过蓝牙、Wi-Fi、Wi-Fi p2p、UWB等无线通信方式,将第一编码数据和第二编码数据发送给第二终端设备。Exemplarily, the transmission module may be a wired communication module, a mobile communication module, or a wireless communication module. That is, the first application can send the first encoded data and the second encoded data to the second terminal device through a wired communication method such as USB, a mobile communication method such as 2G/3G/4G/5G/6G, or a wireless communication method such as Bluetooth, Wi-Fi, Wi-Fi p2p, or UWB.
示例性的,图像合成模块可以为第一终端设备的SurfaceFlinger组件。以下将以图像合成模块为SurfaceFlinger组件为例进行示例性说明。For example, the image synthesis module may be the SurfaceFlinger component of the first terminal device. The following takes the image synthesis module as the SurfaceFlinger component as an example for illustrative explanation.
示例性的,图层数据是指待投屏界面具有的一个或多个图层对应的数据。将图层数据划分为第一部分和第二部分是指对每一个图层对应的数据进行划分,以将每一个图层对应的数据划分成数据A和数据B,并将所有图层对应的数据A统一确定为第一部分,将所有图层对应的数据B统一确定为第二部分,从而将待投屏界面分割成第一区域(即第一部分对应的图像区域)和第二区域(即第二部分对应的图像区域)。其中,第一部分与第二部分不重叠,即第一区域与第二区域不重叠。Exemplarily, the layer data refers to the data corresponding to the one or more layers of the interface to be projected. Dividing the layer data into a first part and a second part means dividing the data corresponding to each layer into data A and data B, collectively treating the data A of all layers as the first part and the data B of all layers as the second part, thereby splitting the interface to be projected into a first region (the image region corresponding to the first part) and a second region (the image region corresponding to the second part). The first part and the second part do not overlap; that is, the first region and the second region do not overlap.
本申请实施例中,可以采用任一划分方式来对图层数据进行划分,具体的划分方式可以由技术人员根据实际场景具体设置。可选的,划分方式可以是等分划分,也可以是非等分划分。等分划分是指第一部分对应的第一区域的图像大小与第二部分对应的第二区域的图像大小相同。非等分划分是指第一部分对应的第一区域的图像大小与第二部分对应的第二区域的图像大小不相同。例如,可以是第一区域的图像大小大于第二区域的图像大小,或者可以是第二区域的图像大小大于第一区域的图像大小。以下将以划分方式为等分划分为例进行示例性说明。In the embodiments of this application, any division method can be used to divide the layer data, and the specific method can be set by technicians according to the actual scenario. Optionally, the division may be an equal division or an unequal division. An equal division means that the image size of the first region corresponding to the first part is the same as that of the second region corresponding to the second part. An unequal division means that the two image sizes differ; for example, the image size of the first region may be larger than that of the second region, or vice versa. The following description takes equal division as an example.
示例性的,请参阅图5,图5示出了本申请实施例提供的划分方式的应用场景示意图。该应用场景中,第一终端设备可以为手机,待投屏界面可以为相册界面。相册界面中可以包括多张图像以及相关的操作按钮(例如照片、相册、时刻和发现)。该应用场景通过对图层数据进行划分,得到的第一部分和第二部分在待投屏界面中所对应的区域来对划分方式进行示例性说明。For example, please refer to FIG. 5 , which shows a schematic diagram of the application scenario of the division method provided by the embodiment of the present application. In this application scenario, the first terminal device can be a mobile phone, and the interface to be projected can be a photo album interface. The album interface can include multiple images and related action buttons (such as photos, albums, moments, and discoveries). This application scenario exemplifies the division method by dividing the layer data and obtaining the corresponding areas of the first part and the second part in the interface to be projected.
如图5中的(a)所示,SurfaceFlinger组件可以通过横向划分方式对图层数据进行划分,即可以将每一个图层对应的数据划分为上侧对应的数据A和下侧对应的数据B,以此将待投屏界面分割成上侧的第一区域501(即虚线上侧的区域)和下侧的第二区域502(即虚线下侧的区域)。As shown in (a) of Figure 5, the SurfaceFlinger component can divide the layer data horizontally; that is, the data corresponding to each layer can be divided into data A for the upper side and data B for the lower side, thereby splitting the interface to be projected into a first area 501 on the upper side (the area above the dotted line) and a second area 502 on the lower side (the area below the dotted line).
如图5中的(b)所示,SurfaceFlinger组件可以通过纵向划分方式对图层数据进行划分,即可以将每一个图层对应的数据划分为左侧对应的数据A和右侧对应的数据B,以此将待投屏界面分割成左侧的第一区域501(即虚线左侧的区域)和右侧的第二区域502(即虚线右侧的区域)。As shown in (b) of Figure 5, the SurfaceFlinger component can divide the layer data vertically; that is, the data corresponding to each layer can be divided into data A for the left side and data B for the right side, thereby splitting the interface to be projected into a first area 501 on the left (the area to the left of the dotted line) and a second area 502 on the right (the area to the right of the dotted line).
如图5中的(c)所示,SurfaceFlinger组件可以通过对角划分方式对图层数据进行划分,即可以将每一个图层对应的数据划分为左上侧对应的数据A和右下侧对应的数据B,以此将待投屏界面分割成左上侧的第一区域501(即虚线左上侧的区域)和右下侧的第二区域502(即虚线右下侧的区域)。As shown in (c) of Figure 5, the SurfaceFlinger component can divide the layer data diagonally; that is, the data corresponding to each layer can be divided into data A for the upper-left side and data B for the lower-right side, thereby splitting the interface to be projected into a first area 501 on the upper left (the area to the upper left of the dotted line) and a second area 502 on the lower right (the area to the lower right of the dotted line).
如图5中的(d)所示,SurfaceFlinger组件可以通过对角划分方式对图层数据进行划分,即可以将每一个图层对应的数据划分为左下侧对应的数据A和右上侧对应的数据B,以此将待投屏界面分割成左下侧的第一区域501(即虚线左下侧的区域)和右上侧的第二区域502(即虚线右上侧的区域)。As shown in (d) of Figure 5, the SurfaceFlinger component can divide the layer data diagonally the other way; that is, the data corresponding to each layer can be divided into data A for the lower-left side and data B for the upper-right side, thereby splitting the interface to be projected into a first area 501 on the lower left (the area to the lower left of the dotted line) and a second area 502 on the upper right (the area to the upper right of the dotted line).
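Treating a frame as a list of pixel rows, the horizontal and vertical division modes of Figure 5 can be sketched as simple slicing (an equal division is assumed; the function names are illustrative, not from the application):

```python
def split_horizontal(rows):
    """Figure 5(a) style: upper half and lower half of the frame."""
    mid = len(rows) // 2
    return rows[:mid], rows[mid:]

def split_vertical(rows):
    """Figure 5(b) style: left half and right half of the frame."""
    mid = len(rows[0]) // 2
    return [r[:mid] for r in rows], [r[mid:] for r in rows]

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
upper, lower = split_horizontal(frame)   # ([[1,2,3,4]], [[5,6,7,8]])
left, right = split_vertical(frame)      # ([[1,2],[5,6]], [[3,4],[7,8]])
```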
应理解,在以等分划分的方式对图层数据进行划分,得到第一部分和第二部分时,如图4所示,SurfaceFlinger组件根据第一部分进行合成所需的时间和根据第二部分进行合成所需的时间均可以为4ms,编码器根据第一图像进行编码所需的时间和根据第二图像进行编码所需的时间均可以为5ms,第一应用向第二终端设备发送第一编码数据所需的时间和向第二终端设备发送第二编码数据所需的时间均可以为5ms。It should be understood that when the layer data is divided equally to obtain the first part and the second part, as shown in Figure 4, the time required for the SurfaceFlinger component to synthesize based on the first part and the time required to synthesize based on the second part can each be 4 ms; the time required for the encoder to encode the first image and the time required to encode the second image can each be 5 ms; and the time required for the first application to send the first encoded data to the second terminal device and the time required to send the second encoded data can each be 5 ms.
由图4可知,SurfaceFlinger组件根据第二部分进行合成的过程与编码器根据第一图像进行编码的过程可以并行执行,另外,编码器根据第二图像进行编码的过程与第一应用通过传输模块向第二终端设备发送第一编码数据的过程也可以并行执行,即本申请实施例,从进行图层数据的合成到完成所有编码数据的发送这一过程,所需的总时间T1为(4+5+5+5)ms。而图3所示的投屏方法完成这一过程,所需的总时间T0为(8+10+10)ms。显然,T1<T0,因此,本申请实施例提供的投屏方法所需的时间比图3所示的投屏方法所需的时间少,例如可以比图3所示的投屏方法所需的时间少9ms,也就是说,本申请实施例提供的投屏方法可以有效降低投屏的时延,提升用户的投屏体验。As can be seen from Figure 4, the process in which the SurfaceFlinger component synthesizes based on the second part can run in parallel with the process in which the encoder encodes the first image; likewise, the process in which the encoder encodes the second image can run in parallel with the process in which the first application sends the first encoded data to the second terminal device through the transmission module. That is, in the embodiments of this application, the total time T1 required from starting the synthesis of the layer data to completing the sending of all encoded data is (4+5+5+5) ms, whereas the screen projection method shown in Figure 3 requires a total time T0 of (8+10+10) ms for the same process. Clearly T1 < T0, so the screen projection method provided by the embodiments of this application takes less time than the method shown in Figure 3, for example 9 ms less. In other words, the screen projection method provided by the embodiments of this application can effectively reduce the screen projection latency and improve the user's screen projection experience.
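The latency comparison above is plain arithmetic; spelling it out with the illustrative figures from the text:

```python
# Figure 3 path: compose, encode, and send the whole frame sequentially.
T0 = 8 + 10 + 10      # 28 ms

# Figure 4 path: the critical path is compose part 1, encode part 1,
# encode part 2 (overlapped with sending part 1), then send part 2.
T1 = 4 + 5 + 5 + 5    # 19 ms

saving = T0 - T1      # 9 ms shorter per frame, matching the text above
```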
需要说明的是,上述所述的SurfaceFlinger组件将图层数据划分为第一部分和第二部分两个部分仅作示例性解释,不应理解为对本申请实施例的限制,本申请实施例中,SurfaceFlinger组件也可以将图层数据划分为三个或三个以上的部分。It should be noted that the above description, in which the SurfaceFlinger component divides the layer data into two parts (a first part and a second part), is only an illustrative explanation and should not be understood as limiting the embodiments of this application. In the embodiments of this application, the SurfaceFlinger component may also divide the layer data into three or more parts.
应理解,图层数据对应的划分数量可以由技术人员根据实际场景具体设置,本申请实施例对此不作任何限制。示例性的,技术人员可以根据第一终端设备的中央处理器(Central Processing Unit,CPU)性能和/或调度效率等来设置图层数据对应的划分数量。It should be understood that the number of divisions corresponding to the layer data can be specifically set by technicians according to actual scenarios, and the embodiments of the present application do not impose any restrictions on this. For example, technicians can set the number of divisions corresponding to the layer data based on the central processing unit (Central Processing Unit, CPU) performance and/or scheduling efficiency of the first terminal device.
例如,CPU性能较好的第一终端设备,数据处理能力和传输能力等相对较强,CPU性能较差的第一终端设备,数据处理能力和传输能力等相对较差。因此,对于CPU性能较好的第一终端设备,可以设置较多的划分数量;对于CPU性能较差的第一终端设备,可以设置较少的划分数量。For example, a first terminal device with better CPU performance has relatively strong data processing and transmission capabilities, while one with poorer CPU performance has relatively weak capabilities. Therefore, a larger number of divisions can be set for a first terminal device with better CPU performance, and a smaller number for one with poorer CPU performance.
例如,调度效率较好的第一终端设备,数据处理能力和传输能力等相对较强,调度效率较差的第一终端设备,数据处理能力和传输能力等相对较差,因此,对于调度效率较好的第一终端设备,可以设置较多的划分数量;对于调度效率较差的第一终端设备,可以设置较少的划分数量。For example, a first terminal device with good scheduling efficiency has relatively strong data processing capabilities and transmission capabilities, while a first terminal device with poor scheduling efficiency has relatively poor data processing capabilities and transmission capabilities. Therefore, for a first terminal device with relatively poor scheduling efficiency, For a good first terminal device, a larger number of divisions can be set; for a first terminal device with poor scheduling efficiency, a smaller number of divisions can be set.
本申请实施例中,在将图层数据划分为第一部分和第二部分后,SurfaceFlinger组件可以先根据第一部分进行合成,得到第一图像(例如第一区域对应的图像),并可以将第一图像发送给编码器进行编码处理,以得到第一编码数据。在编码器根据第一图像进行编码时,SurfaceFlinger组件可以继续根据第二部分进行合成,得到第二图像(例如第二区域对应的图像),并继续将第二图像发送给编码器进行编码处理,以得到第二编码数据。In the embodiments of this application, after dividing the layer data into the first part and the second part, the SurfaceFlinger component can first synthesize based on the first part to obtain the first image (for example, the image corresponding to the first region) and send the first image to the encoder for encoding to obtain the first encoded data. While the encoder is encoding the first image, the SurfaceFlinger component can continue to synthesize based on the second part to obtain the second image (for example, the image corresponding to the second region) and send the second image to the encoder for encoding to obtain the second encoded data.
应理解,SurfaceFlinger组件可以通过任一方式根据第一部分或者第二部分进行合成,编码器也可以采用任一编码方式根据第一图像或者第二图像进行编码,即本申请实施例对SurfaceFlinger组件的合成方式和编码器的编码方式不作任何限制,可以由技术人员根据实际场景具体设置。It should be understood that the SurfaceFlinger component can synthesize based on the first part or the second part in any manner, and the encoder can likewise encode the first image or the second image using any encoding method. That is, the embodiments of this application place no restrictions on the synthesis method of the SurfaceFlinger component or the encoding method of the encoder; both can be set by technicians according to the actual scenario.
示例性的,为方便第二终端设备对待投屏界面进行显示,第一图像的格式与第二图像的格式相同,例如第一图像和第二图像均可以为YUV格式的图像,或者第一图像和第二图像均可以为RGB格式的图像,等等。即SurfaceFlinger组件可以根据第一部分和第二部分分别合成得到YUV格式的图像,或者SurfaceFlinger组件可以根据第一部分和第二部分分别合成得到RGB格式的图像。Exemplarily, to make it convenient for the second terminal device to display the interface to be projected, the format of the first image is the same as that of the second image. For example, both images may be in YUV format, or both may be in RGB format, and so on. That is, the SurfaceFlinger component can synthesize YUV-format images from the first part and the second part respectively, or RGB-format images from the first part and the second part respectively.
类似的,第一编码数据的格式和第二编码数据的格式相同,例如第一编码数据和第二编码数据均可以为H.264格式的视频流,或者第一编码数据和第二编码数据均可以为H.265格式的视频流,等等。Similarly, the format of the first encoded data is the same as that of the second encoded data. For example, both may be video streams in H.264 format, or both may be video streams in H.265 format, and so on.
在一种可能的实现方式中,为使得第二终端设备在解码得到第一图像和第二图像后,可以正确拼接第一图像和第二图像,避免待投屏界面在第二终端设备中显示存在乱序问题,第一终端设备在通过编码器根据第一图像或第二图像进行编码时,可以增加对应的划分信息。因此,第二终端设备解码得到第一图像和第二图像后,可以根据划分信息对第一图像和第二图像进行准确拼接。In a possible implementation, to enable the second terminal device to correctly splice the first image and the second image after decoding them, and to avoid the interface to be projected being displayed out of order on the second terminal device, the first terminal device can add corresponding division information when encoding the first image or the second image through the encoder. Therefore, after decoding the first image and the second image, the second terminal device can accurately splice them according to the division information.
The division information is used to describe the division manner corresponding to the layer data (that is, the division manner corresponding to the to-be-projected interface) and the position information of each image (that is, the first image, the second image, and so on) in the to-be-projected interface.
Exemplarily, the division information may include the division manner, an image serial number, the total number of images, and the like. The image serial number is used to describe the position information of each image in the to-be-projected interface. The total number of images is used to describe how many images (which may also be called regions) the to-be-projected interface is divided into. For example, when the to-be-projected interface is divided into two images, the total number of images may be 2; when it is divided into three images, the total number of images may be 3; and when the to-be-projected interface is not divided, the total number of images may be 1.
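The division information described above can be modeled as a small record. The following sketch is illustrative only: the field names `mode`, `index`, and `total` are assumptions for the example, not terminology mandated by this application.

```python
from dataclasses import dataclass

# Illustrative record for the division information carried with each
# encoded image: the division manner, the image serial number (the
# image's position), and the total number of images the to-be-projected
# interface was split into.
@dataclass(frozen=True)
class DivisionInfo:
    mode: str    # e.g. "horizontal" (top-to-bottom) or "vertical" (left-to-right)
    index: int   # 1-based serial number of this image within the interface
    total: int   # how many images the interface was divided into

# The three-way horizontal split discussed in the text:
infos = [DivisionInfo("horizontal", i, 3) for i in (1, 2, 3)]
```

An undivided interface would be represented by a single record with `total` equal to 1.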
For example, when the to-be-projected interface is divided from top to bottom into a first image, a second image, and a third image by horizontal division, the encoder may add the division information corresponding to the first image when encoding the first image. Therefore, the first encoded data corresponding to the first image may include not only the first image but also the field content "horizontal division" for the division manner, the field content "1" for the image serial number, and the field content "3" for the total number of images. Similarly, when encoding the second image, the encoder may add the division information corresponding to the second image, so the second encoded data may include not only the second image but also "horizontal division" for the division manner, "2" for the image serial number, and "3" for the total number of images. When encoding the third image, the encoder may add the division information corresponding to the third image, so the third encoded data may include not only the third image but also "horizontal division" for the division manner, "3" for the image serial number, and "3" for the total number of images.
After acquiring the first encoded data, the second terminal device may decode it to obtain the first image together with the division manner (namely horizontal division), the image serial number of the first image (namely 1), and the total number of images (namely 3). Similarly, after acquiring the second encoded data, the second terminal device may decode it to obtain the second image together with the division manner (horizontal division), the image serial number of the second image (namely 2), and the total number of images (namely 3). After acquiring the third encoded data, the second terminal device may decode it to obtain the third image together with the division manner (horizontal division), the image serial number of the third image (namely 3), and the total number of images (namely 3). At this point, according to the division manner and the image serial numbers, the second terminal device can determine that the first image is the upper image of the to-be-projected interface, the second image is the middle image, and the third image is the lower image. Therefore, the second terminal device may splice the first image, the second image, and the third image in top-to-bottom positional order and display the spliced image.
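The receiver-side splicing just described can be sketched as follows. This is a minimal illustration only: real decoding is omitted, image contents are mocked as labeled row lists, and the `(mode, index, total)` tuple shape is an assumption for the example.

```python
# Each decoded result pairs a division-info tuple (mode, index, total)
# with mocked image data; arrival order is deliberately shuffled.
decoded = [
    (("horizontal", 2, 3), ["middle-rows"]),
    (("horizontal", 1, 3), ["top-rows"]),
    (("horizontal", 3, 3), ["bottom-rows"]),
]

def splice(decoded):
    """Order the decoded images by serial number. For a horizontal
    division the serial numbers run top to bottom, so concatenating
    the row lists in index order reconstructs the full interface."""
    parts = sorted(decoded, key=lambda item: item[0][1])
    rows = []
    for (_mode, _index, _total), part_rows in parts:
        rows.extend(part_rows)
    return rows

full_frame = splice(decoded)
```

Because each image carries its own serial number, the splice result is correct even when the encoded data arrives out of order.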
In another possible implementation, after determining the division manner, the first terminal device may send the division information corresponding to that manner to the second terminal device separately. For example, before screen projection starts, the first terminal device may first send the division information to the second terminal device. The encoder then no longer needs to add division information to each image individually, which reduces the information added during encoding and thereby improves encoding speed. In this case, the division information may include the division manner, the image sending manner, the total number of images, and the like.
The image sending manner is used to describe the order in which the first terminal device sends the images, so as to represent the image serial number corresponding to each image and thereby identify the position information of each image in the to-be-projected interface.
For example, when the to-be-projected interface is divided from left to right into a first image and a second image by vertical division, the image sending manner may be left-to-right order; that is, the first terminal device may first send the first encoded data corresponding to the first image to the second terminal device, and then send the second encoded data corresponding to the second image.
That is to say, the second terminal device can acquire the first encoded data and the second encoded data in sequence. At this point, according to the previously received division information and the order in which the encoded data arrives, the second terminal device can determine that the first image corresponding to the first encoded data is the left image of the to-be-projected interface and that the second image corresponding to the second encoded data is the right image. Therefore, the second terminal device may splice the first image and the second image in left-to-right positional order and display the spliced image.
Exemplarily, the image sending manner may also be set by default according to the division manner. In this case, the division information sent in advance by the first terminal device to the second terminal device may include only the division manner and the total number of images. For example, when the interface is divided by horizontal division, the image sending manner may default to top-to-bottom order; that is, the first terminal device sends the encoded data of the images to the second terminal device from top to bottom by default. When the interface is divided by vertical division, the image sending manner may default to left-to-right order; that is, the first terminal device sends the encoded data of the images to the second terminal device from left to right by default. Therefore, after acquiring the encoded data of the images in sequence, the second terminal device can determine the position information of each image in the to-be-projected interface according to the division manner and the default image sending manner, splice the decoded images according to the position information, and display the spliced image.
For example, when the to-be-projected interface is divided from left to right into a first image, a second image, and a third image by vertical division, the image sending manner may default to left-to-right order; that is, the first terminal device sends the first encoded data corresponding to the first image, the second encoded data corresponding to the second image, and the third encoded data corresponding to the third image to the second terminal device in sequence by default. The second terminal device thus acquires the first, second, and third encoded data in sequence, and can determine that the first image corresponding to the first encoded data is the left image of the to-be-projected interface, the second image corresponding to the second encoded data is the middle image, and the third image corresponding to the third encoded data is the right image; on this basis, it can splice the first, second, and third images and display the result.
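When the division information is sent once in advance, the receiver can infer each image's position purely from arrival order. The sketch below assumes the three-way split and the default sending orders described above; the dictionary field names are illustrative.

```python
def positions_from_arrival_order(info, arrived):
    """Map encoded-data arrival order to positions, using the default
    sending order implied by the division manner: left-to-right for a
    vertical division, top-to-bottom for a horizontal one. Handles the
    three-part case used in the text's examples."""
    if info["mode"] == "vertical":
        order = ["left", "middle", "right"]
    else:
        order = ["top", "middle", "bottom"]
    assert len(arrived) == info["total"] == len(order)
    return {name: order[i] for i, name in enumerate(arrived)}

# Division information received once, before projection starts.
division_info = {"mode": "vertical", "total": 3}
placement = positions_from_arrival_order(division_info, ["img1", "img2", "img3"])
```

Unlike the per-image metadata variant, this scheme depends on the transport delivering the encoded data in order.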
In a possible implementation, the SurfaceFlinger component may perform synthesis processing on the layer data only once, and the synthesized image may be used directly for display on the first terminal device as well as for display on the second terminal device, so as to reduce the number of image synthesis operations performed by the SurfaceFlinger component and reduce the waste of resources such as CPU, memory, and power on the first terminal device.
As can be seen from the foregoing description and from FIG. 3 and FIG. 4, the image synthesis cycle of the SurfaceFlinger component needs to remain consistent with the refresh rate of the first terminal device to avoid display abnormalities on the first terminal device. That is, after acquiring the layer data, the SurfaceFlinger component generally needs to wait for a Vsync signal, and synthesizes the layer data only when a Vsync signal is detected. As a result, the SurfaceFlinger component may wait a relatively long time before synthesizing the layer data (for example, the maximum waiting time can reach 16 ms), which also makes the screen projection delay large and affects user experience.
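The 16 ms figure corresponds to roughly one frame period at a 60 Hz refresh rate (an assumed typical value; this application does not fix the rate): a Vsync signal arrives every 1/60 s, so layer data that just misses a signal waits nearly a full period before synthesis begins.

```python
refresh_hz = 60                      # assumed typical panel refresh rate
frame_period_ms = 1000 / refresh_hz  # time between consecutive Vsync signals

# Worst case: the layer data arrives just after a Vsync, so synthesis is
# deferred for nearly one full frame period (~16 ms).
worst_case_wait_ms = frame_period_ms
```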
Referring to FIG. 6, FIG. 6 shows a flow chart of a screen projection method provided by another embodiment of this application. The method can be applied to a first terminal device to project the to-be-projected interface of the first terminal device onto a second terminal device for synchronous display.
As shown in FIG. 6, when a screen projection instruction is detected, the first application of the first terminal device may perform layer drawing on the to-be-projected interface, obtain the layer data corresponding to the to-be-projected interface, and send the layer data to the SurfaceFlinger component of the first terminal device. Meanwhile, the first terminal device may determine its current display state.
When the current display state of the first terminal device is the screen-off state, the first terminal device may instruct the SurfaceFlinger component that there is no need to wait for the Vsync signal. Therefore, after acquiring the layer data, the SurfaceFlinger component may directly perform image synthesis on the layer data to obtain an image, without waiting for the Vsync signal. Subsequently, the SurfaceFlinger component may send the image to the encoder of the first terminal device.
After obtaining the image, the encoder may encode it to obtain encoded data and send the encoded data to the first application. After receiving the encoded data, the first application may send it to the second terminal device through the transmission module. When the second terminal device receives the encoded data sent by the first terminal device, it may decode the encoded data and display the decoded image.
That is, in the screen projection method provided by this embodiment, the first terminal device may acquire its current display state in real time during screen projection. The display state may include a screen-off state and a screen-on state. When the display state is the screen-off state, the first terminal device does not need to display the to-be-projected interface synchronously; in other words, the SurfaceFlinger component only needs to synthesize the image displayed on the second terminal device and does not need to synthesize an image displayed on the first terminal device. In this case, whether or not the image synthesis cycle of the SurfaceFlinger component remains consistent with the refresh rate of the first terminal device neither causes display abnormalities on the first terminal device nor affects the display on the second terminal device. Therefore, when the display state of the first terminal device is the screen-off state, the SurfaceFlinger component may perform image synthesis directly after acquiring the layer data, without waiting for the Vsync signal, which effectively reduces the waiting time of the SurfaceFlinger component and lowers the screen projection delay.
In a possible implementation, when the screen projection instruction is detected, the first terminal device may determine whether the first application corresponding to the to-be-projected interface is a target application. A target application is an application without frame rate control, such as certain game applications and wallpaper applications. The frame rate refers to the frequency at which images, in frames, successively appear on the display interface. The target applications may be specifically set by technicians according to actual scenarios.
When the first application is not a target application, the first terminal device may detect its current display state. When the current display state is the screen-off state, the first terminal device may instruct the SurfaceFlinger component that there is no need to wait for the Vsync signal, that is, image synthesis can be performed promptly on the acquired layer data.
When the first application is a target application, layer drawing by such applications without frame rate control is generally fast; if image synthesis were not paced by the Vsync signal, the SurfaceFlinger component would be unable to process the layer data in time, causing problems such as frame skipping in the interface display. In this case, the SurfaceFlinger component needs to wait for the Vsync signal and synthesizes the layer data only when the Vsync signal is received.
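Combining the two conditions above, the decision of whether the SurfaceFlinger component must wait for Vsync can be sketched as a simple predicate. The function name, flag names, and state strings below are illustrative assumptions, not identifiers from this application.

```python
def must_wait_for_vsync(is_target_app: bool, display_state: str) -> bool:
    """A target application (no frame rate control) always requires
    Vsync pacing to avoid frame skipping. Otherwise, Vsync can be
    skipped when the local screen is off, since only the remote
    device's display then depends on the synthesized image."""
    if is_target_app:
        return True                       # fast, uncontrolled drawing: keep pacing
    return display_state != "screen_off"  # screen on: stay in step with local refresh

# The screen-off, non-target-app case from the text: no waiting needed.
skip_ok = not must_wait_for_vsync(is_target_app=False, display_state="screen_off")
```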
Referring to FIG. 7, FIG. 7 shows a flow chart of a screen projection method provided by another embodiment of this application. The method can be applied to a first terminal device to project the to-be-projected interface of the first terminal device onto a second terminal device for synchronous display.
As shown in FIG. 7, when a screen projection instruction is detected, the first application of the first terminal device may perform layer drawing on the to-be-projected interface, obtain the layer data corresponding to the to-be-projected interface, and send the layer data to the SurfaceFlinger component of the first terminal device. Meanwhile, the first terminal device may determine its current display state.
When the current display state of the first terminal device is the screen-off state, the first terminal device may instruct the SurfaceFlinger component that there is no need to wait for the Vsync signal. Therefore, after acquiring the layer data, the SurfaceFlinger component may directly divide the layer data into at least a first part and a second part, and may directly perform image synthesis on the first part to obtain the first image without waiting for the Vsync signal, thereby reducing the waiting time of the SurfaceFlinger component and lowering the screen projection delay. Subsequently, the SurfaceFlinger component may send the first image to the encoder of the first terminal device.
After acquiring the first image, the encoder may encode it to obtain the first encoded data and send the first encoded data to the first application. After receiving the first encoded data, the first application may send it to the second terminal device through the transmission module.
While the encoder is encoding the first image and/or the first application is sending the first encoded data, the image synthesis module may continue to perform image synthesis on the second part to obtain the second image, and may continue to send the second image to the encoder. The encoder may likewise continue to encode the second image to obtain the second encoded data and send it to the first application. After receiving the second encoded data, the first application may send it to the second terminal device through the transmission module.
When the second terminal device receives the first encoded data and the second encoded data sent by the first terminal device, it may decode them to obtain the first image and the second image, splice the two images, and display the result.
In this embodiment, when the current display state of the first terminal device is the screen-off state, the first terminal device can reduce the screen projection delay and improve the user's screen projection experience through the parallel execution of the synthesis process performed by the SurfaceFlinger component, the encoding process performed by the encoder, and the sending process performed by the first application, and by instructing the SurfaceFlinger component not to wait for the Vsync signal.
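The latency benefit of this parallel execution can be illustrated with hypothetical stage times (the 8 ms figures below are assumptions for illustration only, not values from this application). Processed serially, the three stages add up; with the frame split into two halves, the stages overlap in a pipeline.

```python
compose_ms = encode_ms = send_ms = 8.0             # hypothetical per-frame stage times
serial_latency = compose_ms + encode_ms + send_ms  # stages back to back: 24 ms

# Split the frame into two halves, each stage then taking half the time.
# While half 1 is being encoded, half 2 is being composed, and so on:
#   half 1: compose 0-4,  encode 4-8,   send 8-12
#   half 2: compose 4-8,  encode 8-12,  send 12-16
parts, stages = 2, 3
part_time = compose_ms / parts                       # 4 ms per stage per half
pipelined_latency = (stages + parts - 1) * part_time # (3 + 2 - 1) * 4 = 16 ms

saving_ms = serial_latency - pipelined_latency
```

The pipeline formula assumes equal stage times; with unequal stages, the slowest stage dominates, but the last half still finishes earlier than in the serial case.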
A screen projection method provided by an embodiment of this application is described below by way of example with reference to the foregoing description. Referring to FIG. 8, FIG. 8 shows a schematic flow chart of a screen projection method provided by an embodiment of this application. As shown in FIG. 8, the method may include the following steps.
S801. The first terminal device detects a screen projection instruction.

The screen projection instruction is used to instruct the first terminal device to project the to-be-projected interface corresponding to the first application onto the second terminal device for synchronous display. The first application may be any application in the first terminal device.

Exemplarily, the screen projection instruction may be triggered and generated by the user, or may be generated by the first terminal device by default. For details, refer to the foregoing description of generating the screen projection instruction, which is not repeated here.
S802. The first application of the first terminal device sends the layer data corresponding to the to-be-projected interface to the SurfaceFlinger component of the first terminal device.

Optionally, when the first terminal device detects the screen projection instruction, the first application of the first terminal device may perform layer drawing on the to-be-projected interface, obtain the layer data corresponding to the to-be-projected interface, and send the layer data to the SurfaceFlinger component of the first terminal device.

The to-be-projected interface may refer to an interface corresponding to the first application; it may be an interface currently being displayed on the first terminal device, or an interface that is about to be displayed on the first terminal device.

It should be understood that the first application may perform layer drawing on the to-be-projected interface in any manner to obtain the layer data, which is not limited in the embodiments of this application.
S803. The SurfaceFlinger component performs image synthesis on the first part of the layer data to obtain the first image, where the layer data includes at least the first part and the second part.

Exemplarily, the layer data refers to the data corresponding to the one or more layers of the to-be-projected interface. In a possible implementation, the SurfaceFlinger component may divide the layer data into at least a first part and a second part. Dividing the layer data into the first part and the second part means dividing the data corresponding to each layer, so that each layer's data is divided into data A and data B; the data A of all layers is collectively determined as the first part, and the data B of all layers is collectively determined as the second part. The to-be-projected interface is thereby divided into a first region (the image region corresponding to the first part) and a second region (the image region corresponding to the second part). The first part does not overlap the second part, that is, the first region does not overlap the second region.

It can be understood that the SurfaceFlinger component may divide the layer data in any division manner, and the specific manner may be set by technicians according to actual scenarios. For the details of the division manners, refer to the foregoing description, which is not repeated here; for example, the layer data may be divided in the manner shown in FIG. 5.
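The per-layer division in S803 can be sketched as follows. This is a minimal illustration under stated assumptions: layer contents are mocked as row lists, and a horizontal split at a fixed row boundary stands in for whatever division manner is chosen.

```python
# Two layers of the to-be-projected interface, each mocked as 4 rows.
layers = {
    "background": ["bg0", "bg1", "bg2", "bg3"],
    "ui":         ["ui0", "ui1", "ui2", "ui3"],
}

def divide_layers(layers, split_row):
    """Cut every layer's data at the same row boundary: the rows above
    the boundary from all layers form the first part (data A), and the
    rows below form the second part (data B). The two parts do not
    overlap, so the corresponding screen regions do not overlap."""
    first = {name: rows[:split_row] for name, rows in layers.items()}
    second = {name: rows[split_row:] for name, rows in layers.items()}
    return first, second

first_part, second_part = divide_layers(layers, split_row=2)
```

Synthesizing the first part then yields the upper half of the interface, and the second part the lower half.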
Exemplarily, for the details of how the SurfaceFlinger component performs image synthesis on the first part, refer to the foregoing description of synthesizing the first image, which is not repeated here. For example, the SurfaceFlinger component may synthesize the first part into an image in YUV format, or into an image in RGB format.
S804. The encoder of the first terminal device encodes the first image to obtain the first encoded data.

Optionally, after the SurfaceFlinger component performs image synthesis on the first part and obtains the first image, it may send the first image to the encoder of the first terminal device, so that the encoder can encode the first image to obtain the first encoded data.

It should be understood that for the details of encoding the first image, refer to the foregoing description, which is not repeated here. For example, the encoder may encode the first image into a video stream in H.264 format, or into a video stream in H.265 format.
S805. The first application sends the first encoded data to the second terminal device.

Optionally, after the encoder encodes the first image and obtains the first encoded data, it may send the first encoded data to the first application, so that the first application sends the first encoded data to the second terminal device.

In this embodiment of this application, the first application may send the first encoded data to the second terminal device through the transmission module. The transmission module may be a wired communication module, a mobile communication module, or a wireless communication module. That is, the first application may send the first encoded data to the second terminal device through a wired communication mode such as USB, through a mobile communication mode such as 2G/3G/4G/5G/6G, or through a wireless communication mode such as Bluetooth, Wi-Fi, Wi-Fi p2p, or UWB.
S806. The SurfaceFlinger component performs image synthesis on the second part to obtain the second image.

The process in which the SurfaceFlinger component performs image synthesis on the second part is similar to that for the first part; for details, refer to the foregoing description of image synthesis based on the first part, which is not repeated here. Exemplarily, like the first image, the second image may be an image in YUV format or an image in RGB format.

Optionally, after performing image synthesis on the first part and obtaining the first image, the SurfaceFlinger component may continue to perform image synthesis on the second part while sending the first image to the encoder for encoding; that is, the process in which the SurfaceFlinger component synthesizes the second part and the process in which the encoder encodes the first image can be executed in parallel, so as to reduce the screen projection delay.
S807. The encoder encodes the second image to obtain the second encoded data.

Optionally, after the SurfaceFlinger component performs image synthesis on the second part and obtains the second image, it may send the second image to the encoder, so that the encoder can encode the second image to obtain the second encoded data.

The process in which the encoder encodes the second image is similar to that for the first image; for details, refer to the foregoing description of encoding the first image, which is not repeated here.

Exemplarily, like the first encoded data, the second encoded data may be a video stream in H.264 format or a video stream in H.265 format.
S808. The first application sends the second encoded data to the second terminal device.
Optionally, after the encoder performs encoding according to the second image and obtains the second encoded data, it can send the second encoded data to the first application, so that the first application sends the second encoded data to the second terminal device.
Exemplarily, similar to the first encoded data, the first application may send the second encoded data to the second terminal device through a transmission module. The transmission module may be a wired communication module, a mobile communication module, or a wireless communication module. That is, the first application may send the second encoded data to the second terminal device through a wired communication mode such as USB, a mobile communication mode such as 2G/3G/4G/5G/6G, or a wireless communication mode such as Bluetooth, Wi-Fi, Wi-Fi P2P, or UWB.
It can be seen from the above that the SurfaceFlinger component can perform image synthesis separately based on the first part and the second part of the layer data, the encoder can perform encoding separately based on the first image and the second image, and the first application can send the first encoded data and the second encoded data separately, thereby achieving parallel execution of synthesis, encoding, and sending, and reducing the screen projection latency.
The process by which the SurfaceFlinger component performs image synthesis based on the first part and the second part respectively, the process by which the encoder performs encoding based on the first image and the second image respectively, and the process by which the first application sends the first encoded data and the second encoded data respectively may refer to the foregoing description and are not repeated here. For example, synthesis, encoding, and sending may be performed in the parallel manner shown in FIG. 4.
That is, in this embodiment of the present application, while the encoder performs encoding according to the first image and/or while the first application sends the first encoded data, the SurfaceFlinger component can continue to perform image synthesis according to the second part to obtain the second image, and can continue to send the second image to the encoder. After receiving the second image, the encoder can continue to perform encoding according to the second image to obtain the second encoded data, and can send the second encoded data to the first application. After receiving the second encoded data, the first application can send the second encoded data to the second terminal device through the transmission module.
In other words, the process of the SurfaceFlinger component synthesizing an image based on the second part can be executed in parallel with the process of the encoder encoding the first image, or the process of the SurfaceFlinger component synthesizing an image based on the second part can be executed in parallel with the process of the first application sending the first encoded data, or the process of the encoder encoding the second image can be executed in parallel with the process of the first application sending the first encoded data, so as to effectively reduce the screen projection latency and improve the user experience.
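The three-stage parallelism described above can be sketched as a simple pipeline. The following is an illustration only: the names `run_pipeline`, `synthesize`, `encode`, and `send` are hypothetical placeholders for the SurfaceFlinger component, the encoder, and the first application of the embodiment, not actual APIs, and FIFO queues stand in for the buffers between the stages.

```python
import queue
import threading

def run_pipeline(parts, synthesize, encode, send):
    """Run synthesis, encoding and sending as a three-stage pipeline.

    While stage N processes part i, stage N-1 may already be working
    on part i+1, mirroring the parallel execution of the image
    synthesis, encoding and sending processes in the embodiment.
    """
    to_encode, to_send = queue.Queue(), queue.Queue()
    DONE = object()          # sentinel marking the end of the stream
    sent = []                # record of everything handed to `send`

    def synth_stage():
        for part in parts:   # first part, second part, ...
            to_encode.put(synthesize(part))
        to_encode.put(DONE)

    def encode_stage():
        while (img := to_encode.get()) is not DONE:
            to_send.put(encode(img))
        to_send.put(DONE)

    def send_stage():
        while (data := to_send.get()) is not DONE:
            send(data)
            sent.append(data)

    threads = [threading.Thread(target=t)
               for t in (synth_stage, encode_stage, send_stage)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sent
```

Because each stage pulls from its own queue, encoding the first image and synthesizing the second image overlap in time, which is exactly the latency reduction the embodiment describes.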
S809. The second terminal device decodes the first encoded data and the second encoded data to obtain the first image and the second image, and splices and displays the first image and the second image.
Exemplarily, when the encoder performs encoding according to the first image or the second image, corresponding division information may be added. Therefore, after decoding the first image and the second image, the second terminal device can accurately splice the first image and the second image according to the division information, and display the spliced image.
The division information is used to describe the division mode corresponding to the layer data and the position information of the image (that is, the first image, the second image, or the like) in the interface to be projected. Exemplarily, the division information may include the division mode, an image serial number, a total number of images, and the like. The image serial number is used to describe the position of each image in the interface to be projected. The total number of images is used to describe how many images (which may also be referred to as regions) the interface to be projected is divided into.
It should be understood that the specific content and sending manner of the division information may refer to the foregoing specific description of the division information and are not repeated here.
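The division information and the splicing performed by the second terminal device can be illustrated with a small sketch. The `DivisionInfo` structure and `splice` function below are hypothetical stand-ins (the embodiment does not prescribe a concrete data layout); images are modelled as lists of pixel rows, and only a "horizontal" division mode is shown.

```python
from dataclasses import dataclass

@dataclass
class DivisionInfo:
    """Illustrative carrier for the division information: the division
    mode, the image serial number, and the total number of images."""
    mode: str    # e.g. "horizontal": the interface is cut into rows
    index: int   # position of this image in the interface to be projected
    total: int   # how many images the interface was divided into

def splice(tagged_images):
    """Reassemble decoded images into the interface to be projected.

    `tagged_images` is a list of (DivisionInfo, image) pairs, where an
    image is a list of pixel rows. For a "horizontal" division, sorting
    by serial number and concatenating the rows restores the interface.
    """
    infos = [info for info, _ in tagged_images]
    # All pieces must be present before the interface can be displayed.
    assert {i.index for i in infos} == set(range(infos[0].total))
    rows = []
    for info, image in sorted(tagged_images, key=lambda p: p[0].index):
        if info.mode != "horizontal":
            raise NotImplementedError(info.mode)
        rows.extend(image)
    return rows
```

The serial number makes splicing order-independent: the second image may well be decoded first, yet the spliced interface is still correct.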
In a possible implementation, during the screen projection process, the first terminal device can obtain its current display state in real time. When the current display state of the first terminal device is the screen-off state, the first terminal device can instruct the SurfaceFlinger component not to wait for the Vsync signal. Therefore, after obtaining the layer data, the SurfaceFlinger component can directly perform image synthesis based on the first part, and after completing the image synthesis of the first part, directly perform image synthesis based on the second part, instead of waiting for a Vsync signal before starting the image synthesis of the first part, thereby reducing the waiting time of the SurfaceFlinger component and the screen projection latency.
That is to say, when the current display state of the first terminal device is the screen-off state, the first terminal device can project the interface to be projected according to the screen projection method shown in FIG. 7; for specific content, refer to the embodiment corresponding to FIG. 7, which is not repeated here.
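The screen-off optimization can likewise be sketched: when the display is off, synthesis of the parts proceeds back-to-back without gating on Vsync. The names `synthesize_parts`, `screen_off`, and `vsync_event` below are illustrative, under the assumption that the Vsync signal is modelled as a threading event; they do not correspond to an actual Android API.

```python
import threading

def synthesize_parts(parts, synthesize, screen_off, vsync_event,
                     timeout=0.05):
    """Synthesize each part of the layer data, waiting for Vsync only
    when the local screen is on.

    In the screen-off state there is no local display to pace, so the
    first and second parts are synthesized immediately one after the
    other, removing the Vsync waiting time from the projection path.
    """
    images = []
    for part in parts:
        if not screen_off:
            # Screen on: pace synthesis to the display's Vsync signal.
            vsync_event.wait(timeout)
            vsync_event.clear()
        images.append(synthesize(part))
    return images
```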
In this embodiment of the present application, the layer data can be divided into at least a first part and a second part, and synthesis, encoding, and sending can be performed separately according to the first part and the second part. Moreover, the synthesis, encoding, and sending performed according to the second part can be executed in parallel with the encoding and sending performed according to the first image corresponding to the first part, so that during screen projection, the synthesis process performed by the image synthesis module, the encoding process performed by the encoder, and the sending process performed by the first application are executed in parallel, thereby effectively reducing the screen projection latency and improving the user's screen projection experience.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of this application.
Corresponding to the screen projection method described in the foregoing embodiments, an embodiment of this application further provides a screen projection apparatus, and each module of the apparatus can correspondingly implement each step of the screen projection method.
It should be noted that, because the information interaction and execution processes between the foregoing apparatuses/units are based on the same concept as the method embodiments of this application, for their specific functions and technical effects, refer to the method embodiments, which are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the foregoing functional units and modules is merely used as an example. In practical applications, the foregoing functions may be allocated to different functional units and modules as required, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or some of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit. In addition, the specific names of the functional units and modules are merely for the convenience of distinguishing them from one another, and are not intended to limit the protection scope of this application. For the specific working processes of the units and modules in the foregoing system, refer to the corresponding processes in the foregoing method embodiments, which are not repeated here.
An embodiment of this application further provides a terminal device, including at least one memory, at least one processor, and a computer program stored in the at least one memory and executable on the at least one processor. When the processor executes the computer program, the terminal device implements the steps in any one of the foregoing method embodiments. Exemplarily, the structure of the terminal device may be as shown in FIG. 1.
An embodiment of this application further provides a computer-readable storage medium storing a computer program. When the computer program is executed by a computer, the computer implements the steps in any one of the foregoing method embodiments.
An embodiment of this application provides a computer program product. When the computer program product runs on a terminal device, the terminal device implements the steps in any one of the foregoing method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, all or some of the procedures of the methods in the foregoing embodiments of this application may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the foregoing method embodiments. The computer program includes computer program code, and the computer program code may be in the form of source code, object code, an executable file, some intermediate forms, or the like. The computer-readable storage medium may at least include: any entity or apparatus capable of carrying the computer program code to an apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, according to legislation and patent practice, the computer-readable storage medium may not be an electrical carrier signal or a telecommunications signal.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For a part that is not detailed or described in one embodiment, refer to the related descriptions of other embodiments.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered as going beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative. For example, the division into modules or units is merely a logical function division; in actual implementation, there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces; the indirect couplings or communication connections between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
The foregoing embodiments are merely intended to describe the technical solutions of this application, but not to limit them. Although this application is described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements to some technical features thereof; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all fall within the protection scope of this application.

Claims (18)

  1. A screen projection system, characterized by comprising a first terminal device and a second terminal device, wherein:
    the first terminal device is configured to: when a screen projection instruction is detected, perform layer drawing on an interface to be projected, to obtain layer data corresponding to the interface to be projected;
    the first terminal device is further configured to perform image synthesis according to a first part of the layer data, to obtain a first image;
    the first terminal device is further configured to perform encoding according to the first image to obtain first encoded data, and send the first encoded data to the second terminal device;
    the first terminal device is further configured to: when performing encoding according to the first image, or when sending the first encoded data to the second terminal device, perform image synthesis according to a second part of the layer data to obtain a second image, and perform encoding according to the second image to obtain second encoded data;
    the first terminal device is further configured to send the second encoded data to the second terminal device;
    the second terminal device is configured to obtain the first encoded data and the second encoded data, and decode the first encoded data and the second encoded data to obtain the first image and the second image; and
    the second terminal device is further configured to obtain the interface to be projected according to the first image and the second image, and display the interface to be projected.
  2. The system according to claim 1, characterized in that the first terminal device is further configured to, in response to the layer data, perform image synthesis according to the first part of the layer data to obtain the first image.
  3. The system according to claim 1 or 2, characterized in that the first terminal device is further configured to determine first division information corresponding to the first image, and perform encoding according to the first image and the first division information to obtain the first encoded data, wherein the first division information comprises a division mode corresponding to the layer data, an image serial number corresponding to the first image, and a total number of images corresponding to the layer data.
  4. The system according to claim 3, characterized in that the second terminal device is further configured to splice the first image and the second image according to the first division information corresponding to the first image and second division information corresponding to the second image, to obtain the interface to be projected.
  5. A screen projection method, characterized in that the method is applied to a first terminal device and comprises:
    when a screen projection instruction is detected, performing layer drawing on an interface to be projected, to obtain layer data corresponding to the interface to be projected;
    performing image synthesis according to a first part of the layer data, to obtain a first image;
    performing encoding according to the first image, to obtain first encoded data;
    sending the first encoded data to a second terminal device;
    when performing encoding according to the first image, or when sending the first encoded data to the second terminal device, performing image synthesis according to a second part of the layer data to obtain a second image, and performing encoding according to the second image to obtain second encoded data; and
    sending the second encoded data to the second terminal device.
  6. The method according to claim 5, characterized in that the performing image synthesis according to the first part of the layer data to obtain the first image comprises:
    performing, by an image synthesis module, image synthesis according to the first part, to obtain the first image;
    the performing encoding according to the first image to obtain the first encoded data comprises:
    performing, by an encoder, encoding according to the first image, to obtain the first encoded data; and
    the sending the first encoded data to the second terminal device comprises:
    sending, by a first application, the first encoded data to the second terminal device.
  7. The method according to claim 6, characterized in that the performing image synthesis according to the second part of the layer data to obtain the second image, when performing encoding according to the first image or when sending the first encoded data to the second terminal device, comprises:
    performing, by the image synthesis module while the encoder performs encoding according to the first image, image synthesis according to the second part, to obtain the second image.
  8. The method according to claim 6 or 7, characterized in that the performing image synthesis according to the second part of the layer data to obtain the second image, when performing encoding according to the first image or when sending the first encoded data to the second terminal device, comprises:
    performing, by the image synthesis module while the first application sends the first encoded data to the second terminal device, image synthesis according to the second part, to obtain the second image.
  9. The method according to any one of claims 6 to 8, characterized in that the performing encoding according to the second image to obtain the second encoded data comprises:
    performing, by the encoder while the first application sends the first encoded data to the second terminal device, encoding according to the second image, to obtain the second encoded data.
  10. The method according to any one of claims 5 to 9, characterized in that the performing image synthesis according to the first part of the layer data to obtain the first image comprises:
    in response to the layer data, performing, by an image synthesis module, image synthesis according to the first part of the layer data, to obtain the first image.
  11. The method according to any one of claims 5 to 10, characterized in that the performing encoding according to the first image to obtain the first encoded data comprises:
    determining first division information corresponding to the first image, and performing encoding according to the first image and the first division information, to obtain the first encoded data, wherein the first division information comprises a division mode corresponding to the layer data, an image serial number corresponding to the first image, and a total number of images corresponding to the layer data.
  12. The method according to any one of claims 5 to 10, characterized in that the method further comprises:
    determining third division information corresponding to the layer data, wherein the third division information comprises a division mode, a total number of images, and an image sending manner corresponding to the layer data; and
    sending the third division information to the second terminal device.
  13. A screen projection method, characterized in that the method is applied to a second terminal device and comprises:
    obtaining first encoded data and second encoded data respectively sent by a first terminal device, wherein the first encoded data is encoded data corresponding to a first part of layer data, the second encoded data is encoded data corresponding to a second part of the layer data, and the layer data is layer data corresponding to an interface to be projected of the first terminal device;
    decoding the first encoded data and the second encoded data respectively, to obtain a first image and a second image; and
    obtaining the interface to be projected according to the first image and the second image, and displaying the interface to be projected.
  14. The method according to claim 13, characterized in that the decoding the first encoded data and the second encoded data respectively to obtain the first image and the second image comprises:
    decoding the first encoded data to obtain the first image and first division information corresponding to the first image, wherein the first division information comprises a division mode corresponding to the layer data, an image serial number corresponding to the first image, and a total number of images corresponding to the layer data; and
    decoding the second encoded data to obtain the second image and second division information corresponding to the second image, wherein the second division information comprises the division mode corresponding to the layer data, an image serial number corresponding to the second image, and the total number of images corresponding to the layer data.
  15. The method according to claim 14, characterized in that the obtaining the interface to be projected according to the first image and the second image comprises:
    splicing the first image and the second image according to the first division information and the second division information, to obtain the interface to be projected.
  16. The method according to claim 13, characterized in that the method further comprises:
    obtaining third division information sent by the first terminal device, wherein the third division information comprises a division mode, a total number of images, and an image sending manner corresponding to the layer data; and
    the obtaining the interface to be projected according to the first image and the second image comprises:
    splicing the first image and the second image according to the third division information, to obtain the interface to be projected.
  17. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that when the processor executes the computer program, the terminal device implements the screen projection method according to any one of claims 5 to 12, or implements the screen projection method according to any one of claims 13 to 16.
  18. A computer-readable storage medium storing a computer program, characterized in that when the computer program is executed by a computer, the computer implements the screen projection method according to any one of claims 5 to 12, or implements the screen projection method according to any one of claims 13 to 16.
PCT/CN2023/078992 2022-03-11 2023-03-01 Screen projection method, terminal device, and computer-readable storage medium WO2023169276A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210254483.2A CN116781968A (en) 2022-03-11 2022-03-11 Screen projection method, terminal equipment and computer readable storage medium
CN202210254483.2 2022-03-11

Publications (1)

Publication Number Publication Date
WO2023169276A1 true WO2023169276A1 (en) 2023-09-14

Family

ID=87937206

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/078992 WO2023169276A1 (en) 2022-03-11 2023-03-01 Screen projection method, terminal device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN116781968A (en)
WO (1) WO2023169276A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108833932A (en) * 2018-07-19 2018-11-16 湖南君瀚信息技术有限公司 A kind of method and system for realizing the ultralow delay encoding and decoding of HD video and transmission
CN110865782A (en) * 2019-09-29 2020-03-06 华为终端有限公司 Data transmission method, device and equipment
CN111831242A (en) * 2019-04-23 2020-10-27 阿里巴巴集团控股有限公司 Information display method, screen projection end, display end, storage medium and system
CN113316028A (en) * 2020-02-27 2021-08-27 华为技术有限公司 Screen projection method, screen projection equipment and storage medium
JP2021525470A (en) * 2018-06-06 2021-09-24 キヤノン株式会社 Methods, devices and computer programs for transmitting media content
WO2021233218A1 (en) * 2020-05-19 2021-11-25 华为技术有限公司 Screen casting method, screen casting source end, screen casting destination end, screen casting system and storage medium


Also Published As

Publication number Publication date
CN116781968A (en) 2023-09-19

Similar Documents

Publication Publication Date Title
US11567623B2 (en) Displaying interfaces in different display areas based on activities
WO2021027747A1 (en) Interface display method and device
US20230419570A1 (en) Image Processing Method and Electronic Device
WO2021129253A1 (en) Method for displaying multiple windows, and electronic device and system
EP4060475A1 (en) Multi-screen cooperation method and system, and electronic device
WO2020093988A1 (en) Image processing method and electronic device
CN115631258B (en) Image processing method and electronic equipment
WO2022105445A1 (en) Browser-based application screen projection method and related apparatus
WO2022143082A1 (en) Image processing method and electronic device
WO2020155875A1 (en) Display method for electronic device, graphic user interface and electronic device
WO2022083465A1 (en) Electronic device screen projection method, medium thereof, and electronic device
WO2024001810A1 (en) Device interaction method, electronic device and computer-readable storage medium
WO2023065812A1 (en) Page display method, electronic device, and computer-readable storage medium
CN112437341B (en) Video stream processing method and electronic equipment
WO2023103800A1 (en) Drawing method and electronic device
WO2023005751A1 (en) Rendering method and electronic device
WO2022252816A1 (en) Display method and electronic device
EP4287605A1 (en) Working mode switching control method, electronic device, and readable storage medium
WO2023169276A1 (en) Screen projection method, terminal device, and computer-readable storage medium
WO2024051634A1 (en) Screen projection display method and system, and electronic device
CN116204093B (en) Page display method and electronic equipment
WO2022252805A1 (en) Display method and electronic device
WO2024067599A1 (en) Application display method and electronic device
WO2024055875A1 (en) Method for adding service card, and electronic device and computer-readable storage medium
CN116743908B (en) Wallpaper display method and related device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23765854

Country of ref document: EP

Kind code of ref document: A1