CN111866571B - Method and device for editing content on smart television and storage medium


Info

Publication number
CN111866571B
Authority
CN
China
Prior art keywords
editing
content
frame
interface
user
Prior art date
Legal status
Active
Application number
CN202010622171.3A
Other languages
Chinese (zh)
Other versions
CN111866571A (en)
Inventor
李政
马璇
王博
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202010622171.3A
Publication of CN111866571A
Application granted
Publication of CN111866571B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/4104 Peripherals receiving signals from specially adapted client devices
    • H04N21/4126 The peripheral being portable, e.g. PDAs or mobile phones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4786 Supplemental services, e.g. displaying phone caller identification, shopping application e-mailing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The disclosure relates to a method, an apparatus, and a storage medium for editing content on a smart television. The method is applied to a mobile terminal and includes: acquiring an editing interface call instruction sent by the smart television, where the editing interface call instruction is sent by the smart television in response to displaying a first content edit box; displaying an editing interface, where the editing interface includes a second content edit box; and receiving user-edited content in the second content edit box and displaying it in the first content edit box. Through the embodiments of the disclosure, inputting text content on the smart television becomes more convenient and the user experience is improved.

Description

Method and device for editing content on smart television and storage medium
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to a method and an apparatus for editing content on a smart television, and a storage medium.
Background
With the development of television technology, smart televisions are used more and more widely and provide increasingly diverse functions. For example, a smart television may provide program searching, web browsing, instant chat, and the like. However, many of these functions require the user to input text content on the smart television.
In the related art, to edit content on a smart television, a virtual keyboard is usually displayed on the display interface of the smart television, and the user selects the characters to be input on the virtual keyboard with a remote controller to input text content. Because the input method on the television is operated through the remote controller, the user has to press the direction keys and the confirmation key of the remote controller repeatedly to complete text input in an input box, which is inconvenient and results in a poor user experience.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a method, an apparatus, and a storage medium for editing content on a smart television.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for editing content on a smart television, applied to a mobile terminal, the method including: acquiring an editing interface call instruction sent by the smart television, where the editing interface call instruction is sent by the smart television in response to displaying a first content edit box; displaying an editing interface, where the editing interface includes a second content edit box; and receiving user-edited content in the second content edit box and displaying it in the first content edit box.
In one embodiment, receiving user-edited content in the second content edit box and displaying it in the first content edit box includes: receiving user-edited content in the second content edit box and synchronously displaying it in the first content edit box; or receiving user-edited content in the second content edit box and displaying the final edited content in the first content edit box.
In one embodiment, receiving user-edited content in the second content edit box and synchronously displaying it in the first content edit box includes: in response to receiving an editing instruction from the user in the second content edit box, correspondingly displaying the edited content in the second content edit box according to the editing instruction; and sending the editing instruction to the smart television so that the edited content is correspondingly displayed in the first content edit box.
In one embodiment, the editing content includes characters and/or graphics.
In one embodiment, the editing interface call instruction includes attribute information of the first content edit box, and displaying the editing interface that includes the second content edit box includes: determining and displaying an editing interface according to the attribute information, where the editing interface includes a second content edit box with the same attributes as the first content edit box; the attribute information characterizes whether the first content edit box is used for text editing and/or for graphic editing.
In one embodiment, the editing interface call instruction includes control information of editing controls used for editing operations in the first content edit box, the control information including at least one or more of the following: control type, control number, and control position; the method for editing content on the smart television further includes: generating controls according to the control information and generating an editing interface, where the editing interface includes a second content edit box and the generated controls for performing editing operations in the second content edit box.
In one embodiment, the method for editing content on the smart television further includes: saving the editing interface; and displaying the editing interface includes: calling and displaying the saved editing interface.
According to a second aspect of the embodiments of the present disclosure, there is provided a method for editing content on a smart television, applied to the smart television, the method including: in response to displaying a first content edit box, sending an editing interface call instruction to the mobile terminal; and displaying user-edited content in the first content edit box, where the user-edited content is received in a second content edit box of an editing interface of the mobile terminal.
In one embodiment, displaying user-edited content in the first content edit box includes: synchronously displaying, in the first content edit box, the user-edited content received in the second content edit box.
In one embodiment, the editing interface calling instruction comprises attribute information of the first content editing box, and the attribute information represents that the first content editing box is used for text editing and/or graphic editing; and/or the editing interface calling instruction comprises control information of an editing control used for editing operation in the first content editing frame; the control information at least comprises one or more of the following: control type, control number and control position.
According to a third aspect of the embodiments of the present disclosure, there is provided an apparatus for editing content on a smart television, applied to a mobile terminal, the apparatus including: an acquisition module configured to acquire an editing interface call instruction sent by the smart television, where the editing interface call instruction is sent by the smart television in response to displaying a first content edit box; a receiving module configured to receive user-edited content in a second content edit box; and a display module configured to display an editing interface, where the editing interface includes the second content edit box and the received user-edited content is displayed in the first content edit box.
In one embodiment, the apparatus further includes a synchronization module, where the receiving module is configured to receive user-edited content in the second content edit box and the synchronization module is configured to synchronously display it in the first content edit box; or the receiving module is configured to receive user-edited content in the second content edit box and the synchronization module is configured to display the final edited content in the first content edit box.
In one embodiment, in response to the receiving module receiving an editing instruction from the user in the second content edit box, the synchronization module correspondingly displays the edited content in the second content edit box according to the editing instruction, sends the editing instruction to the smart television, and correspondingly displays the edited content in the first content edit box.
In one embodiment, the editing content includes characters and/or graphics.
In one embodiment, the editing interface calling instruction includes attribute information of the first content edit box; the display module displays an editing interface in the following mode: determining and displaying an editing interface according to the attribute information, wherein the editing interface comprises a second content editing frame with the same attribute as the first content editing frame; the attribute information characterizes the first content edit box for text editing and/or for graphical editing.
In one embodiment, the editing interface calling instruction comprises control information of an editing control used for editing operation in the first content editing frame; the control information at least comprises one or more of the following: control type, control number and control position; the device for editing content on the smart television further comprises: and the generating module is used for generating a control according to the control information and generating an editing interface, wherein the editing interface comprises a second content editing frame and the generated control for performing editing operation in the second content editing frame.
In one embodiment, the apparatus for editing content on a smart television further includes: a saving module configured to save the editing interface; the display module displays the editing interface in the following way: calling and displaying the saved editing interface.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an apparatus for editing content on a smart television, which is applied to the smart television, the apparatus including: the display module is used for displaying a first content editing frame and displaying user editing content in the first content editing frame, wherein the user editing content is received in a second content editing frame of an editing interface of the mobile terminal; and the sending module is used for responding to the display of the first content editing frame and sending an editing interface calling instruction to the mobile terminal.
In one embodiment, the display module displays the user-edited content in the first content edit box in the following manner: and synchronously displaying the user editing content received in the second content editing frame in the first content editing frame.
In one embodiment, the editing interface calling instruction comprises attribute information of the first content editing box, and the attribute information represents that the first content editing box is used for text editing and/or graphic editing; and/or the editing interface calling instruction comprises control information of an editing control used for editing operation in the first content editing frame; the control information at least comprises one or more of the following: control type, control number and control position.
According to a fifth aspect of the embodiments of the present disclosure, there is provided an apparatus for editing content on a smart television, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to perform the method for editing content on the smart television according to the first aspect or any one of the implementations of the first aspect.
According to a sixth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided, where instructions in the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the method for editing content on a smart television set described in the first aspect or any one of the implementations of the first aspect.
According to a seventh aspect of the embodiments of the present disclosure, there is provided an apparatus for editing content on a smart television, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to perform the method for editing content on the smart television according to the second aspect or any one of the implementations of the second aspect.
According to an eighth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided, where instructions in the storage medium, when executed by a processor of a smart television, enable the smart television to perform the method for editing content on a smart television described in the second aspect or any one of the implementations of the second aspect.
The technical solutions provided by the embodiments of the disclosure can have the following beneficial effects: through the embodiments of the disclosure, a first content edit box is displayed on the smart television. When text content needs to be input in the first content edit box, the smart television sends an editing interface call instruction to the terminal that controls the smart television. An editing page containing a second content edit box is displayed on the terminal, the user edits the content on the terminal, and the edited content is displayed on the smart television, which makes inputting text content on the smart television more convenient and improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a method of editing content on a smart tv according to an exemplary embodiment of the present disclosure.
Fig. 2a and 2b are schematic diagrams illustrating a method for editing content on a smart television according to an exemplary embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a method of editing content on a smart tv according to still another exemplary embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating a method of editing content on a smart tv according to still another exemplary embodiment of the present disclosure.
Fig. 5 is a flowchart illustrating a method of editing content on a smart tv according to still another exemplary embodiment of the present disclosure.
Fig. 6 is a flowchart illustrating a method of editing content on a smart tv according to still another exemplary embodiment of the present disclosure.
Fig. 7 is a flowchart illustrating a method of editing content on a smart tv according to still another exemplary embodiment of the present disclosure.
Fig. 8 is a block diagram illustrating an apparatus for editing content on a smart tv according to an exemplary embodiment of the present disclosure.
Fig. 9 is a block diagram illustrating an apparatus for editing content on a smart tv according to still another exemplary embodiment of the present disclosure.
Fig. 10 is a block diagram illustrating an apparatus for editing content on a smart tv according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
With the development of television technology, smart televisions are used more and more widely and provide increasingly diverse functions. For example, a smart television may provide program searching, web browsing, instant chat, and the like. Implementing these functions sometimes requires the user to input text content on the smart television. In the related art, to edit content on a smart television, a virtual keyboard is usually displayed on the display interface of the smart television, and the user selects the characters to be input on the virtual keyboard with a remote controller to input text content. Operating the input method on the television through the remote controller requires the user to press the direction keys and the confirmation key repeatedly to complete text input in the input box. The input is carried out character by character, each character requires multiple key presses and selections, and the operation is inconvenient for the user.
In view of this, the present disclosure provides a method for editing content on a smart television: when a content edit box is displayed on the smart television, the smart television sends an editing interface call instruction to the terminal that controls it, and an editing page containing a content edit box is displayed on the terminal. The user edits content in the content edit box displayed on the terminal, and the edited content is displayed in the content edit box on the smart television, which makes content editing on the smart television more convenient and improves the user's terminal experience.
For convenience of description, the content edit box displayed on the smart television is referred to as a first content edit box, and the content edit box displayed on the terminal is referred to as a second content edit box.
Fig. 1 is a flowchart illustrating a method for editing content on a smart television according to an exemplary embodiment, as shown in fig. 1, the method for editing content on a smart television is used in a terminal, and the terminal may be, for example, a smartphone, a tablet, a wearable device, and the like. The method for editing the content on the intelligent television comprises the following steps.
In step S101, an editing interface call instruction sent by the smart television is obtained, and the editing interface call instruction is sent by the smart television in response to displaying the first content editing frame.
In the embodiments of the disclosure, an association needs to be established between the smart television and the terminal that controls it. When the smart television and the terminal belong to the same local area network and the association is established for the first time, the user can be prompted to confirm whether to associate with the current control terminal. After the user confirms, the smart television is associated with the terminal directly in subsequent use.
The smart television and the controlling terminal can be connected through a wireless network, Bluetooth, or other connection modes to quickly establish the association.
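As an illustration only (not part of the patent text), the following minimal Kotlin sketch models this association flow on the terminal side, assuming hypothetical PairingStore, DeviceId, and confirmWithUser names: the user is prompted only the first time, and later associations are established directly.

```kotlin
// Minimal sketch of first-time pairing vs. automatic re-association (hypothetical names).
data class DeviceId(val value: String)

class PairingStore {
    private val paired = mutableSetOf<DeviceId>()
    fun isPaired(tv: DeviceId) = tv in paired
    fun remember(tv: DeviceId) { paired += tv }
}

/**
 * Returns true when the terminal and the smart TV end up associated.
 * On the first association the user is asked to confirm; afterwards
 * the association is established directly.
 */
fun associate(
    tv: DeviceId,
    sameLan: Boolean,
    store: PairingStore,
    confirmWithUser: (DeviceId) -> Boolean
): Boolean {
    if (!sameLan) return false              // devices must share a LAN (or another link such as Bluetooth)
    if (store.isPaired(tv)) return true     // subsequent use: associate directly
    return confirmWithUser(tv).also { ok -> // first use: prompt the user to confirm
        if (ok) store.remember(tv)
    }
}

fun main() {
    val store = PairingStore()
    val tv = DeviceId("living-room-tv")
    println(associate(tv, sameLan = true, store = store) { true })  // first time: user confirms
    println(associate(tv, sameLan = true, store = store) { false }) // later: no prompt is needed
}
```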
In the embodiments of the disclosure, when the first content edit box is displayed on the smart television, it indicates that the user needs to edit content such as text on the smart television. When the smart television displays the first content edit box for content editing, it sends an editing interface call instruction to the terminal so that the editing interface is called up on the terminal.
In step S102, an editing interface is displayed, and the editing interface includes a second content editing frame.
In the embodiment of the disclosure, the terminal receives an editing interface calling instruction sent by the smart television and displays an editing interface. And the user edits the content in the second content editing frame of the editing interface. It can be understood that the content edited by the user in the second content editing box at the terminal is the content that the user needs to edit on the smart television.
In step S103, the user-edited content is received in the second content edit box and displayed in the first content edit box.
In the embodiments of the present disclosure, the edited content may include characters, may include graphics, or may include both characters and graphics.
When the edited content includes characters, it can be understood as inputting text content; in such embodiments, the user inputs text content on the smart television.
According to the embodiments of the disclosure, when the first content edit box is displayed on the smart television, the smart television sends an editing interface call instruction to the terminal that controls it, and an editing page containing a second content edit box is displayed on the terminal. The user edits content in the second content edit box of the editing page on the terminal, and the edited content is displayed on the smart television, which makes content editing on the smart television more convenient and improves the user's terminal experience.
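To make the flow of steps S101 to S103 concrete, the sketch below models the terminal side as a small session object; the message types EditInterfaceCall and EditedContent and the two callbacks are assumptions introduced here for illustration, not terminology from the patent.

```kotlin
// Sketch of the terminal-side flow of Fig. 1 (hypothetical message and callback names).

/** Instruction the smart TV sends when it displays the first content edit box (step S101). */
data class EditInterfaceCall(val tvId: String, val attribute: String? = null)

/** Content edited by the user in the second content edit box, sent back to the TV (step S103). */
data class EditedContent(val text: String)

class MobileEditingSession(
    private val showEditingInterface: () -> Unit,   // S102: show interface with the 2nd edit box
    private val sendToTv: (EditedContent) -> Unit   // S103: content is shown in the 1st edit box
) {
    fun onCallInstruction(call: EditInterfaceCall) {
        // S101: the editing interface call instruction has been obtained from the smart TV.
        println("call instruction obtained from ${call.tvId}")
        showEditingInterface()                      // S102
    }

    fun onUserEdited(text: String) {
        // S103: user-edited content received in the second edit box is forwarded
        // so that the smart TV can display it in the first edit box.
        sendToTv(EditedContent(text))
    }
}

fun main() {
    val session = MobileEditingSession(
        showEditingInterface = { println("editing interface with second edit box shown") },
        sendToTv = { println("sent to TV for the first edit box: ${it.text}") }
    )
    session.onCallInstruction(EditInterfaceCall(tvId = "tv-1"))
    session.onUserEdited("hello")
}
```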
In one implementation of the embodiments of the present disclosure, step S103 in fig. 1, receiving user-edited content in the second content edit box and displaying it in the first content edit box, may be receiving user-edited content in the second content edit box and synchronously displaying it in the first content edit box.
In the embodiments of the disclosure, when the first content edit box is displayed on the smart television, it indicates that the user needs to input text content on the smart television. When the smart television displays the first content edit box for text input, it sends an editing interface call instruction to the terminal so that an editing interface including the second content edit box is displayed on the terminal. The user can then conveniently edit text in the second content edit box on the terminal and complete the text input.
When user-edited content is received in the second content edit box and synchronously displayed in the first content edit box, the first content edit box on the smart television shows, in real time, the editing operations the user performs in the second content edit box on the terminal. For example, when the user inputs characters, the spelling process is displayed on the smart television in real time. As another example, when the user draws a graphic, the drawing process is displayed on the smart television in real time.
In another implementation of the embodiments of the present disclosure, receiving user-edited content in the second content edit box and displaying it in the first content edit box may be receiving user-edited content in the second content edit box and displaying the final edited content in the first content edit box. In this case, the user edits and inputs the text content in the second content edit box on the terminal, and after the editing is finished, the final content is displayed in the first content edit box on the smart television. For example, when the user inputs characters, only the finally input characters are displayed on the smart television. As another example, when the user draws a graphic, the completed graphic is displayed on the smart television.
According to the embodiments of the disclosure, receiving the user-edited content in the second content edit box and synchronously displaying it in the first content edit box allows the edited content to be displayed in real time, so that the user can confirm the correctness of the editing on the smart television screen in real time. Receiving the user-edited content in the second content edit box and displaying only the final edited content in the first content edit box allows the user to complete the editing operation directly on the terminal, avoiding switching the line of sight between the smart television and the terminal during text input and providing a convenient experience.
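For illustration, the two display modes described above could be expressed as a mode flag on the terminal side, as in the following sketch; DisplayMode, EditBuffer, and the sendToTv callback are hypothetical names.

```kotlin
// Sketch of synchronous vs. final-content display (hypothetical names).
enum class DisplayMode { SYNCHRONOUS, FINAL_ONLY }

class EditBuffer(
    private val mode: DisplayMode,
    private val sendToTv: (String) -> Unit   // updates the first content edit box on the TV
) {
    private val builder = StringBuilder()

    /** Called for every edit the user makes in the second content edit box. */
    fun onEdit(chunk: String) {
        builder.append(chunk)
        if (mode == DisplayMode.SYNCHRONOUS) sendToTv(builder.toString())
    }

    /** Called when the user finishes editing (e.g. taps a "done" control). */
    fun onFinish() {
        if (mode == DisplayMode.FINAL_ONLY) sendToTv(builder.toString())
    }
}

fun main() {
    val sync = EditBuffer(DisplayMode.SYNCHRONOUS) { println("TV shows (live): $it") }
    listOf("T", "V", " series").forEach(sync::onEdit)   // spelled out in real time

    val final = EditBuffer(DisplayMode.FINAL_ONLY) { println("TV shows (final): $it") }
    listOf("T", "V", " series").forEach(final::onEdit)
    final.onFinish()                                     // only the final text is sent
}
```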
Fig. 2a and 2b are schematic diagrams illustrating a method for editing content on a smart television according to an exemplary embodiment of the present disclosure.
Referring to fig. 2a, fig. 2a illustrates a scenario in which a user watches a TV series on the smart television and inputs the characters "TV series…" to search for the TV series on the smart television. The user inputs the characters "TV series…" on the mobile phone, and the content edited on the mobile phone is displayed on the smart television, which makes content editing on the smart television more convenient and improves the user's terminal experience. Fig. 2a is a schematic diagram in which the user-edited content is received in the second content edit box on the mobile phone and synchronously displayed in the first content edit box on the smart television according to an embodiment of the present disclosure.
Fig. 2b shows a scenario in which the user performs a drawing operation while using the smart television: the user draws a butterfly graphic in the second content edit box on the mobile phone, the drawn content is displayed on the smart television, and thus the user draws on the mobile phone and the drawn graphic is displayed on the smart television.
Fig. 3 is a flowchart illustrating a method for editing content on a smart tv according to an exemplary embodiment, and as shown in fig. 3, step S103 in fig. 1 includes the following steps.
In step S1031, an editing instruction of the user is received in the second content editing frame, and the edited content is correspondingly displayed in the second content editing frame according to the editing instruction.
In the embodiments of the disclosure, when the user performs an operation of inputting text content on the smart television, a first content edit box is displayed on the smart television, and the text content input by the user is displayed in the first content edit box. The terminal correspondingly displays the edited content in the second content edit box according to the user's editing instruction.
In step S1032, the editing instruction is sent to the smart television, and the edited content is correspondingly displayed in the first content editing frame.
The user inputs editing instructions in the second content edit box on the terminal to edit the text content; the editing instructions are sent to the smart television, and the edited content is correspondingly displayed in the first content edit box. When user-edited content is received in the second content edit box and synchronously displayed in the first content edit box, each editing instruction received in the second content edit box is sent to the smart television as it is received, and the edited content is correspondingly displayed in the first content edit box.
When only the final edited content is to be displayed in the first content edit box, the user-edited content is received in the second content edit box, and when the editing is finished, the editing instruction is sent to the smart television and the edited content is correspondingly displayed in the first content edit box.
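A possible reading of steps S1031 and S1032 is sketched below: each editing instruction is applied to the terminal's second content edit box and forwarded so that the smart television can apply the same instruction to the first content edit box. The instruction types Insert and DeleteLast are assumptions made for the sketch.

```kotlin
// Sketch of steps S1031/S1032 with hypothetical editing-instruction types.
sealed interface EditInstruction
data class Insert(val text: String) : EditInstruction
object DeleteLast : EditInstruction

/** Applies an instruction to the text currently shown in an edit box. */
fun apply(current: String, instruction: EditInstruction): String = when (instruction) {
    is Insert -> current + instruction.text
    DeleteLast -> current.dropLast(1)
}

class TerminalEditBox(private val forwardToTv: (EditInstruction) -> Unit) {
    var content: String = ""
        private set

    fun onInstruction(instruction: EditInstruction) {
        content = apply(content, instruction)   // S1031: show edited content in the 2nd box
        forwardToTv(instruction)                // S1032: TV applies it to the 1st box
    }
}

fun main() {
    var tvContent = ""                          // what the first content edit box shows
    val box = TerminalEditBox { tvContent = apply(tvContent, it) }
    box.onInstruction(Insert("TV serie"))
    box.onInstruction(Insert("ss"))
    box.onInstruction(DeleteLast)
    println("terminal: ${box.content}, tv: $tvContent")  // both read "TV series"
}
```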
In one embodiment, the edited content includes characters and/or graphics. The user performs the content input operation for the smart television by inputting the edited content on the terminal. When text content needs to be edited on the smart television, the edited content can be characters; when graphic content needs to be input on the smart television, the edited content can be graphics; and when both text content and graphic content need to be displayed on the smart television, the edited content can be characters and graphics.
In one embodiment, the editing interface call instruction includes attribute information of the first content edit box. Fig. 4 is a flowchart illustrating a method of editing content on a smart tv according to still another exemplary embodiment, where the method of editing content on a smart tv, as illustrated in fig. 4, includes the following steps.
In step S301, an editing interface call instruction sent by the smart television is obtained, where the editing interface call instruction is sent by the smart television in response to displaying the first content editing box.
In step S302, an editing interface is determined according to the attribute information and displayed, and the editing interface includes a second content editing frame having the same attribute as the first content editing frame.
In an embodiment of the disclosure, the attribute information characterizes the first content edit box for text editing and/or for graphic editing.
In the embodiments of the disclosure, when the user needs to perform text input on the smart television, the smart television sends an editing interface call instruction. The call instruction sent by the smart television includes attribute information of the first content edit box, and the terminal determines a second content edit box with the same attributes as the first content edit box according to that attribute information. For example, the attribute information of the first content edit box may indicate a text attribute or a graphic attribute. When the first content edit box has a text attribute, the second content edit box also has a text attribute; when the first content edit box has a graphic attribute, the second content edit box also has a graphic attribute. The editing interface is then determined according to the attribute information of the second content edit box and displayed. The user edits graphics or text in the editing interface on the terminal, so that the corresponding graphic or text content is displayed on the smart television.
In step S303, the user-edited content is received in the second content edit box and displayed in the first content edit box.
In the embodiments of the disclosure, when the user performs an operation of inputting text content on the smart television, the first content edit box is displayed on the smart television, and an editing interface call instruction including the attribute information of the first content edit box is sent to the terminal. The attribute information characterizes whether the first content edit box is used for text editing and/or for graphic editing. On the terminal, an editing interface including a second content edit box with the same attributes as the first content edit box is determined according to the attribute information and displayed. The user edits content in the second content edit box of the editing page on the terminal, the same edited content is displayed on the smart television, and the consistency of the edited content displayed on the terminal and on the smart television is ensured.
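As one possible illustration, the attribute information could be mapped to the kind of second content edit box as follows; EditBoxAttributes and InterfaceKind are hypothetical names, not the patent's terminology.

```kotlin
// Sketch: choosing the editing interface from the attribute information (hypothetical names).
data class EditBoxAttributes(val text: Boolean, val graphic: Boolean)

enum class InterfaceKind { TEXT, GRAPHIC, TEXT_AND_GRAPHIC }

/** Picks the kind of second content edit box matching the first edit box's attributes. */
fun interfaceFor(attributes: EditBoxAttributes): InterfaceKind = when {
    attributes.text && attributes.graphic -> InterfaceKind.TEXT_AND_GRAPHIC
    attributes.graphic -> InterfaceKind.GRAPHIC
    else -> InterfaceKind.TEXT   // default to a text edit box
}

fun main() {
    println(interfaceFor(EditBoxAttributes(text = true, graphic = false)))  // TEXT
    println(interfaceFor(EditBoxAttributes(text = true, graphic = true)))   // TEXT_AND_GRAPHIC
}
```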
In one embodiment, the editing interface calling instruction comprises control information of an editing control used for editing operation in the first content editing frame, and the control information at least comprises one or more of the following: control type, control number and control position.
In the embodiments of the present disclosure, the editing controls for performing editing operations may be the virtual keys included in the content edit box for text input, for example letter keys, a confirmation key, a switch key, a delete key, and the like. When the user inputs text content on the smart television, the first content edit box is displayed on the smart television and an editing interface call instruction is sent to the terminal, where the call instruction includes the control information of the editing controls. The terminal then generates an editing interface including a second content edit box according to the received control information, that is, according to the control type, control number, control position, and the like, so that the user can input the text content in the second content edit box on the terminal.
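For illustration, the control information carried by the call instruction (control type, control number, and control position) might be represented and turned into virtual controls as in the sketch below; none of these type or field names come from the patent.

```kotlin
// Sketch: generating editing controls from the control information (hypothetical names).
data class ControlInfo(
    val type: String,              // e.g. "letter", "confirm", "switch", "delete"
    val number: Int,               // identifier or ordinal of the control
    val position: Pair<Int, Int>   // row/column in the editing interface layout
)

data class VirtualControl(val label: String, val row: Int, val column: Int)

fun buildControls(infos: List<ControlInfo>): List<VirtualControl> =
    infos.map { info ->
        val label = when (info.type) {
            "confirm" -> "OK"
            "delete" -> "DEL"
            "switch" -> "ABC/123"
            else -> "key-${info.number}"
        }
        VirtualControl(label, info.position.first, info.position.second)
    }

fun main() {
    val infos = listOf(
        ControlInfo("letter", 1, 0 to 0),
        ControlInfo("delete", 2, 0 to 9),
        ControlInfo("confirm", 3, 3 to 9)
    )
    buildControls(infos).forEach(::println)
}
```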
Fig. 5 is a flowchart illustrating a method of editing content on a smart tv according to still another exemplary embodiment, where the method of editing content on a smart tv, as shown in fig. 5, includes the following steps.
In step S401, an editing interface call instruction sent by the smart television is obtained, where the editing interface call instruction is sent by the smart television in response to displaying the first content edit box.
In step S402, a control is generated according to the control information, and an editing interface is generated, where the editing interface includes a second content editing box and the generated control for performing an editing operation in the second content editing box.
In the embodiment of the disclosure, when a user performs an operation of inputting text content on the smart television, a first content editing box is displayed on the smart television, and an editing interface calling instruction is sent to the terminal, where the editing interface calling instruction includes control information of an editing control. And generating an editing interface comprising a second content editing frame at the terminal according to the received control information, namely according to the control type, the control number, the control position and the like.
In step S403, an editing interface is displayed, and the editing interface includes a second content editing frame.
In step S404, the user-edited content is received in the second content edit box and displayed in the first content edit box.
According to the embodiments of the disclosure, when the smart television sends the editing interface call instruction to the terminal, the control information of the editing controls is carried in the instruction. The terminal generates the controls for editing operations in the second content edit box according to the control information, so that content is edited for the smart television through the generated editing controls.
Fig. 6 is a flowchart illustrating a method of editing content on a smart tv according to still another exemplary embodiment, where the method of editing content on a smart tv, as shown in fig. 6, includes the following steps.
In step S501, an editing interface call instruction sent by the smart television is obtained, where the editing interface call instruction is sent by the smart television in response to displaying the first content editing box.
In step S502, a control is generated according to the control information, and an editing interface is generated, where the editing interface includes a second content editing box and the generated control for performing an editing operation in the second content editing box.
In the embodiment of the disclosure, when a user performs an operation of inputting text content on the smart television, a first content editing box is displayed on the smart television, and an editing interface calling instruction is sent to the terminal, where the editing interface calling instruction includes control information of an editing control for performing an editing operation in the first content editing box. And at the terminal, generating a control according to the control information and generating an editing interface.
In step S503, the editing interface is saved.
In an implementation manner of the embodiment of the present disclosure, the terminal generates a control according to the control information and generates an editing interface. The generated editing interface is stored, and the editing interface matched with the intelligent television can be directly called in the subsequent use process, so that the calling speed of the editing interface at the terminal is increased.
It can be understood that, alternatively, when the editing interface call instruction sent by the smart television is obtained, the controls can be generated in real time according to the control information and the editing interface generated on the spot, which saves storage space at the cost of generating the editing interface each time rather than calling a saved one.
In step S504, the saved editing interface is called and displayed, and the editing interface includes a second content editing frame.
In the embodiment of the disclosure, the terminal receives an editing interface calling instruction sent by the smart television, calls the stored editing interface and displays the editing interface, and inputs text content in a second content editing box of the editing interface.
In step S505, the user-edited content is received in the second content edit box and displayed in the first content edit box.
According to the embodiments of the disclosure, the editing interface is generated according to the control information, sent by the smart television to the terminal, that determines the editing controls, and the generated editing interface is saved, so that the editing interface matching the smart television can be called directly in subsequent use and the editing interface is brought up faster on the terminal.
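One possible way to realize saving and reusing the generated editing interface is a small cache keyed by the smart television and its control information, as sketched below; the cache key and the types are assumptions made for illustration.

```kotlin
// Sketch: saving the generated editing interface and reusing it on later calls (hypothetical types).
data class ControlInfo(val type: String, val number: Int, val position: Pair<Int, Int>)
data class EditingInterfaceLayout(val controls: List<ControlInfo>)

class EditingInterfaceCache {
    private val saved = mutableMapOf<String, EditingInterfaceLayout>()

    /** Returns the saved interface for this TV/control set, generating and saving it once. */
    fun interfaceFor(tvId: String, controls: List<ControlInfo>): EditingInterfaceLayout {
        val key = "$tvId:${controls.hashCode()}"
        return saved.getOrPut(key) {
            println("generating editing interface for $tvId")   // only happens on the first call
            EditingInterfaceLayout(controls)
        }
    }
}

fun main() {
    val cache = EditingInterfaceCache()
    val controls = listOf(ControlInfo("confirm", 1, 0 to 0))
    cache.interfaceFor("tv-1", controls)   // generated and saved
    cache.interfaceFor("tv-1", controls)   // returned directly from the saved interface
}
```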
Fig. 7 is a flowchart illustrating a method for editing content on a smart tv according to an exemplary embodiment, where the method for editing content on a smart tv, as shown in fig. 7, is applied to a smart tv, and includes the following steps.
In step S601, in response to displaying the first content editing frame, an editing interface call instruction is sent to the mobile terminal.
In the embodiment of the disclosure, when the smart television displays the first content editing box for inputting the text content, an editing interface calling instruction is sent to the terminal so as to call the editing interface at the terminal.
In step S602, user editing content is displayed in a first content editing box, the user editing content being received in a second content editing box of the editing interface of the mobile terminal.
In the embodiment of the disclosure, the terminal receives an editing interface calling instruction sent by the smart television, displays an editing interface, and inputs text content in a second content editing box of the editing interface. It is understood that the text content input in the second content edit box displayed at the terminal is the text content that the user needs to input on the smart tv.
According to the embodiments of the disclosure, receiving the user-edited content in the second content edit box and synchronously displaying it in the first content edit box allows the edited content to be displayed in real time, so that the user can confirm the correctness of the editing on the smart television screen in real time. Receiving the user-edited content in the second content edit box and displaying only the final edited content in the first content edit box allows the user to complete the editing operation directly on the terminal, avoiding switching the line of sight between the smart television and the terminal during text input and providing a convenient experience.
In one embodiment, user-edited content received in the second content edit box is displayed synchronously in the first content edit box.
In the embodiment of the disclosure, a user edits a text in the second content edit box of the terminal, finishes text input, and synchronously displays the edited content in the first content edit box at the smart television, so that text input operation is performed at the terminal, and the edited content is displayed at the smart television, thereby facilitating content editing operation performed on the smart television and improving terminal experience of the user.
In one embodiment, the editing interface calling instruction comprises attribute information of the first content editing box, and the attribute information represents that the first content editing box is used for text editing and/or graphic editing; and/or the editing interface calling instruction comprises control information of an editing control used for editing operation in the first content editing frame, wherein the control information at least comprises one or more of the following: control type, control number and control position.
In the embodiments of the disclosure, when the user performs an operation of inputting text content on the smart television, the first content edit box is displayed on the smart television, and an editing interface call instruction is sent to the terminal, where the call instruction includes attribute information of the first content edit box, that is, the attribute information determines whether the first content edit box is used for text editing, for graphic editing, or for both text editing and graphic editing.
The editing interface call instruction can also include control information of the editing controls used for editing operations in the first content edit box. The terminal generates an editing interface including a second content edit box according to the received control information, that is, according to the control type, control number, control position, and the like.
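On the smart-television side, the call instruction carrying the attribute information and/or control information might be assembled as in the following sketch; the field names are assumptions for illustration and do not reflect an actual wire format.

```kotlin
// Sketch: smart-TV side building the editing interface call instruction (hypothetical fields).
data class ControlInfo(val type: String, val number: Int, val position: Pair<Int, Int>)

data class EditInterfaceCallInstruction(
    val textEditing: Boolean,            // attribute info: first edit box used for text editing
    val graphicEditing: Boolean,         // attribute info: first edit box used for graphic editing
    val controls: List<ControlInfo>      // control info for the editing controls, may be empty
)

/** Called when the smart TV displays the first content edit box (step S601). */
fun onFirstEditBoxDisplayed(
    textEditing: Boolean,
    graphicEditing: Boolean,
    controls: List<ControlInfo>,
    sendToTerminal: (EditInterfaceCallInstruction) -> Unit
) {
    sendToTerminal(EditInterfaceCallInstruction(textEditing, graphicEditing, controls))
}

fun main() {
    onFirstEditBoxDisplayed(
        textEditing = true,
        graphicEditing = false,
        controls = listOf(ControlInfo("confirm", 1, 0 to 0))
    ) { println("instruction sent to the mobile terminal: $it") }
}
```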
Based on the same conception, the embodiment of the disclosure also provides a device for editing contents on the smart television.
It can be understood that, in order to implement the above functions, the apparatus for editing content on a smart television provided by the embodiments of the present disclosure includes a hardware structure and/or a software module corresponding to the execution of each function. The disclosed embodiments can be implemented in hardware or a combination of hardware and computer software, in combination with the exemplary elements and algorithm steps disclosed in the disclosed embodiments. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Fig. 8 is a block diagram illustrating an apparatus for editing content on a smart tv according to an exemplary embodiment. Referring to fig. 8, the apparatus 100 for editing content on a smart television includes an obtaining module 101, a receiving module 102 and a display module 103.
The obtaining module 101 is configured to obtain an editing interface call instruction sent by the smart television, where the editing interface call instruction is sent by the smart television in response to displaying the first content edit box.
A receiving module 102, configured to receive the user editing content in the second content editing box.
The display module 103 is configured to display an editing interface, where the editing interface includes a second content edit box, and the received user-edited content is displayed in the first content edit box.
In an embodiment, the apparatus 100 for editing content on a smart television further includes a synchronization module 104. The receiving module 102 is configured to receive user-edited content in the second content edit box, and the synchronization module 104 is configured to synchronously display it in the first content edit box; or the receiving module 102 is configured to receive user-edited content in the second content edit box, and the synchronization module 104 is configured to display the final edited content in the first content edit box.
In one embodiment, in response to the receiving module 102 receiving an editing instruction of the user in the second content editing frame, the synchronization module 104 correspondingly displays the edited content in the second content editing frame according to the editing instruction; and sending the editing instruction to the smart television, and correspondingly displaying the editing content in the first content editing frame.
In one embodiment, the editing content includes characters and/or graphics.
In one embodiment, the editing interface call instruction includes attribute information of the first content edit box. The display module displays an editing interface in the following mode: and determining and displaying an editing interface according to the attribute information, wherein the editing interface comprises a second content editing frame with the same attribute as the first content editing frame. The attribute information characterizes the first content edit box for text editing and/or for graphic editing.
In one embodiment, the editing interface call instruction includes control information for an editing control for performing an editing operation in the first content edit box. The control information at least comprises one or more of the following: control type, control number and control position. The apparatus 100 for editing content on a smart television further includes a generating module 105.
The generating module 105 is configured to generate controls according to the control information and to generate an editing interface, where the editing interface includes a second content edit box and the generated controls for performing editing operations in the second content edit box.
The apparatus 100 for editing content on a smart tv further includes a saving module 106.
A saving module 106 configured to save the editing interface.
The display module 103 displays the editing interface in the following manner: calling and displaying the saved editing interface.
Fig. 9 is a block diagram illustrating an apparatus for editing content on a smart tv according to an exemplary embodiment. Referring to fig. 9, the apparatus 200 for editing content on a smart tv is applied to the smart tv, and the apparatus 200 for editing content on the smart tv includes a display module 201 and a transmission module 202.
The display module 201 is configured to display a first content edit box and display user edit content in the first content edit box, where the user edit content is received in a second content edit box of the editing interface of the mobile terminal.
The sending module 202 is configured to send an edit interface call instruction to the mobile terminal in response to displaying the first content edit box.
In one embodiment, the display module displays the user-edited content in the first content edit box in the following manner: and synchronously displaying the user editing content received in the second content editing frame in the first content editing frame.
In an embodiment, the editing interface call instruction comprises attribute information of the first content edit box, and the attribute information represents that the first content edit box is used for text editing and/or used for graphic editing. And/or the editing interface calling instruction comprises control information of an editing control used for editing operation in the first content editing frame. The control information at least comprises one or more of the following: control type, control number and control position.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 10 is a block diagram illustrating an apparatus 300 for editing content on a smart tv according to an example embodiment. For example, the apparatus 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 10, the apparatus 300 may include one or more of the following components: a processing component 302, a memory 304, a power component 306, a multimedia component 308, an audio component 310, an input/output (I/O) interface 312, a sensor component 314, and a communication component 316.
The processing component 302 generally controls overall operation of the device 300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 302 may include one or more processors 320 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 302 can include one or more modules that facilitate interaction between the processing component 302 and other components. For example, the processing component 302 may include a multimedia module to facilitate interaction between the multimedia component 308 and the processing component 302.
The memory 304 is configured to store various types of data to support operations at the apparatus 300. Examples of such data include instructions for any application or method operating on device 300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 304 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 306 provide power to the various components of device 300. The power components 306 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 300.
The multimedia component 308 includes a screen that provides an output interface between the device 300 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 308 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 300 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 310 is configured to output and/or input audio signals. For example, audio component 310 includes a Microphone (MIC) configured to receive external audio signals when apparatus 300 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 304 or transmitted via the communication component 316. In some embodiments, audio component 310 also includes a speaker for outputting audio signals.
The I/O interface 312 provides an interface between the processing component 302 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 314 includes one or more sensors for providing various aspects of status assessment for the device 300. For example, sensor assembly 314 may detect the open/closed status of device 300, the relative positioning of components, such as a display and keypad of device 300, the change in position of device 300 or a component of device 300, the presence or absence of user contact with device 300, the orientation or acceleration/deceleration of device 300, and the change in temperature of device 300. Sensor assembly 314 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 316 is configured to facilitate wired or wireless communication between the apparatus 300 and other devices. The device 300 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 316 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 316 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 304 comprising instructions, executable by the processor 320 of the apparatus 300 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is understood that "a plurality" in this disclosure means two or more, and similar terms are to be construed analogously. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms "first," "second," and the like are used to describe various information and that such information should not be limited by these terms. These terms are only used to distinguish one type of information from another, and do not indicate a particular order or degree of importance. Indeed, the terms "first," "second," and the like are fully interchangeable. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It will be further understood that, unless otherwise specified, "connected" includes direct connections between the two without the presence of other elements, as well as indirect connections between the two with the presence of other elements.
It is further to be understood that while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method for editing content on a smart television, applied to a mobile terminal, the method comprising:
acquiring an editing interface calling instruction sent by the smart television, wherein the editing interface calling instruction is sent by the smart television in response to displaying a first content editing frame, and the editing interface calling instruction comprises attribute information of the first content editing frame;
determining an editing interface according to the attribute information, wherein the editing interface comprises a second content editing frame having the same attributes as the first content editing frame, and the attribute information indicates that the first content editing frame is used for graphic editing, or for text editing and graphic editing;
wherein the editing interface calling instruction further comprises control information of an editing control used for editing operations in the first content editing frame, the control information comprising one or more of: a control type, a control number and a control position;
generating a control according to the control information and generating the editing interface, wherein the editing interface comprises the second content editing frame and the generated control for performing editing operations in the second content editing frame;
saving the editing interface and displaying the editing interface, wherein displaying the editing interface comprises: calling and displaying the saved editing interface;
displaying the editing interface, receiving user editing content in the second content editing frame, and displaying the user editing content in the first content editing frame, which comprises:
receiving the user editing content in the second content editing frame and synchronously displaying it in the first content editing frame; or
receiving the user editing content in the second content editing frame and displaying the edited final content in the first content editing frame;
wherein the editing content comprises graphics, or characters and graphics.
2. The method of claim 1, wherein receiving the user editing content in the second content editing frame and synchronously displaying it in the first content editing frame comprises:
in response to receiving an editing instruction of a user in the second content editing frame, correspondingly displaying the edited content in the second content editing frame according to the editing instruction; and
sending the editing instruction to the smart television, and correspondingly displaying the editing content in the first content editing frame.
3. A method for editing content on a smart television, applied to the smart television, the method comprising:
in response to displaying a first content editing frame, sending an editing interface calling instruction to a mobile terminal, wherein the editing interface calling instruction comprises attribute information of the first content editing frame, and the attribute information indicates that the first content editing frame is used for graphic editing, or for text editing and graphic editing; the editing interface calling instruction further comprises control information of an editing control used for editing operations in the first content editing frame, the control information comprising one or more of: a control type, a control number and a control position; and
displaying user editing content in the first content editing frame, wherein the user editing content is received in a second content editing frame of an editing interface of the mobile terminal, and the editing content comprises graphics, or characters and graphics.
4. The method of claim 3, wherein displaying the user editing content in the first content editing frame comprises:
synchronously displaying the user editing content received in the second content editing frame in the first content editing frame.
5. An apparatus for editing content on a smart television, applied to a mobile terminal, the apparatus comprising:
an acquisition module, configured to acquire an editing interface calling instruction sent by the smart television, wherein the editing interface calling instruction is sent by the smart television in response to displaying a first content editing frame;
a receiving module, configured to receive user editing content in a second content editing frame, wherein the editing content comprises graphics, or characters and graphics;
a storage module, configured to save the editing interface;
a display module, which displays the editing interface in the following manner:
calling and displaying the saved editing interface;
wherein the display module is configured to display the editing interface, the editing interface comprises the second content editing frame, and the user editing content is displayed in the first content editing frame;
the editing interface calling instruction comprises attribute information of the first content editing frame;
the display module displays the editing interface in the following manner:
determining and displaying the editing interface according to the attribute information, wherein the editing interface comprises the second content editing frame having the same attributes as the first content editing frame;
the attribute information indicates that the first content editing frame is used for graphic editing, or for text editing and graphic editing;
the apparatus further comprises a synchronization module;
the receiving module is configured to receive the user editing content in the second content editing frame, and the synchronization module is configured to synchronously display the user editing content in the first content editing frame; or
the receiving module is configured to receive the user editing content in the second content editing frame, and the synchronization module is configured to display the edited final content in the first content editing frame;
the editing interface calling instruction further comprises control information of an editing control used for editing operations in the first content editing frame, the control information comprising one or more of: a control type, a control number and a control position;
the apparatus further comprises a generation module;
the generation module is configured to generate a control according to the control information and to generate the editing interface, wherein the editing interface comprises the second content editing frame and the generated control for performing editing operations in the second content editing frame.
6. The apparatus according to claim 5, wherein, in response to the receiving module receiving an editing instruction of a user in the second content editing frame, the synchronization module correspondingly displays the edited content in the second content editing frame according to the editing instruction, sends the editing instruction to the smart television, and correspondingly displays the editing content in the first content editing frame.
7. An apparatus for editing content on a smart television, applied to the smart television, the apparatus comprising:
a display module, configured to display a first content editing frame and to display user editing content in the first content editing frame, wherein the user editing content is received in a second content editing frame of an editing interface of a mobile terminal, and the editing content comprises graphics, or characters and graphics;
a sending module, configured to send an editing interface calling instruction to the mobile terminal in response to displaying the first content editing frame;
wherein the editing interface calling instruction comprises attribute information of the first content editing frame, and the attribute information indicates that the first content editing frame is used for graphic editing, or for text editing and graphic editing;
the editing interface calling instruction further comprises control information of an editing control used for editing operations in the first content editing frame, the control information comprising one or more of: a control type, a control number and a control position.
8. The apparatus of claim 7, wherein the display module displays the user editing content in the first content editing frame in the following manner:
synchronously displaying the user editing content received in the second content editing frame in the first content editing frame.
9. An apparatus for editing content on a smart television, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: perform the method for editing content on a smart television according to any one of claims 1 to 2.
10. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the method for editing content on a smart television according to any one of claims 1 to 2.
11. An apparatus for editing content on a smart television, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: perform the method for editing content on a smart television according to any one of claims 3 to 4.
12. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the method for editing content on a smart television according to any one of claims 3 to 4.
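The sketch below pulls claims 1 to 4 together into one runnable illustration of the round trip: the smart television sends an editing interface calling instruction when its first content editing frame is displayed, the mobile terminal builds a matching second content editing frame from the attribute and control information, and every edit made on the mobile terminal is forwarded back so it can be displayed synchronously in the first content editing frame. This is a minimal sketch under stated assumptions, not the patented implementation: the class names, the callback "transport" standing in for the real device-to-device communication, and the println rendering calls are all hypothetical, and the simple data types re-declare the ones from the earlier sketch so the example is self-contained.

// Hypothetical Kotlin sketch of the mobile-terminal <-> smart-television interaction
// described in claims 1-4. Callbacks stand in for the real network transport.

data class ControlInfo(val type: String, val number: Int, val x: Int, val y: Int)

data class EditInterfaceCallInstruction(
    val supportsText: Boolean,          // attribute information of the first content editing frame
    val supportsGraphic: Boolean,
    val controls: List<ControlInfo>     // control information: type, number and position
)

data class EditInstruction(val description: String)   // e.g. "draw line from (0,0) to (10,10)"

// Smart-television side (claims 3-4): sends the calling instruction when the
// first content editing frame is displayed, and mirrors every edit it receives.
class SmartTelevision(private val sendToMobile: (EditInterfaceCallInstruction) -> Unit) {
    private val firstFrameContent = mutableListOf<EditInstruction>()

    fun showFirstContentEditingFrame() {
        val instruction = EditInterfaceCallInstruction(
            supportsText = false,
            supportsGraphic = true,
            controls = listOf(ControlInfo("brush", 1, 10, 10), ControlInfo("eraser", 2, 40, 10))
        )
        sendToMobile(instruction)   // sent in response to displaying the first content editing frame
    }

    fun onEditInstruction(edit: EditInstruction) {
        firstFrameContent += edit
        println("TV displays in first content editing frame: ${edit.description}")
    }
}

// Mobile-terminal side (claims 1-2): builds and saves a second content editing frame
// with the same attributes and controls, then forwards each edit back to the television.
class MobileTerminal(private val sendToTv: (EditInstruction) -> Unit) {
    private var savedEditingInterface: String? = null

    fun onEditInterfaceCallInstruction(instruction: EditInterfaceCallInstruction) {
        val controls = instruction.controls.joinToString { "${it.type}#${it.number}@(${it.x},${it.y})" }
        savedEditingInterface =
            "second frame [text=${instruction.supportsText}, graphic=${instruction.supportsGraphic}] with $controls"
        println("Mobile calls and displays saved editing interface: $savedEditingInterface")
    }

    fun userEdits(description: String) {
        val edit = EditInstruction(description)
        println("Mobile displays in second content editing frame: ${edit.description}")
        sendToTv(edit)   // synchronously mirrored in the first content editing frame
    }
}

fun main() {
    var tv: SmartTelevision? = null
    val mobile = MobileTerminal(sendToTv = { edit -> tv?.onEditInstruction(edit) })
    val television = SmartTelevision(sendToMobile = { mobile.onEditInterfaceCallInstruction(it) })
    tv = television

    television.showFirstContentEditingFrame()                  // TV asks for the editing interface
    mobile.userEdits("draw circle at (50, 50) with radius 20") // edit appears on both screens
}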
CN202010622171.3A 2020-06-30 2020-06-30 Method and device for editing content on smart television and storage medium Active CN111866571B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010622171.3A CN111866571B (en) 2020-06-30 2020-06-30 Method and device for editing content on smart television and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010622171.3A CN111866571B (en) 2020-06-30 2020-06-30 Method and device for editing content on smart television and storage medium

Publications (2)

Publication Number Publication Date
CN111866571A CN111866571A (en) 2020-10-30
CN111866571B true CN111866571B (en) 2023-03-24

Family

ID=72989442

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010622171.3A Active CN111866571B (en) 2020-06-30 2020-06-30 Method and device for editing content on smart television and storage medium

Country Status (1)

Country Link
CN (1) CN111866571B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114527909A (en) * 2020-10-31 2022-05-24 华为技术有限公司 Equipment communication method, system and device
US20230403421A1 (en) * 2020-10-31 2023-12-14 Huawei Technologies Co., Ltd. Device Communication Method and System, and Apparatus
CN114301961A (en) * 2020-11-30 2022-04-08 海信视像科技股份有限公司 Equipment interaction method and display equipment
CN114398122B (en) * 2021-12-27 2023-08-29 北京百度网讯科技有限公司 Input method, input device, electronic equipment, storage medium and product

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106507206A (en) * 2016-09-23 2017-03-15 惠州Tcl移动通信有限公司 Intelligent television and its based on the character string input method of mobile terminal, system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101431566B (en) * 2008-12-16 2011-04-20 中兴通讯股份有限公司 Mobile terminal and method for providing user with shortcut operation
US20110032424A1 (en) * 2009-08-04 2011-02-10 Echostar Technologies Llc Systems and methods for graphically annotating displays produced in a television receiver
DE102010052244A1 (en) * 2010-11-23 2012-05-24 Pierre-Alain Cotte Method and device for displaying a graphical user interface of a portable computing unit on an external display device
JP2014071669A (en) * 2012-09-28 2014-04-21 Toshiba Corp Information display device, control method, and program
CN103618958A (en) * 2013-11-27 2014-03-05 深圳Tcl新技术有限公司 Method and device for inputting text information to television
CN104486684A (en) * 2014-12-18 2015-04-01 百度在线网络技术(北京)有限公司 Input method and device for electronic equipment
CN105992066B (en) * 2015-02-13 2019-07-02 Tcl集团股份有限公司 A kind of characters input method and character entry apparatus applied to smart machine
CN106162364B (en) * 2015-03-30 2020-01-10 腾讯科技(深圳)有限公司 Intelligent television system input method and device and terminal auxiliary input method and device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106507206A (en) * 2016-09-23 2017-03-15 惠州Tcl移动通信有限公司 Intelligent television and its based on the character string input method of mobile terminal, system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Human-computer interaction *** for smart TVs based on multi-screen collaboration; Huang Xingwang et al.; Computer Applications and Software (No. 11); full text *
Implementation of a Chinese input method for television remote controls; Zhang Zhenbo et al.; Computer Engineering (No. 02); full text *

Also Published As

Publication number Publication date
CN111866571A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN111866571B (en) Method and device for editing content on smart television and storage medium
CN107908351B (en) Application interface display method and device and storage medium
EP3933565A1 (en) Screen projection method, screen projection device, and storage medium
CN109451341B (en) Video playing method, video playing device, electronic equipment and storage medium
CN111966314A (en) Image projection method, image projection device, mobile terminal and projection equipment
CN112905089B (en) Equipment control method and device
CN110968364B (en) Method and device for adding shortcut plugins and intelligent device
CN112463084A (en) Split screen display method and device, terminal equipment and computer readable storage medium
EP3896982A1 (en) Method and apparatus for inputting information on display interface, and storage medium
CN105955637B (en) Method and device for processing text input box
CN117119260A (en) Video control processing method and device
CN111246012B (en) Application interface display method and device and storage medium
CN114827721A (en) Video special effect processing method and device, storage medium and electronic equipment
CN112506393B (en) Icon display method and device and storage medium
CN110955328B (en) Control method and device of electronic equipment and storage medium
EP4202635A1 (en) Method and apparatus for sharing apps, and storage medium
CN111107624B (en) Negative one-screen synchronization method, negative one-screen synchronization device and electronic equipment
CN112055248B (en) Method, device and storage medium for setting television names
US20240056921A1 (en) Connection method and apparatus for wireless smart wearable device and storage medium
US20220291890A1 (en) Method for interaction between devices and medium
CN117640815A (en) Application circulation method and device, electronic equipment and storage medium
CN106910472B (en) Control method and device for adjusting backlight
CN115129431A (en) System event response method, device, terminal and storage medium
CN115016718A (en) Interface display method, interface display device and storage medium
CN117616393A (en) Window adjusting method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant