CN113625885A - Input method, input device, and device for input - Google Patents

Input method, input device, and device for input

Info

Publication number
CN113625885A
CN113625885A
Authority
CN
China
Prior art keywords
input
content
augmented
style
writing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010384042.5A
Other languages
Chinese (zh)
Inventor
冯静静
蔡雅莉
鲁剑
王丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sogou Technology Development Co Ltd filed Critical Beijing Sogou Technology Development Co Ltd
Priority to CN202010384042.5A priority Critical patent/CN113625885A/en
Publication of CN113625885A publication Critical patent/CN113625885A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233 Character input methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0483 Interaction with page-structured environments, e.g. book metaphor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application disclose an input method, an input device, and a device for input. An embodiment of the method comprises: acquiring input-related information when it is detected that a user triggers an augmented writing function; acquiring augmented writing content in different styles based on the input-related information; and presenting at least one item of the acquired augmented writing content. The embodiment improves input efficiency and input convenience in cases where the user's input intention cannot be detected.

Description

Input method, input device, and device for input
Technical Field
Embodiments of the present application relate to the field of computer technology, and in particular to an input method, an input device, and a device for input.
Background
In input scenarios, the user usually needs to enter content word by word and sentence by sentence. If selectable input content can be provided to the user during input, the user's input efficiency can be greatly improved.
In the prior art, when it is detected that a user has a certain input intention, an input method application may automatically acquire content associated with that intention, integrate it with the content the user has input, and provide the result to the user. However, the user's input intention is not easy to detect, and when no exact input intention is detected, no extended content can be provided. In that case the user must still enter the content manually, so input efficiency is not effectively improved; for users with weak search and editing skills, this process is especially inconvenient.
Disclosure of Invention
Embodiments of the present application provide an input method, an input device, and a device for input, so that when no exact input intention of the user is detected, selectable augmented writing content in different styles can still be provided, improving the user's input efficiency and input convenience.
In a first aspect, an embodiment of the present application provides an input method, the method comprising: acquiring input-related information when it is detected that a user triggers an augmented writing function; acquiring augmented writing content in different styles based on the input-related information; and presenting at least one item of the acquired augmented writing content.
In a second aspect, an embodiment of the present application provides an input device, comprising: a first acquisition unit configured to acquire input-related information when it is detected that a user triggers an augmented writing function; a second acquisition unit configured to acquire augmented writing content in different styles based on the input-related information; and a first presentation unit configured to present at least one item of the acquired augmented writing content.
In a third aspect, an embodiment of the present application provides a device for input, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, the one or more programs including instructions for: acquiring input-related information when it is detected that a user triggers an augmented writing function; acquiring augmented writing content in different styles based on the input-related information; and presenting at least one item of the acquired augmented writing content.
In a fourth aspect, embodiments of the present application provide a computer-readable medium on which a computer program is stored, which when executed by a processor, implements the method as described in the first aspect above.
According to the input method, the input device, and the device for input provided by the embodiments of the present application, input-related information is acquired when it is detected that the user triggers the augmented writing function; augmented writing content in different styles is then acquired based on the input-related information; and finally at least one item of the acquired augmented writing content is presented. Thus, even when no exact augmented-writing input intention of the user is detected, the user can actively trigger the augmented writing function and be provided with augmented writing content in different styles, improving input efficiency and input convenience. In addition, because the acquired augmented writing content comes in different styles, its diversity is improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is a flow diagram of one embodiment of an input method according to the present application;
FIG. 2 is a schematic diagram of an augmented writing content presentation process according to the present application;
FIG. 3 is a flow diagram of yet another embodiment of an input method according to the present application;
FIG. 4 is a schematic diagram of an augmented writing content refresh process according to the present application;
FIG. 5 is yet another schematic diagram of an augmented writing content refresh process according to the present application;
FIG. 6 is a schematic diagram of an embodiment of an input device according to the present application;
FIG. 7 is a schematic diagram of a structure of an apparatus for input according to the present application;
FIG. 8 is a schematic diagram of a server in accordance with some embodiments of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments of the present application and the features of those embodiments may be combined with each other when no conflict arises. The present application will be described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Referring to FIG. 1, a flow 100 of one embodiment of an input method according to the present application is shown. The input method can run on various electronic devices, including but not limited to: a server, a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 player, a laptop computer, an in-car computer, a desktop computer, a set-top box, a smart TV, a wearable device, and so on.
The electronic device may be installed with various types of client applications, such as an input method application, an instant messaging application, social platform software, and the like.
The input method application mentioned in the embodiments of the present application can support multiple input methods. An input method is an encoding method used to enter various symbols into electronic devices such as computers and mobile phones; using the input method application, a user can conveniently enter desired characters or character strings. It should be noted that, in the embodiments of the present application, in addition to common Chinese input methods (such as the Pinyin, Wubi, Zhuyin, voice, and handwriting input methods), input methods for other languages (such as English, Japanese hiragana, or Korean input methods) may also be supported; the type and language of the input method are not limited here.
The input method in this embodiment may include the following steps:
Step 101: when it is detected that a user triggers the augmented writing function, acquire input-related information.
In the present embodiment, various types of client applications, such as an input method application, an instant messaging application, a document editing application, and the like, may be installed on the execution subject of the input method (such as the electronic device described above). The input method application may be configured with an augmented writing function, that is, a function that expands the user's input content into a full sentence or paragraph.
In this embodiment, when it is detected that the user triggers the augmented writing function, the execution subject may acquire input-related information. In practice, the augmented writing function may be triggered in a number of ways.
By way of example, the input method interface may display a keyboard area and various function keys, such as a voice input function key, an applet function key, a search function key, an emoji input function key, an augmented writing function key, and the like. When the user triggers (e.g., clicks) the augmented writing function key, the augmented writing function of the input method application is triggered. The augmented writing function key may be displayed in various forms; this embodiment does not limit its form.
As yet another example, a user may trigger the augmented writing function by entering content in the input method application. For example, when a user enters target content, such as the phrase "expand writing", via keyed or voice input, the augmented writing function can be triggered.
In this embodiment, the input-related information may include, but is not limited to, at least one of: the user's input content, context information of the input content, the user's input scene, the user's personal preferences, the user's historical behavior data during input, and the like.
The input content may be text content that the user is currently editing but has not yet sent. As an example, in a scenario where a local user is communicating with a remote user through an instant messaging application, the input content may be an instant messaging message that the local user is currently editing but has not yet sent to the remote user.
Step 102: acquire augmented writing content in different styles based on the input-related information.
In this embodiment, the execution subject may acquire augmented writing content in different styles based on the input-related information. As an example, the execution subject may first extract keywords from the input-related information, and then retrieve sentences or paragraphs of different styles containing those keywords from the Internet or a database, thereby obtaining augmented writing content in different styles.
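The keyword-extraction-plus-retrieval step above can be sketched as follows. This is an illustrative stand-in, not the patent's disclosed implementation: the stopword list, the in-memory "database" of styled candidate sentences, and the substring-matching retrieval are all assumptions made for the example.

```python
def extract_keywords(input_text, stopwords=frozenset({"the", "a", "is", "it"})):
    """Naive keyword extraction: drop stopwords, keep remaining words."""
    return [w for w in input_text.lower().split() if w not in stopwords]

def retrieve_by_style(keywords, database):
    """Return candidate sentences, grouped by style, that contain any keyword."""
    results = {}
    for style, sentences in database.items():
        hits = [s for s in sentences if any(k in s.lower() for k in keywords)]
        if hits:
            results[style] = hits
    return results

# Toy stand-in for the Internet/database source of styled sentences.
database = {
    "poetry": ["The sky is sunny and the sun shines west"],
    "literary": ["It is sunny, the rain has stopped, and I miss you again"],
    "colloquial": ["Sunny day! Fancy a drink somewhere?"],
}

candidates = retrieve_by_style(extract_keywords("sunny day"), database)
```

With the input "sunny day", every style bucket here happens to contain a match, so `candidates` holds one hit per style; in practice a style with no matching sentence would simply be absent from the result.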
The styles of the augmented writing content may be preset as needed. Optionally, the styles may include, but are not limited to, at least one of: a poetry style, a literary style, a colloquial style, and the like.
Poetry-style augmented writing content may include, but is not limited to, classical-style poems (gutishi), modern-form poems (jintishi), and ci (lyric verse). Classical-style poems may include, but are not limited to, four-, five-, and seven-character forms as well as mixed forms; modern-form poems may include, but are not limited to, jueju (quatrains), lüshi (regulated verse), and the like.
Literary-style augmented writing content may be content containing multiple literary words. A literary vocabulary may be preset, and words in this vocabulary are regarded as literary words. If a piece of augmented writing content includes multiple literary words (the specific number may be preset or set dynamically based on the content length), it may be regarded as literary-style content.
The colloquial style is a language style close to that of chat scenarios. Colloquial-style augmented writing content can be obtained by using instant messaging messages from instant messaging scenarios as a corpus.
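The literary-style criterion above (content counts as literary if it contains at least a threshold number of words from a preset literary vocabulary) can be sketched as below. The vocabulary, the threshold value, and the tokenization are illustrative assumptions, not values from the patent.

```python
# Hypothetical preset literary vocabulary; a real deployment would use a
# curated word list in the target language.
LITERARY_VOCAB = {"mist", "moonlight", "wander", "longing", "drizzle"}

def is_literary(text, threshold=2):
    """True if the text contains at least `threshold` distinct literary words."""
    words = set(text.lower().replace(",", " ").split())
    return len(words & LITERARY_VOCAB) >= threshold
```

As the text notes, the threshold could instead be derived dynamically from the content length, e.g. requiring a higher count for longer passages.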
In some optional implementations of this embodiment, the acquired augmented writing content of each style may include the input content. For example, suppose the input content is "sunny". The poetry-style content might read "The sky is clear, yet the sun shines toward the west; unlike the misty haze among people, where spring light is fleeting, it is the flower of the eastern lord". The literary-style content might read "A sunny day, the rain has stopped, and I begin to miss you again". The colloquial-style content might read "Sunny out! Meet at a pub? Where do you want to go?".
In some optional implementations of this embodiment, the execution subject may obtain augmented writing content in different styles using pre-trained text generation models. A text generation model represents the correspondence between input-related information and augmented writing content, and different text generation models output augmented writing content in different styles. Inputting the input-related information into each pre-trained text generation model yields the augmented writing content output by each model.
In one scenario, each text generation model may be deployed locally on the execution subject, for example in a data package of the input method application. In this case, the execution subject may directly input the input-related information into each text generation model to obtain the augmented writing content in different styles.
In another scenario, the text generation models may be deployed on a server, such as an input method server, i.e., a server that provides support for the input method application. The execution subject may send the input-related information to the server in a request. After extracting the input-related information carried in the request, the server inputs it into each pre-trained text generation model, obtains the augmented writing content output by each model, and returns the content to the execution subject.
It should be noted that the text generation models in this embodiment may be trained in advance using machine learning methods. In practice, an existing open-source text generation model may serve as a base model, and the required text generation models may be obtained by fine-tuning the base model. Open-source text generation models may include, but are not limited to, models constructed and trained based on architectures such as LSTM (Long Short-Term Memory) and RNN (Recurrent Neural Network).
For different styles, different corpus sets can be built. Fine-tuning the base model in advance on corpus sets of different styles yields text generation models that generate augmented writing content in the corresponding styles.
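The one-base-model, one-fine-tuned-model-per-style structure described above can be sketched as follows. The "fine-tuning" here is a toy stand-in (a lookup over the style corpus with a fallback to the base model) used only to illustrate the structure; real fine-tuning would update model weights on each corpus set.

```python
def fine_tune(base_generate, corpus):
    """Toy stand-in for fine-tuning: prefer a corpus sentence containing the
    input, fall back to the base model otherwise."""
    def generate(input_text):
        for sentence in corpus:
            if input_text.lower() in sentence.lower():
                return sentence
        return base_generate(input_text)
    return generate

base = lambda text: text  # placeholder base model: echoes its input

# One model per style, each "fine-tuned" on its own (hypothetical) corpus set.
style_models = {
    "poetry": fine_tune(base, ["Sunny skies over the western hills"]),
    "colloquial": fine_tune(base, ["Sunny out! Want to grab a drink?"]),
}

outputs = {style: gen("sunny") for style, gen in style_models.items()}
```

The same input produces one candidate per style, mirroring the step-102 behaviour of collecting each model's output.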
As an example, the poetry style corpus is a poetry corpus. For the white-language style, the input content in the instant messaging scene can be used as the corpus to construct the corpus set of the white-language style. The corpora in the corpus are all corpora in daily communication, such as' do you have a meal "," somehow boring, you are at dry, and the like. For the literature style, a corpus can be constructed based on sentences containing more literature style vocabularies, such as prose. The corpus in the corpus collection usually contains more literature-style vocabularies, such as "sky blue and other smokes and rains", but I wait you "," when thinking of a person, just want to look at the cloud of the sky and look like ", and the like.
In some optional implementations of this embodiment, different input scenes may be associated with text generation models that generate different styles of augmented writing content. For example, an instant messaging scene may be associated with models that generate the colloquial, literary, and poetry styles, while a document editing scene may be associated with models that generate the colloquial and poetry styles, and so on.
Thus, the execution subject may first detect the current input scene, then select the text generation models associated with that scene, and input the input-related information into each selected model to obtain its augmented writing content. The current input scene may be detected in various ways, such as by acquiring context information or recognizing a screen image; this embodiment does not limit the detection method.
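The scene-to-style association above amounts to a lookup table from scene to the set of style models that should run; a minimal sketch, with the scene detection step stubbed out and the scene names taken from the examples in the text:

```python
# Hypothetical scene-to-style mapping, following the examples in the text.
SCENE_STYLES = {
    "instant_messaging": ["colloquial", "literary", "poetry"],
    "document_editing": ["colloquial", "poetry"],
}

def models_for_scene(scene, all_models):
    """Select only the style models associated with the detected scene."""
    return {s: all_models[s] for s in SCENE_STYLES.get(scene, []) if s in all_models}

all_models = {"poetry": object(), "literary": object(), "colloquial": object()}
selected = models_for_scene("document_editing", all_models)
```

Only the selected models are then invoked, so styles irrelevant to the current scene cost nothing.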
Step 103: present at least one item of the acquired augmented writing content.
In this embodiment, after acquiring the augmented writing content in different styles, the execution subject may present at least one item of it in the input method interface. In practice, all of the content can be displayed at once, or only part of it (such as one or two items) can be displayed at a time, with the displayed content replaced when the user triggers the augmented writing content refresh function, for example by content of another style, or by content of the same style that has not yet been shown.
As an example, FIG. 2 shows a schematic diagram of the augmented writing content presentation process. As shown in FIG. 2, after the user enters "sunny", the user clicks an icon that triggers the augmented writing function (indicated by reference numeral 201). After detecting that the user has triggered the icon, the execution subject acquires input-related information containing the input content "sunny" and obtains augmented writing content in different styles based on it. One of the acquired items can then be displayed, for example the poetry-style content "The sky is clear, yet the sun shines toward the west; unlike the misty haze among people, where spring light is fleeting, it is the flower of the eastern lord".
In some optional implementations of this embodiment, when presenting the acquired augmented writing content, the execution subject may also display a style identifier indicating the style of each presented item. For example, the identifier of the poetry style may be "poetry" or "poetry style", and the identifier of the literary style may be "literary" or "literary style"; this embodiment does not limit the form of the style identifier.
In some optional implementations of this embodiment, if it is detected that the user has triggered the refresh button multiple times in succession, reaching a preset count without selecting any augmented writing content, the user may be considered dissatisfied with the current style. In this case, the display can switch to augmented writing content of another style; for example, an unshown item whose style differs from every style already displayed can be selected for display.
In some optional implementations of this embodiment, upon detecting that the user has triggered any displayed item of augmented writing content, the execution subject may take that item as the target content and display or send it, thereby improving the user's input efficiency.
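The repeated-refresh behaviour described above (switch away from the current style once the refresh count reaches a preset value with nothing selected) can be sketched as below. The threshold value and the flat list-based state are illustrative assumptions.

```python
def next_style(refresh_count, shown_styles, available_styles, threshold=3):
    """Pick the style to show next after a refresh.

    If the user has refreshed `threshold` times without selecting anything,
    move to a style not yet shown; otherwise stay on the current style.
    """
    if refresh_count >= threshold:
        for style in available_styles:
            if style not in shown_styles:
                return style
    return shown_styles[-1]  # stay on the current (most recent) style

styles = ["poetry", "literary", "colloquial"]
```

After three unproductive refreshes of poetry content, the function moves on to the first style the user has not yet seen.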
According to the method provided by this embodiment of the present application, input-related information is acquired when it is detected that the user triggers the augmented writing function; augmented writing content in different styles is then acquired based on that information; and finally at least one acquired item is displayed. Thus, even when no exact augmented-writing input intention of the user is detected, the user can actively trigger the augmented writing function and be provided with augmented writing content in different styles, improving input efficiency and input convenience. In addition, because the acquired content comes in different styles, its diversity is improved.
With further reference to FIG. 3, a flow 300 of yet another embodiment of an input method is shown. The process 300 of the input method includes the following steps:
Step 301: when it is detected that the user triggers the augmented writing function, acquire input-related information.
Step 302: acquire augmented writing content in different styles based on the input-related information.
Steps 301 to 302 of this embodiment correspond to steps 101 to 102 of the embodiment in FIG. 1 and are not described again here.
Step 303: determine the priority of each style based on the input-related information.
In this embodiment, the input-related information may include at least one of: the input scene, the input content, the styles of augmented writing content historically selected by the user, and the like. The execution subject may determine the priority of each style based on this information.
As an example, the execution subject may determine the priority of each style based on the input scene. The same style may have different priorities in different input scenes, and different styles may have different priorities in the same scene. The priorities of the styles in each scene can be preset as needed. For example, in an instant messaging scene, the order from highest to lowest priority may be poetry style, literary style, colloquial style; in a document editing scene, it may be literary style, poetry style, colloquial style.
As yet another example, the execution subject may determine the priority of each style based on the current input content. Specifically, the style of the current input content may be detected first; that style is then given the highest priority and the remaining styles lower priorities.
As still another example, the execution subject may determine the priority of each style based on the styles of augmented writing content historically selected by the user. Specifically, the frequency of each historically selected style may be counted; more frequent styles are given higher priority and less frequent styles lower priority.
In addition, the execution subject may combine two or more of the above approaches, for example by determining priorities in each way separately and then performing a weighted calculation. This implementation is not described in detail.
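The weighted combination mentioned above can be sketched as follows, merging the three signals (scene defaults, current-content style, historical selection frequency) into one score per style. The weights and the scoring formulas are illustrative assumptions; the patent only says a weighted calculation may be performed.

```python
def rank_styles(scene_priority, content_style, history_counts,
                weights=(0.5, 0.2, 0.3)):
    """Rank styles by a weighted sum of three priority signals.

    scene_priority: styles ordered highest-priority-first for the scene.
    content_style:  detected style of the current input content.
    history_counts: style -> number of times the user selected it before.
    """
    styles = set(scene_priority) | {content_style} | set(history_counts)
    n = len(scene_priority)
    total_hist = sum(history_counts.values()) or 1
    scores = {}
    for s in styles:
        scene_score = (n - scene_priority.index(s)) / n if s in scene_priority else 0.0
        content_score = 1.0 if s == content_style else 0.0
        hist_score = history_counts.get(s, 0) / total_hist
        scores[s] = (weights[0] * scene_score + weights[1] * content_score
                     + weights[2] * hist_score)
    return sorted(styles, key=lambda s: scores[s], reverse=True)

ranking = rank_styles(["poetry", "literary", "colloquial"], "colloquial",
                      {"literary": 4, "poetry": 1})
```

Here the user's strong historical preference for the literary style outweighs the scene's default ranking, so the literary style comes first.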
Step 304: present augmented writing content of at least the highest-priority style.
In this embodiment, the execution subject may select, from the acquired augmented writing content, one or more items of the highest-priority style for presentation. For example, if the highest-priority style in an instant messaging scene is the poetry style and the execution subject has acquired six items of poetry-style content, one or more of them can be selected for display. Before selecting them, the execution subject may also sort the content. Items of the same style may be sorted in various ways: for example, by the similarity between the input content and the augmented writing content, or by source, with manually pre-created content ranked before content from the Internet.
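The within-style ordering just described can be sketched as below: curated items rank before Internet-sourced items, and ties within a source are broken by a simple word-overlap (Jaccard) similarity to the input content. Both the source labels and the similarity measure are illustrative assumptions.

```python
def similarity(input_text, candidate):
    """Jaccard word-overlap between the input content and a candidate."""
    a, b = set(input_text.lower().split()), set(candidate.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def order_candidates(input_text, candidates):
    """candidates: list of (text, source), source in {"curated", "internet"}."""
    source_rank = {"curated": 0, "internet": 1}
    return sorted(candidates,
                  key=lambda c: (source_rank[c[1]], -similarity(input_text, c[0])))

ordered = order_candidates("sunny day", [
    ("a sunny day by the lake", "internet"),
    ("sunny day, clear sky", "curated"),
])
```

The curated item is listed first even though the Internet item also matches the input, reflecting the source-based ordering mentioned in the text.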
Step 305: when it is detected that the user triggers the augmented writing content refresh function, determine whether the acquired augmented writing content includes unshown content of the same style as the currently displayed content.
In this embodiment, the augmented writing content may be presented in an augmented writing content panel. The panel may include a refresh button used to trigger the refresh function; when the user triggers (e.g., clicks) the refresh button, the displayed content can be replaced.
Specifically, when it is detected that the user triggers the refresh function, the execution subject may determine whether the acquired augmented writing content includes any unshown items of the same style as the currently displayed content. For example, if the currently displayed content is poetry-style, it can check whether there is poetry-style content that has not yet been shown. If so, step 306 is performed; if not, step 307 is performed.
Step 306: if so, replace the currently displayed content with at least one of the unshown items.
In this embodiment, if there is unshown augmented writing content of the same style as the currently displayed content, the execution subject may replace the currently displayed content with at least one such item.
Step 307: if not, take the style of the next priority as the target style and replace the currently displayed content with at least one item of augmented writing content of the target style.
In this embodiment, if there is no unshown content of the same style as the currently displayed content, the execution subject may take the next-priority style as the target style and replace the currently displayed content with at least one item of that style.
For example, suppose the priority order from highest to lowest is poetry style, literary style, colloquial style. If the currently displayed content is poetry-style and all poetry-style content has been shown, at least one item of literary-style content can be displayed.
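The refresh flow of steps 305 to 307 can be sketched as a single function: on refresh, first try unshown content of the current style; once that style is exhausted, fall back to the next style in priority order. The data shapes here are illustrative.

```python
def refresh(current_style, shown, by_style, style_priority):
    """Return (style, content) for the next item to display, or None.

    Implements steps 305-307: same-style unshown content first, then the
    next style in priority order.
    """
    unshown = [c for c in by_style.get(current_style, []) if c not in shown]
    if unshown:                       # step 306: same style, not yet shown
        return current_style, unshown[0]
    i = style_priority.index(current_style)
    for style in style_priority[i + 1:]:   # step 307: next-priority style
        candidates = [c for c in by_style.get(style, []) if c not in shown]
        if candidates:
            return style, candidates[0]
    return None                       # everything has been shown

by_style = {"poetry": ["p1"], "literary": ["l1", "l2"]}
priority = ["poetry", "literary", "colloquial"]
```

With the poetry item already shown, a refresh from the poetry style falls through to the first literary item, matching the FIG. 4 example.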
By way of example, FIG. 4 shows a schematic diagram of an augmented write content refresh process. As shown in fig. 4, the currently displayed expanded writing content is a poetry style expanded writing content, "sunny days, and sunny days, unlike misty flowers among people, so that the spring light is indefinite, and the east monarch is a traditional flower. After the user clicks the refresh button, if the fact that all poetry-style expanded writing contents are displayed is detected, the current expanded writing contents can be replaced by the literature-style expanded writing contents, namely that the day is clear, the rain stops, and the user starts to want you again.
As yet another example, FIG. 5 shows another schematic diagram of an augmented-writing-content refresh process. As shown in FIG. 5, the currently displayed content is the literary-style item "I start to miss you again when the sky is clear and the rain has stopped." After the user clicks the refresh button, if it is detected that all of the literary-style augmented writing content has been displayed, the current content may be replaced with a plain-language-style item: "A fair day, a pub of people? Where you want to go."
As can be seen from FIG. 3, compared with the embodiment corresponding to FIG. 1, the flow of the input method in this embodiment relates to a step of displaying the augmented writing content and a step of refreshing the augmented writing content according to style priority. In this way, the augmented writing content most likely to be selected by the user can be displayed preferentially, further improving the user's input efficiency and input convenience.
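The refresh logic of steps 305 to 307 can be illustrated with a short sketch. The data layout below (styles ordered by priority, a candidate list per style, and a set of already-presented items) and all function and variable names are illustrative assumptions, not part of the disclosed embodiment.

```python
# Illustrative sketch of the style-priority refresh described above.
# `candidates` maps each style to its generated augmented-writing items,
# `style_order` lists styles from highest to lowest priority, and
# `shown` tracks items already presented. All names are hypothetical.

def refresh(current_style, style_order, candidates, shown):
    # Steps 305/306: prefer an undisplayed item of the same style.
    for item in candidates.get(current_style, []):
        if item not in shown:
            shown.add(item)
            return current_style, item
    # Step 307: otherwise fall back to the next-priority style.
    idx = style_order.index(current_style)
    for style in style_order[idx + 1:]:
        for item in candidates.get(style, []):
            if item not in shown:
                shown.add(item)
                return style, item
    return None, None  # everything has been displayed

style_order = ["poetry", "literary", "plain"]
candidates = {"poetry": ["p1"], "literary": ["l1", "l2"], "plain": ["w1"]}
shown = {"p1"}  # the poetry-style item is already on screen
print(refresh("poetry", style_order, candidates, shown))  # ('literary', 'l1')
```

Because each refresh mutates `shown`, repeated refreshes walk through the same-style candidates first and only then fall through to the next-priority style, matching the behavior described for FIG. 4 and FIG. 5.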
With further reference to fig. 6, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an input device, which corresponds to the embodiment of the method shown in fig. 1, and which is particularly applicable to various electronic devices.
As shown in fig. 6, the input device 600 according to the present embodiment includes: a first acquisition unit 601 configured to acquire input-related information when it is detected that a user triggers the augmented writing function; a second obtaining unit 602 configured to obtain augmented writing content of different styles based on the input-related information; and a first presenting unit 603 configured to present the acquired at least one item of augmented writing content.
In some optional implementations of the present embodiment, the second obtaining unit 602 is further configured to: input the input-related information into each pre-trained text generation model to obtain the augmented writing content output by each text generation model, where the text generation models are used to characterize the correspondence between input-related information and augmented writing content, and the augmented writing content output by different text generation models differs in style.
In some optional implementations of the present embodiment, the second obtaining unit 602 is further configured to: detect the current input scene; select each text generation model associated with the input scene; and input the input-related information into each selected text generation model to obtain the augmented writing content output by each text generation model.
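The scene-based model selection described above can be sketched as follows; the stub generators and the scene-to-style mapping are hypothetical placeholders standing in for the pre-trained text generation models, and none of the names below come from this disclosure.

```python
# Hypothetical sketch: one generator per style, filtered by input scene.
# Real implementations would load pre-trained networks; each generator
# here is a stub callable that tags the input with its style.

def make_stub(style):
    return lambda info: f"[{style}] expansion of '{info}'"

MODELS = {
    "poetry": make_stub("poetry"),
    "literary": make_stub("literary"),
    "plain": make_stub("plain"),
}

# Assumed association between input scenes and styles.
SCENE_STYLES = {
    "chat": ["plain", "literary"],
    "social_post": ["poetry", "literary", "plain"],
}

def generate_for_scene(scene, input_info):
    # Select only the models associated with the detected scene, then
    # feed the input-related information to each selected model.
    return {style: MODELS[style](input_info) for style in SCENE_STYLES.get(scene, [])}

print(generate_for_scene("chat", "the rain has stopped"))
```

Restricting generation to scene-associated models avoids producing, say, poetry-style expansions in a scene where they would rarely be selected.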
In some optional implementations of this embodiment, the input-related information includes at least one of: the input scene, the input content, and the styles of augmented writing content historically selected by the user; and the first presenting unit 603 is further configured to: determine the priority of each style based on the input-related information, and present at least one item of augmented writing content of the highest-priority style.
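One way to realize the priority determination is to score each style against the input-related information; the scoring rule and weights below are purely illustrative assumptions, not the patented method.

```python
# Illustrative priority ranking: a style preferred for the current scene
# and styles the user selected historically rank higher. The weight of 3
# for the scene preference is an assumed value, not from this disclosure.

def rank_styles(styles, scene_pref=None, history=()):
    def score(style):
        s = 0
        if style == scene_pref:
            s += 3                            # scene-preferred style
        s += list(history).count(style)       # frequency in user history
        return s
    # Python's sort is stable, so equal scores keep the original order.
    return sorted(styles, key=score, reverse=True)

order = rank_styles(["poetry", "literary", "plain"],
                    scene_pref="plain",
                    history=["literary", "literary", "poetry"])
print(order)  # ['plain', 'literary', 'poetry']
```

The resulting order can drive both the initial presentation of the highest-priority style and the fallback order used when the user refreshes.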
In some optional implementations of this embodiment, the apparatus further includes a second presentation unit configured to: when it is detected that a user triggers the augmented-writing-content refresh function, determine whether the obtained augmented writing content includes undisplayed augmented writing content of the same style as the currently displayed augmented writing content; if so, replace the currently displayed augmented writing content with at least one item of the undisplayed augmented writing content; and if not, take the style of the next priority as the target style and replace the currently displayed augmented writing content with at least one item of augmented writing content of the target style.
In some optional implementations of this embodiment, the first presenting unit 603 is further configured to: present the acquired at least one item of augmented writing content together with a style identifier of the presented augmented writing content, where the style identifier is used to indicate the style of the augmented writing content.
In some optional implementations of this embodiment, the input-related information includes input content, and the obtained augmented writing content of each style includes that input content.
In some optional implementations of this embodiment, the augmented writing content of different styles includes at least one of the following: poetry-style augmented writing content, literary-style augmented writing content, and plain-language-style augmented writing content.
According to the device provided by this embodiment of the application, when it is detected that the user triggers the augmented writing function, input-related information is acquired; augmented writing content of different styles is then obtained based on the input-related information; and finally at least one item of the obtained augmented writing content is displayed. Thus, even when the user's exact augmented-writing input intention is not detected, the user can actively trigger the augmented writing function and be provided with augmented writing content of different styles, improving the user's input efficiency and input convenience. In addition, the obtained augmented writing content has different styles, which improves the diversity of the augmented writing content.
Fig. 7 is a block diagram illustrating an apparatus 700 for input according to an example embodiment, where the apparatus 700 may be an intelligent terminal or a server. For example, the apparatus 700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 7, apparatus 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls overall operation of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 702 may include one or more processors 720 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 702 may include one or more modules that facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operations at the apparatus 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 706 provides power to the various components of the device 700. The power components 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 700.
The multimedia component 708 includes a screen that provides an output interface between the device 700 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 700 is in an operating mode, such as a shooting mode or a video mode. Each front and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, audio component 710 includes a Microphone (MIC) configured to receive external audio signals when apparatus 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 704 or transmitted via the communication component 716. In some embodiments, audio component 710 also includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 714 includes one or more sensors for providing status assessment of various aspects of the apparatus 700. For example, sensor assembly 714 may detect an open/closed state of device 700, the relative positioning of components, such as a display and keypad of apparatus 700, the change in position of apparatus 700 or a component of apparatus 700, the presence or absence of user contact with apparatus 700, the orientation or acceleration/deceleration of apparatus 700, and the change in temperature of apparatus 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the apparatus 700 and other devices. The apparatus 700 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 704 comprising instructions, executable by the processor 720 of the device 700 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 8 is a schematic diagram of a server in some embodiments of the present application. The server 800, which may vary significantly depending on configuration or performance, may include one or more central processing units (CPUs) 822 (e.g., one or more processors), memory 832, and one or more storage media 830 (e.g., one or more mass storage devices) storing applications 842 or data 844. The memory 832 and the storage media 830 may provide transient or persistent storage. The program stored in a storage medium 830 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the central processor 822 may be configured to communicate with the storage medium 830 and to execute, on the server 800, the series of instruction operations in the storage medium 830.
The server 800 may also include one or more power supplies 826, one or more wired or wireless network interfaces 850, one or more input/output interfaces 858, one or more keyboards 856, and/or one or more operating systems 841, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of an apparatus (smart terminal or server), enable the apparatus to perform an input method, the method comprising: when detecting that a user triggers the expanding writing function, acquiring input related information; acquiring different styles of expanded writing contents based on the input related information; and presenting the acquired at least one augmented writing content.
Optionally, the obtaining of the augmented written content of different styles based on the input related information includes: and inputting the input related information into each pre-trained text generation model to obtain the expanded contents output by each text generation model, wherein the text generation models are used for representing the corresponding relation between the input related information and the expanded contents, and the expanded contents output by different text generation models are different in style.
Optionally, the inputting the relevant input information into each pre-trained text generation model to obtain the augmented writing content output by each text generation model includes: detecting a current input scene; selecting each text generation model associated with the input scene; and inputting the input related information into each selected text generation model to obtain the expanded contents output by each text generation model.
Optionally, the input-related information includes at least one of: the input scene, the input content, and the styles of augmented writing content historically selected by the user; and the presenting of the obtained at least one item of augmented writing content includes: determining the priority of each style based on the input-related information, and presenting at least one item of augmented writing content of the highest-priority style.
Optionally, the one or more programs configured to be executed by the one or more processors of the device further include instructions for: when it is detected that a user triggers the augmented-writing-content refresh function, determining whether the obtained augmented writing content includes undisplayed augmented writing content of the same style as the currently displayed augmented writing content; if so, replacing the currently displayed augmented writing content with at least one item of the undisplayed augmented writing content; and if not, taking the style of the next priority as the target style and replacing the currently displayed augmented writing content with at least one item of augmented writing content of the target style.
Optionally, the presenting the obtained at least one augmented written content includes: and displaying the acquired at least one augmented writing content, and displaying a style identification of the displayed augmented writing content, wherein the style identification is used for indicating the style of the augmented writing content.
Optionally, the input-related information includes input content, and the obtained augmented writing content of each style includes that input content.
Optionally, the augmented writing content of different styles includes at least one of the following: poetry-style augmented writing content, literary-style augmented writing content, and plain-language-style augmented writing content.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
The present application provides an input method, an input device, and a device for input. The principles and embodiments of the present application are described herein using specific examples, and the descriptions of the above examples are provided only to help understand the method and core ideas of the present application. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An input method, characterized in that the method comprises:
when detecting that a user triggers the expanding writing function, acquiring input related information;
acquiring different styles of expanded writing contents based on the input related information;
and presenting the acquired at least one augmented writing content.
2. The method of claim 1, wherein obtaining different styles of augmented written content based on the input related information comprises:
and inputting the input related information into each pre-trained text generation model to obtain the expanded contents output by each text generation model, wherein the text generation models are used for representing the corresponding relation between the input related information and the expanded contents, and the expanded contents output by different text generation models are different in style.
3. The method according to claim 2, wherein the inputting the input related information into each pre-trained text generation model to obtain the augmented written content output by each text generation model comprises:
detecting a current input scene;
selecting each text generation model associated with the input scene;
and inputting the input related information into each selected text generation model to obtain the expanded contents output by each text generation model.
4. The method of claim 1, wherein the input-related information comprises at least one of: the input scene, the input content, and the styles of augmented writing content historically selected by the user; and
the presenting the obtained at least one augmented written content comprises:
determining the priority of each style based on the input related information;
and presenting the augmentation content of at least one highest-priority style.
5. The method of claim 4, further comprising:
when detecting that a user triggers an expanded writing content refreshing function, determining whether the expanded writing contents which are the same as the style of the expanded writing contents currently displayed and are not displayed exist in the obtained expanded writing contents;
if yes, replacing the currently displayed expanded writing contents with at least one item of the undisplayed expanded writing contents;
and if not, taking the style of the next priority as the target style and replacing the currently displayed augmented writing content with at least one item of augmented writing content of the target style.
6. The method of claim 1, wherein the presenting the obtained at least one augmented written content comprises:
and displaying the acquired at least one augmented writing content, and displaying a style identification of the displayed augmented writing content, wherein the style identification is used for indicating the style of the augmented writing content.
7. The method of claim 1, wherein the input-related information comprises input content, and the obtained augmented content of each style comprises the input content.
8. An input device, the device comprising:
a first acquisition unit configured to acquire input-related information when detecting that a user triggers an extended write function;
a second acquisition unit configured to acquire augmented writing content of different styles based on the input-related information;
a first presentation unit configured to present the acquired at least one augmented write content.
9. An apparatus for input, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs comprising instructions for:
when detecting that a user triggers the expanding writing function, acquiring input related information;
acquiring different styles of expanded writing contents based on the input related information;
and presenting the acquired at least one augmented writing content.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202010384042.5A 2020-05-08 2020-05-08 Input method, input device and input device Pending CN113625885A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010384042.5A CN113625885A (en) 2020-05-08 2020-05-08 Input method, input device and input device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010384042.5A CN113625885A (en) 2020-05-08 2020-05-08 Input method, input device and input device

Publications (1)

Publication Number Publication Date
CN113625885A true CN113625885A (en) 2021-11-09

Family

ID=78377395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010384042.5A Pending CN113625885A (en) 2020-05-08 2020-05-08 Input method, input device and input device

Country Status (1)

Country Link
CN (1) CN113625885A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101634905A (en) * 2009-07-01 2010-01-27 广东国笔科技股份有限公司 Intelligent association input system and method
CN104035966A (en) * 2014-05-16 2014-09-10 百度在线网络技术(北京)有限公司 Method and device for providing extended search terms
CN104484057A (en) * 2014-12-04 2015-04-01 百度在线网络技术(北京)有限公司 Associative result providing method and device
CN105094569A (en) * 2015-09-15 2015-11-25 北京金山安全软件有限公司 Information prompting method and device and electronic equipment
CN106855748A (en) * 2015-12-08 2017-06-16 阿里巴巴集团控股有限公司 A kind of data inputting method, device and intelligent terminal
CN106896935A (en) * 2017-02-22 2017-06-27 李晓明 Input method
CN107688398A (en) * 2016-08-03 2018-02-13 中国科学院计算技术研究所 Determine the method and apparatus and input reminding method and device of candidate's input
CN107831915A (en) * 2017-10-17 2018-03-23 北京三快在线科技有限公司 One kind input complementing method, device, electronic equipment and readable storage medium storing program for executing
CN108541310A (en) * 2016-06-22 2018-09-14 华为技术有限公司 A kind of method, apparatus and graphic user interface of display candidate word
CN109635253A (en) * 2018-11-13 2019-04-16 平安科技(深圳)有限公司 Text style conversion method, device and storage medium, computer equipment
CN109783244A (en) * 2017-11-10 2019-05-21 北京搜狗科技发展有限公司 Treating method and apparatus, the device for processing
CN110377766A (en) * 2018-04-11 2019-10-25 北京搜狗科技发展有限公司 A kind of data processing method, device and electronic equipment
CN110457661A (en) * 2019-08-16 2019-11-15 腾讯科技(深圳)有限公司 Spatial term method, apparatus, equipment and storage medium


Similar Documents

Publication Publication Date Title
CN110222256B (en) Information recommendation method and device and information recommendation device
CN109783244B (en) Processing method and device for processing
CN112083811B (en) Candidate item display method and device
CN113625885A (en) Input method, input device and input device
WO2020056948A1 (en) Method and device for data processing and device for use in data processing
CN113515618A (en) Voice processing method, apparatus and medium
CN114610163A (en) Recommendation method, apparatus and medium
CN113221030A (en) Recommendation method, device and medium
CN112306251A (en) Input method, input device and input device
CN112363631A (en) Input method, input device and input device
CN113534973B (en) Input method, device and device for inputting
CN112905079B (en) Data processing method, device and medium
CN110716653B (en) Method and device for determining association source
WO2022105229A1 (en) Input method and apparatus, and apparatus for inputting
CN111381685B (en) Sentence association method and sentence association device
CN113434045A (en) Input method, input device and input device
CN115705095A (en) Input association method and device for input association
CN112783333A (en) Input method, input device and input device
CN113495656A (en) Input method, input device and input device
CN114253404A (en) Input method, input device and input device
CN112445347A (en) Input method, input device and input device
CN113342183A (en) Input method, input device and input device
CN115373523A (en) Input method, input device and input device
CN113885714A (en) Input method, apparatus and medium
CN114510154A (en) Input method, input device and input device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination