CN106325688A - Text processing method and device - Google Patents

Text processing method and device

Info

Publication number
CN106325688A
CN106325688A (application CN201610681142.8A)
Authority
CN
China
Prior art keywords
text
word segmentation
word
segmentation result
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610681142.8A
Other languages
Chinese (zh)
Other versions
CN106325688B (en)
Inventor
罗永浩
田作辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing Hammer Numeral Science And Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Hammer Numeral Science And Technology Co Ltd
Priority to CN201610681142.8A
Publication of CN106325688A
Application granted
Publication of CN106325688B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0483Interaction with page-structured environments, e.g. book metaphor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a text processing method. The method comprises the following steps: in response to an external touch sensed by a touch terminal, acquiring position information of the external touch; determining a word segmentation area according to the position information of the external touch; recognizing the characters in the word segmentation area to obtain a first text; performing word segmentation on the characters in the first text to obtain a word segmentation result; and displaying the word segmentation result. The text processing method and device provided by the invention combine touch sensing, character recognition and word segmentation, so that the characters, words, named entities and the like in the area designated by the external touch can be obtained efficiently. The user can directly select keywords from the text and does not need to enter them again in subsequent operations, which improves operating efficiency.

Description

Text processing method and device
Technical field
The present invention relates to the field of human-computer interaction, and in particular to a text processing method and device.
Background art
At present, people receive a large amount of text information every day on touch terminals such as mobile phones and tablet computers, for example SMS messages and the message pushes of major applications such as instant-messaging software. When the user of a touch terminal wants to operate on a keyword of interest in such text (for example, to search for the keyword or to share it), many manipulations are required; the operation is time-consuming and inconvenient.
Therefore, those skilled in the art need a text processing method and device that make it convenient for the user to operate on keywords in a text.
Summary of the invention
In order to solve the problems of the prior art, the invention provides a text processing method and device that make it convenient for the user to operate on keywords in a text.
An embodiment of the present invention provides a text processing method, including:
in response to an external touch sensed by a touch terminal, acquiring position information of the external touch;
determining a word segmentation area according to the position information of the external touch;
recognizing the characters in the word segmentation area to obtain a first text;
performing word segmentation on the characters in the first text to obtain a word segmentation result; and
displaying the word segmentation result.
Preferably, determining the word segmentation area according to the position information of the external touch specifically includes:
acquiring area position information of each display area on the touch terminal;
according to the position information of the external touch and the area position information of each display area on the touch terminal, detecting one by one the positional relationship between the external touch and each display area on the touch terminal; and
when the external touch falls within a first display area, determining that the first display area is the word segmentation area, the first display area being one of the display areas on the touch terminal.
Preferably, displaying the word segmentation result specifically includes:
generating a segmentation display interface and at least one view control;
adding each word in the word segmentation result to a respective view control; and
displaying all of the view controls on the segmentation display interface.
Preferably, performing word segmentation on the characters in the first text to obtain the word segmentation result specifically includes:
judging whether the number of characters of the first text exceeds a preset value;
if not, performing word segmentation on all of the characters in the first text to obtain the word segmentation result; and
if so, determining a second text according to the position information of the external touch, and performing word segmentation on all of the characters in the second text to obtain the word segmentation result, wherein the first text includes all of the characters in the second text and the number of characters in the second text is equal to the preset value.
Preferably, after displaying the word segmentation result, the method further includes:
receiving a keyword selection instruction triggered by the user, the keyword selection instruction being issued according to the word segmentation result;
obtaining, according to the keyword selection instruction, the keyword selected by the user from the word segmentation result;
displaying the keyword;
receiving a keyword operation instruction triggered by the user, the keyword operation instruction carrying an operation type, the operation type including searching and sharing; and
operating on the keyword according to the operation type.
An embodiment of the present invention further provides a text processing device, including: an acquiring unit, a determining unit, a recognition unit, a segmentation unit and a display unit;
the acquiring unit is configured to, in response to an external touch sensed by a touch terminal, acquire position information of the external touch;
the determining unit is configured to determine a word segmentation area according to the position information of the external touch;
the recognition unit is configured to recognize the characters in the word segmentation area to obtain a first text;
the segmentation unit is configured to perform word segmentation on the characters in the first text to obtain a word segmentation result; and
the display unit is configured to display the word segmentation result.
Preferably, the determining unit includes: an acquiring subunit, a detecting subunit and a first determining subunit;
the acquiring subunit is configured to acquire area position information of each display area on the touch terminal;
the detecting subunit is configured to detect, one by one, the positional relationship between the external touch and each display area on the touch terminal according to the position information of the external touch and the area position information of each display area on the touch terminal; and
the first determining subunit is configured to, when the detecting subunit detects that the external touch falls within a first display area, determine that the first display area is the word segmentation area, the first display area being one of the display areas on the touch terminal.
Preferably, the display unit includes: a generating subunit, a display subunit and an adding subunit;
the generating subunit is configured to generate a segmentation display interface and at least one view control;
the adding subunit is configured to add each word in the word segmentation result to a respective view control; and
the display subunit is configured to display all of the view controls on the segmentation display interface.
Preferably, the segmentation unit includes: a judging subunit, a segmentation subunit and a second determining subunit;
the judging subunit is configured to judge whether the number of characters of the first text exceeds a preset value;
the segmentation subunit is configured to, when the judging subunit judges that the number of characters of the first text does not exceed the preset value, perform word segmentation on all of the characters in the first text to obtain the word segmentation result;
the second determining subunit is configured to, when the judging subunit judges that the number of characters of the first text exceeds the preset value, determine a second text according to the position information of the external touch, wherein the first text includes all of the characters in the second text and the number of characters in the second text is equal to the preset value; and
the segmentation subunit is further configured to, when the second determining subunit determines the second text, perform word segmentation on all of the characters in the second text to obtain the word segmentation result.
Preferably, the device further includes: a receiving unit and an operating unit;
the receiving unit is configured to receive a keyword selection instruction triggered by the user, the keyword selection instruction being issued according to the word segmentation result;
the acquiring unit is further configured to obtain, according to the keyword selection instruction, the keyword selected by the user from the word segmentation result;
the display unit is further configured to display the keyword;
the receiving unit is further configured to receive a keyword operation instruction triggered by the user, the keyword operation instruction carrying an operation type, the operation type including searching and sharing; and
the operating unit is configured to operate on the keyword according to the operation type.
An embodiment of the present invention further provides a text processing method, including:
displaying a text area on a touch screen;
in response to an external touch on the text area, performing word segmentation on the characters in the text area; and
displaying the word segmentation result.
Preferably, displaying the word segmentation result specifically includes:
generating a segmentation display interface, the segmentation display interface including at least one sub-view; and
displaying each word in the word segmentation result in a respective sub-view.
An embodiment of the present invention further provides a text processing device, including: a text area display unit, a segmentation unit and a segmentation result display unit;
the text area display unit is configured to display a text area on a touch screen;
the segmentation unit is configured to, in response to an external touch on the text area, perform word segmentation on the characters in the text area; and
the segmentation result display unit is configured to display the word segmentation result.
Preferably, the segmentation result display unit includes: a generating subunit and a display subunit;
the generating subunit is configured to generate a segmentation display interface, the segmentation display interface including at least one sub-view; and
the display subunit is configured to display each word in the word segmentation result in a respective sub-view.
Compared with the prior art, the present invention has at least the following advantages:
In the text processing method and device provided by the embodiments of the present invention, after the touch terminal senses an external touch, the position information of the external touch is acquired and a word segmentation area is determined from it. The word segmentation area is the area containing the keyword on which the user needs to operate further. The characters in the word segmentation area are then recognized to obtain a first text, and word segmentation is performed on the first text to obtain a word segmentation result. The word segmentation result is displayed so that the user can select one or more keywords from it and proceed to the next operation. The method and device thus combine touch sensing, character recognition and word segmentation: the characters, words, named entities and the like in the area designated by the external touch are obtained efficiently, the user can directly select keywords from the text, no additional keyword input is required in subsequent operations, and operating efficiency is improved.
Brief description of the drawings
In order to describe the technical solutions of the embodiments of the present application, or of the prior art, more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flow chart of embodiment one of the text processing method provided by the present invention;
Fig. 2 is a flow chart of embodiment two of the text processing method provided by the present invention;
Fig. 3 is a flow chart of embodiment three of the text processing method provided by the present invention;
Fig. 4 is a structural diagram of embodiment one of the text processing device provided by the present invention;
Fig. 5 is a structural diagram of embodiment two of the text processing device provided by the present invention;
Fig. 6 is a structural diagram of embodiment three of the text processing device provided by the present invention;
Fig. 7(a) is a schematic diagram of the external touch area and the word segmentation area in the text processing method and device provided by an embodiment of the present invention;
Fig. 7(b) is a schematic diagram of displaying the word segmentation result in the text processing method and device provided by an embodiment of the present invention;
Figs. 8(a)-8(c) are schematic diagrams of displaying the word segmentation result in the text processing method and device provided by an embodiment of the present invention;
Figs. 9(a) and 9(b) are schematic diagrams of operating on a keyword in the word segmentation result in the text processing method and device provided by an embodiment of the present invention;
Fig. 10 is a schematic flow chart of embodiment four of the text processing method provided by the present invention;
Fig. 11 is a schematic structural diagram of embodiment four of the text processing device provided by the present invention.
Detailed description of the invention
In order to make those skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
To make the above objects, features and advantages of the present invention more comprehensible, specific embodiments of the present invention are described in detail below with reference to the drawings.
Before the specific embodiments of the present invention are introduced, several technical terms relevant to them are explained first.
Pressure touch: when a touch terminal such as a touch-screen mobile phone senses external pressure, the system captures the pressure information.
Word segmentation: cutting a character sequence into individual characters, words or named entities; that is, the process of recombining a continuous character sequence into a sequence of individual characters, words or named entities according to certain rules.
Named entity: a person name, organization name, place name or any other entity identified by a name; in a broader sense, named entities also include numbers, dates, currencies, addresses and the like.
Keyword: a text fragment of interest to the user within a complete sentence.
It should also be noted that the text processing method and device provided by the embodiments of the present invention can be implemented regardless of which application program is running on the touch terminal. Such application programs include, but are not limited to, SMS, web browsing, instant messaging and other programs with a text display function.
Method embodiment one:
Referring to Fig. 1, which is a flow chart of embodiment one of the text processing method provided by the present invention.
The text processing method provided by this embodiment includes:
S101: in response to an external touch sensed by a touch terminal, acquiring position information of the external touch.
It can be understood that the touch terminal may be any device with a touch sensing function, including but not limited to mobile phones and tablet computers. The external touch it senses includes, but is not limited to, single-point or multi-point pressing, single-point or multi-point sliding, single-point or multi-point touching, single-point or multi-point pressure touch, and touch-area sensing. When an external operation meets the corresponding sensing threshold, the touch terminal senses that operation.
It should be noted that when the touch terminal senses an external touch, the position information of the external touch on the touch terminal, such as coordinates, can be obtained. The acquired position information of the external touch is its coordinate on the touch terminal (typically composed of an X-axis coordinate and a Y-axis coordinate). The system can thereby identify the position at which the external touch is applied, so that corresponding operations can be carried out on the touched area.
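As an illustration only, the sketch below shows how such position information might be captured with an Android-style touch listener; the class name and callback are assumptions made for this sketch, and a pressure-touch or multi-point variant would hook in at the same place.

```kotlin
import android.view.MotionEvent
import android.view.View

// Illustrative sketch (not part of the claimed method): capture the screen
// coordinates of an external touch on a touch terminal.
class ExternalTouchListener(
    private val onExternalTouch: (x: Float, y: Float) -> Unit  // assumed callback
) : View.OnTouchListener {
    override fun onTouch(v: View, event: MotionEvent): Boolean {
        if (event.actionMasked == MotionEvent.ACTION_DOWN) {
            // rawX / rawY are the X-axis and Y-axis screen coordinates of the
            // touch, i.e. the "position information" described above.
            onExternalTouch(event.rawX, event.rawY)
            return true
        }
        return false
    }
}
```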
S102: determining a word segmentation area according to the position information of the external touch.
It can be understood that the word segmentation area can be determined from the view currently displayed on the touch terminal and the position information of the external touch. Typically, the touch terminal system includes picture display modules and text display modules, and a text display module in turn includes several different text display sub-modules, such as an SMS text box. The display modules and text boxes are delimited by area coordinates, and the system generally stores the position information of each module on the touch terminal. When an external touch is sensed and its position information obtained, the display module in which the external touch lies can be identified and taken as the word segmentation area.
The process of determining the word segmentation area in this embodiment is described in detail below, taking as an example an SMS application displayed on the touch terminal. It should be understood that the following is merely illustrative and does not limit the present invention in any way.
As shown in Fig. 7(a), the external touch falls on the touch terminal within the circular region enclosed by the finger. The system obtains the position information of the external touch and, according to that information, determines that the word segmentation area is the region delimited by the SMS text box under the finger.
In addition, when a full-screen application such as a web page or a reader is running on the touch terminal, the display areas on the touch terminal are not separated by visible lines. In that case the screen can be divided into picture display areas and text display areas, each likewise represented by a picture display module or a text display module and delimited by position information. When the external touch is within a text display area, the whole text-displaying region on the touch terminal is the word segmentation area.
It should also be noted that Fig. 7(a) only illustrates how the word segmentation area is determined when the external touch is activated by a single-finger press. The present invention does not limit how the external touch is activated; a person skilled in the art may set this according to the actual situation, and the specific implementations are similar to the above and are not repeated here.
S103: recognizing the characters in the word segmentation area to obtain a first text.
It can be understood that when the system holds the content information of a display module, the text content represented in the text display module of the word segmentation area can be obtained directly as the first text. Taking SMS display as an example, the first text is the set of all characters in the SMS box; taking a reader as an example, the first text is the set of all characters currently shown on the screen.
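Purely as an illustration (the embodiment does not prescribe any API), when the word segmentation area maps onto a text display module whose content the system already holds, obtaining the first text can amount to reading that module's text, as in the hypothetical sketch below.

```kotlin
import android.view.View
import android.widget.TextView

// Illustrative only: read the "first text" directly from a text display
// module, modelled here as an Android TextView, that was determined to be
// the word segmentation area. Returns null if the area displays no text.
fun firstTextOf(segmentationArea: View): String? =
    (segmentationArea as? TextView)?.text?.toString()
```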
S104: performing word segmentation on the characters in the first text to obtain a word segmentation result.
S105: displaying the word segmentation result.
After word segmentation is performed on the first text, a number of characters, words, phrases and named entities, i.e. the word segmentation result, are obtained, as shown in Fig. 7(b). It should be noted here that a natural-language algorithm can be used to segment the first text according to the concrete semantics of its characters; the specific segmentation method and process are not described further here.
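The description leaves the segmentation algorithm open. Purely as a stand-in for a real natural-language segmenter, the sketch below uses forward maximum matching against an illustrative dictionary; the function name and dictionary are assumptions.

```kotlin
// Minimal forward-maximum-matching sketch of the segmentation step; a real
// implementation would use a full natural-language segmenter. Unknown spans
// fall back to single characters.
fun segment(text: String, dictionary: Set<String>, maxWordLen: Int = 4): List<String> {
    val result = mutableListOf<String>()
    var i = 0
    while (i < text.length) {
        var matched = text.substring(i, i + 1)       // fall back to one character
        var end = minOf(text.length, i + maxWordLen)
        while (end > i + 1) {                        // try the longest dictionary entry first
            val candidate = text.substring(i, end)
            if (candidate in dictionary) {
                matched = candidate
                break
            }
            end--
        }
        result += matched
        i += matched.length
    }
    return result
}
```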
After the text in the word segmentation area has been segmented, the word segmentation result is displayed. The user can then directly select, from the word segmentation result, the keywords that require further operation, which makes it convenient to operate directly on the keywords in the text.
It should be noted that, as shown in Figs. 8(a)-8(c), the user may select one or more characters, words or named entities from the word segmentation result, and the selected items may be contiguous or non-contiguous. The word segmentation result may also be displayed in forms other than those shown in Fig. 8, for example in a window created near the external touch area. A person skilled in the art may set the display form of the word segmentation result and the order in which its items are arranged according to the actual situation. The characters, words or named entities in the word segmentation result may be separated by spacing, as in Figs. 8(a)-8(c), or separated only by dividing lines with no spacing. When the word segmentation result is too large to be shown completely in a legible way, it may be displayed once and scrolled, or shown to the user in batches.
In the text processing method provided by this embodiment, after the touch terminal senses an external touch, the position information of the external touch is acquired and the word segmentation area is determined from it. The word segmentation area is the area containing the keyword on which the user needs to operate further. The characters in the word segmentation area are then recognized to obtain a first text, and word segmentation is performed on the first text to obtain a word segmentation result. The word segmentation result is displayed so that the user can select one or more keywords from it and proceed to the next operation. The method thus combines touch sensing, character recognition and word segmentation: the characters, words and named entities in the area designated by the external touch are obtained efficiently, the user can directly select keywords from the text, no additional keyword input is required in subsequent operations, and operating efficiency is improved.
Method embodiment two:
Referring to Fig. 2, which is a flow chart of embodiment two of the text processing method provided by the present invention. Compared with Fig. 1, this embodiment provides a more specific text processing method.
S201 in this embodiment is identical to S101 in method embodiment one and is not repeated here.
The text processing method provided by this embodiment further includes:
S202: acquiring area position information of each display area on the touch terminal.
It can be understood that the position of each display area on the touch terminal may change as the terminal is operated. Therefore, to ensure that the text in the area the user is interested in is obtained correctly, the word segmentation area should be determined from the positions of the display areas on the touch terminal at the moment the external touch is sensed.
S203: according to the position information of the external touch and the area position information of each display area on the touch terminal, detecting one by one the positional relationship between the external touch and each display area on the touch terminal.
S204: when the external touch falls within a first display area, determining that the first display area is the word segmentation area, the first display area being one of the display areas on the touch terminal.
Taking coordinates as an example, the area position information of each display area on the touch terminal is a coordinate range. Once the coordinates of the external touch are obtained, it can be determined within which display area's coordinate range the external touch falls; the display area to which the external touch belongs is the word segmentation area.
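A minimal sketch of this coordinate hit test, assuming each display area is stored as a rectangular coordinate range (the DisplayArea type is invented here for illustration):

```kotlin
// Illustrative hit test: the first display area whose coordinate range
// contains the touch point is taken as the word segmentation area.
data class DisplayArea(
    val id: String,
    val left: Float, val top: Float,
    val right: Float, val bottom: Float
)

fun findSegmentationArea(x: Float, y: Float, areas: List<DisplayArea>): DisplayArea? =
    areas.firstOrNull { x in it.left..it.right && y in it.top..it.bottom }
```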
In addition, the content displayed on the touch terminal may include more than text, for example pictures, and the text processing method provided by the present invention obviously operates on text. Therefore, the method provided by this embodiment also includes a step of judging whether the word segmentation area contains text. If the word segmentation area contains no text, the text processing procedure ends; if it does contain text, step S205 is performed.
S205: judging whether the number of characters of the first text exceeds a preset value; if not, performing step S206; if so, performing step S207.
It should be noted that the preset value may be, for example, 100 or 200, and may also be set or obtained according to the actual situation; the possibilities are not enumerated here.
S206: performing word segmentation on all of the characters in the first text to obtain the word segmentation result.
The text processing area may contain a very large number of characters, and in practice segmenting every character in the first text would produce an oversized word segmentation result, making it inconvenient for the user to pick out keywords. Therefore, in order to improve segmentation efficiency and make it easier for the user to select keywords from the word segmentation result, the method provided by this embodiment may, depending on the actual situation, segment only part of the text in the word segmentation area and let the user select keywords from the segmentation result of that part, improving the user's interactive experience.
In that case, the text processing method provided by this embodiment further includes:
S207: determining a second text according to the position information of the external touch, and performing word segmentation on all of the characters in the second text to obtain the word segmentation result, wherein the first text includes all of the characters in the second text and the number of characters in the second text is equal to the preset value.
It should be noted that once the position information of the external touch is obtained, the character of the first text nearest the external touch is known. A number of characters near the external touch equal to the preset value can then be taken, according to a preset rule, to form the second text, and the second text is segmented to obtain the word segmentation result. For example, the 50 or 100 characters near the pressure sensing area may form the second text, with half of them taken before the position of the external touch and half after it. A person skilled in the art may also set the way the second text is obtained according to the actual situation; the possibilities are not enumerated here.
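As one possible reading of this step (the exact rule is left open by the description), the sketch below builds the second text by taking roughly half of the preset character budget before the touched character and half after it, clamped to the bounds of the first text:

```kotlin
// Illustrative windowing rule for the second text; the split around the
// touched character is an example, not the only option.
fun secondText(firstText: String, touchedIndex: Int, presetValue: Int): String {
    if (firstText.length <= presetValue) return firstText
    val half = presetValue / 2
    var start = (touchedIndex - half).coerceAtLeast(0)
    var end = start + presetValue
    if (end > firstText.length) {        // shift the window back if it overruns the text
        end = firstText.length
        start = end - presetValue
    }
    return firstText.substring(start, end)
}
```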
Method embodiment three:
Referring to Fig. 3, which is a flow chart of embodiment three of the text processing method provided by the present invention. Compared with Fig. 1, this embodiment provides a more specific text processing method.
S301-S304 in this embodiment are identical to S101-S104 in method embodiment one and are not repeated here.
It can be understood that the word segmentation result obtained in the above embodiments may be displayed to the user in a window, and the user may close this keyword selection window to end the keyword selection process.
The text processing method provided by this embodiment further includes:
S305: generating a segmentation display interface and at least one view control.
S306: adding each word in the word segmentation result to a respective view control, and displaying all of the view controls on the segmentation display interface.
It should be noted that Figs. 8(a)-8(c) show one concrete form of the segmentation display interface. Each rectangular block in the segmentation display interface is a displayed view control, and each view control displays one item (character, word or named entity) of the word segmentation result. The display size, display position and so on of each view control can be set according to the actual situation. For example, the view controls may be scattered across the segmentation display interface to make it easier for the user to select keywords from them. A person skilled in the art may also, as required, display the characters, words or named entities of the word segmentation result in different colors, fonts or sizes, for example using a different display effect for numbers in the word segmentation result or for words the user is likely to select. When the user selects a keyword from the word segmentation result, the corresponding view control in the segmentation display interface can be tapped directly.
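For illustration only, a segmentation display interface of this kind could be assembled as below, with one small text view per segmented item added to a container layout that stands in for the interface; styling, scrolling and multi-selection are omitted, and the function name is an assumption.

```kotlin
import android.content.Context
import android.widget.LinearLayout
import android.widget.TextView

// Illustrative sketch of the display step: one view control per item of the
// word segmentation result, each reporting a tap as a keyword selection.
fun showSegmentationResult(
    context: Context,
    container: LinearLayout,          // stands in for the segmentation display interface
    words: List<String>,
    onWordSelected: (String) -> Unit
) {
    container.removeAllViews()
    for (word in words) {
        val chip = TextView(context).apply {
            text = word
            setPadding(16, 8, 16, 8)
            setOnClickListener { onWordSelected(word) }  // keyword selection instruction
        }
        container.addView(chip)
    }
}
```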
In addition, the segmentation display interface or window includes a close button, such as the "X" button in the lower-left corner of Fig. 8(c); the user closes the segmentation display interface or window by tapping this button.
The text processing method provided by this embodiment further includes:
S307: receiving a keyword selection instruction triggered by the user, the keyword selection instruction being issued according to the word segmentation result.
It should be noted that, as shown in Figs. 8(a)-8(c), the user may tap one or more view controls to select one or more words or named entities, contiguous or not, from the word segmentation result.
S308: obtaining, according to the keyword selection instruction, the keyword selected by the user from the word segmentation result.
S309: displaying the keyword.
As shown in Figs. 9(a) and 9(b), the user triggers the keyword selection instruction by tapping a word or named entity in the word segmentation result. After the keyword selection instruction is received, the selected keyword is displayed prominently (for example highlighted, or shown with a changed control or text color or a changed font) so that the user can carry out a subsequent operation on it.
S310: receiving a keyword operation instruction triggered by the user, the keyword operation instruction carrying an operation type, the operation type including searching and sharing.
S311: operating on the keyword according to the operation type.
After the user selects one or more view controls, operation buttons corresponding to the various operation types are generated at the relevant positions. By tapping an operation button near the keyword, the user triggers the corresponding keyword operation instruction for that keyword; different operation buttons represent different operation types. The selected keyword is then operated on according to that operation type. The operations include, but are not limited to, searching and sharing: Fig. 9(a) is an example of a search operation on a keyword, and Fig. 9(b) is an example of a share operation. The operations may be applied to a single character, word or named entity in the word segmentation result, or to several of them.
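The embodiments do not tie the search and share operations to any particular API. As one hedged example, on an Android-style client the two operation types could be dispatched through standard intents:

```kotlin
import android.app.SearchManager
import android.content.Context
import android.content.Intent

// Illustrative dispatch of the keyword operation instruction; "search" and
// "share" are the two operation types named in the description.
fun operateOnKeyword(context: Context, keyword: String, operationType: String) {
    when (operationType) {
        "search" -> context.startActivity(
            Intent(Intent.ACTION_WEB_SEARCH).putExtra(SearchManager.QUERY, keyword)
        )
        "share" -> context.startActivity(
            Intent.createChooser(
                Intent(Intent.ACTION_SEND).apply {
                    type = "text/plain"
                    putExtra(Intent.EXTRA_TEXT, keyword)
                },
                "Share keyword"
            )
        )
    }
}
```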
It should be particularly noted that Figs. 7, 8 and 9 are only examples, given for ease of understanding, of the text processing method provided by the above embodiments, and do not limit the present invention. A person skilled in the art may set the specific implementation of the text processing method provided by the present invention according to the actual situation; this is not repeated here.
Based on the text processing method provided by the above embodiments, an embodiment of the present invention further provides a text processing device.
Device embodiment one:
Referring to Fig. 4, which is a structural diagram of embodiment one of the text processing device provided by the present invention.
The text processing device provided by this embodiment includes: an acquiring unit 100, a determining unit 200, a recognition unit 300, a segmentation unit 400 and a display unit 500.
The acquiring unit 100 is configured to, in response to an external touch sensed by a touch terminal, acquire position information of the external touch.
It can be understood that the touch terminal may be any device with a touch sensing function, including but not limited to mobile phones and tablet computers.
The determining unit 200 is configured to determine a word segmentation area according to the position information of the external touch.
The recognition unit 300 is configured to recognize the characters in the word segmentation area to obtain a first text.
The segmentation unit 400 is configured to perform word segmentation on the characters in the first text to obtain a word segmentation result.
The display unit 500 is configured to display the word segmentation result.
In the text processing device provided by this embodiment, after the touch terminal senses an external touch, the acquiring unit acquires the position information of the external touch and the determining unit determines the word segmentation area from it. The word segmentation area is the area containing the keyword on which the user needs to operate further. The recognition unit then recognizes the characters in the word segmentation area to obtain a first text, and the segmentation unit performs word segmentation on the first text to obtain a word segmentation result. The display unit displays the word segmentation result so that the user can select keywords from it and proceed to the next operation. The device thus combines touch sensing, character recognition and word segmentation: the characters, words and named entities in the area designated by the external touch are obtained efficiently, the user can directly select keywords from the text, no additional keyword input is required in subsequent operations, and operating efficiency is improved.
Device embodiment two:
Referring to Fig. 5, which is a structural diagram of embodiment two of the text processing device provided by the present invention. Compared with Fig. 4, this embodiment provides a more specific text processing device.
In the text processing device provided by this embodiment, the determining unit includes: an acquiring subunit 201, a detecting subunit 202 and a first determining subunit 203.
The acquiring subunit 201 is configured to acquire area position information of each display area on the touch terminal.
The detecting subunit 202 is configured to detect, one by one, the positional relationship between the external touch and each display area on the touch terminal according to the position information of the external touch and the area position information of each display area on the touch terminal.
The first determining subunit 203 is configured to, when the detecting subunit 202 detects that the external touch falls within a first display area, determine that the first display area is the word segmentation area, the first display area being one of the display areas on the touch terminal.
In the text processing device provided by this embodiment, the segmentation unit includes: a judging subunit 401, a segmentation subunit 402 and a second determining subunit 403.
The judging subunit 401 is configured to judge whether the number of characters of the first text exceeds a preset value.
The segmentation subunit 402 is configured to, when the judging subunit 401 judges that the number of characters of the first text does not exceed the preset value, perform word segmentation on all of the characters in the first text to obtain the word segmentation result.
The second determining subunit 403 is configured to, when the judging subunit 401 judges that the number of characters of the first text exceeds the preset value, determine a second text according to the position information of the external touch, wherein the first text includes all of the characters in the second text and the number of characters in the second text is equal to the preset value.
The segmentation subunit 402 is further configured to, when the second determining subunit 403 determines the second text, perform word segmentation on all of the characters in the second text to obtain the word segmentation result.
Device embodiment three:
Referring to Fig. 6, which is a structural diagram of embodiment three of the text processing device provided by the present invention.
In the text processing device provided by this embodiment, the display unit includes: a generating subunit 501, a display subunit 502 and an adding subunit 503.
The generating subunit 501 is configured to generate a segmentation display interface and at least one view control.
The adding subunit 503 is configured to add each word in the word segmentation result to a respective view control.
The display subunit 502 is configured to display all of the view controls on the segmentation display interface.
The text processing device provided by this embodiment further includes: a receiving unit 600 and an operating unit 700.
The receiving unit 600 is configured to receive a keyword selection instruction triggered by the user, the keyword selection instruction being issued according to the word segmentation result.
The acquiring unit 100 is further configured to obtain, according to the keyword selection instruction, the keyword selected by the user from the word segmentation result.
The display unit 500 is further configured to display the keyword.
The receiving unit 600 is further configured to receive a keyword operation instruction triggered by the user, the keyword operation instruction carrying an operation type, the operation type including searching and sharing.
The operating unit 700 is configured to operate on the keyword according to the operation type.
Method embodiment four:
Referring to Fig. 10, which is a schematic flow chart of embodiment four of the text processing method provided by the present invention.
It should be noted that the text processing method provided by this embodiment can be applied to a client, the client being any device with a touch sensing function, including but not limited to mobile phones and tablet computers.
The text processing method provided by this embodiment includes:
S1001: displaying a text area on a touch screen.
It can be understood that the touch screen is the display screen of the client's display device, for example the touch screen of a mobile phone. The client displays different types of content in different regions of the touch screen, which may include one or more text areas, one or more picture areas and so on.
S1002: in response to an external touch on the text area, performing word segmentation on the characters in the text area.
By sensing the user's external touch on the corresponding text area, it can be determined which part of the content displayed on the touch screen needs to be operated on. The external touch includes, but is not limited to, single-point or multi-point pressing, single-point or multi-point sliding, single-point or multi-point touching, single-point or multi-point pressure touch, and touch-area sensing. When an external operation meets the corresponding sensing threshold, the client senses the corresponding external touch. After word segmentation is performed on the characters in the text area, a number of characters, words, phrases and named entities, i.e. the word segmentation result, are obtained. Character recognition technology can be used to obtain the characters in the text area.
S1003: displaying the word segmentation result.
After the text in the text area has been segmented, the word segmentation result is displayed. The user can then directly select, from the word segmentation result, the keywords that require further operation, which makes it convenient to operate directly on the keywords in the text.
The specific operation process can be seen in Figs. 7 and 8 and is not repeated here. It should be noted that Figs. 7 and 8 are only illustrative and do not limit the present invention in any way.
An example of how the word segmentation result may be displayed is given below. It can be understood that a person skilled in the art may also set the display form of the word segmentation result according to the actual situation; the possibilities are not enumerated here.
A segmentation display interface is generated, the segmentation display interface including at least one sub-view;
each word in the word segmentation result is displayed in a respective sub-view.
It should be noted that Figs. 8(a)-8(c) show one concrete form of the segmentation display interface. Each rectangular block in the segmentation display interface is a displayed sub-view, and each sub-view displays one item (character, word or named entity) of the word segmentation result. The display size, display position and so on of each sub-view can be set according to the actual situation; for example, the display size of a sub-view may be based on the number of characters to be shown and on their font and font size, and the sub-views may be scattered across the segmentation display interface to make it easier for the user to select keywords from them. A person skilled in the art may also, as required, display the characters, words or named entities of the word segmentation result in different colors, fonts or sizes, for example using a different display effect for numbers in the word segmentation result or for words the user is likely to select. When the user selects a keyword from the word segmentation result, the corresponding sub-view in the segmentation display interface can be tapped directly.
In the text processing method provided by this embodiment, after a text area is displayed on the touch screen, once an external touch on the text area is sensed, word segmentation is performed on the characters in the text area. The word segmentation result is then displayed so that the user can select one or more keywords from it and proceed to the next operation. The method thus combines touch sensing, character recognition and word segmentation: the characters, words and named entities in the area designated by the external touch are obtained efficiently, the user can directly select keywords from the text, no additional keyword input is required in subsequent operations, and operating efficiency is improved.
Based on the text processing method provided by the above embodiment, an embodiment of the present invention further provides a text processing device.
Device embodiment four:
Referring to Fig. 11, which is a schematic structural diagram of embodiment four of the text processing device provided by the present invention.
The text processing device provided by this embodiment includes: a text area display unit 10, a segmentation unit 20 and a segmentation result display unit 30.
The text area display unit 10 is configured to display a text area on a touch screen.
The segmentation unit 20 is configured to, in response to an external touch on the text area, perform word segmentation on the characters in the text area.
The segmentation result display unit 30 is configured to display the word segmentation result.
An example of how the segmentation result display unit 30 may display the word segmentation result is given below. It can be understood that a person skilled in the art may also set the display form of the word segmentation result according to the actual situation; the possibilities are not enumerated here.
The segmentation result display unit 30 includes: a generating subunit 31 and a display subunit 32.
The generating subunit 31 is configured to generate a segmentation display interface, the segmentation display interface including at least one sub-view.
The display subunit 32 is configured to display each word in the word segmentation result in a respective sub-view.
In the text processing device provided by this embodiment, after the text area display unit displays a text area on the touch screen, once the segmentation unit senses an external touch on the text area, word segmentation is performed on the characters in the text area. The segmentation result display unit then displays the word segmentation result so that the user can select one or more keywords from it and proceed to the next operation. The device thus combines touch sensing, character recognition and word segmentation: the characters, words and named entities in the area designated by the external touch are obtained efficiently, the user can directly select keywords from the text, no additional keyword input is required in subsequent operations, and operating efficiency is improved.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to each other. Since the devices disclosed in the embodiments correspond to the methods disclosed in the embodiments, their description is relatively brief, and the relevant parts can be found in the description of the methods.
It should also be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. In the absence of further limitation, an element defined by the statement "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
The steps of the methods or algorithms described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
The above are only preferred embodiments of the present invention and do not limit the present invention in any form. Although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the scope of the technical solutions of the present invention, use the methods and technical content disclosed above to make many possible variations and modifications to the technical solutions of the present invention, or revise them into equivalent embodiments. Therefore, any simple modification, equivalent change or modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solutions of the present invention, still falls within the protection scope of the technical solutions of the present invention.

Claims (14)

1. a text handling method, it is characterised in that including:
The extraneous touch-control sensed in response to touch control terminal, obtains the positional information of described extraneous touch-control;
According to the positional information of described extraneous touch-control, determine participle region;
Identify the word in described participle region, obtain the first text;
Word in described first text is carried out participle, obtains word segmentation result;
Show described word segmentation result.
Text handling method the most according to claim 1, it is characterised in that the described position letter according to described extraneous touch-control Breath, determines participle region, specifically includes:
Obtain the zone position information of each viewing area on described touch control terminal;
The zone position information of each viewing area on positional information according to described extraneous touch-control and described touch control terminal, one by one Detect described extraneous touch-control and the position relationship of each viewing area on described touch control terminal;
When described extraneous touch-control falls in the first viewing area, it is determined that described first viewing area is described participle region, Described first viewing area is a viewing area on described touch control terminal.
3. The text processing method according to claim 1, characterized in that displaying the word segmentation result specifically comprises:
generating a word segmentation display interface and at least one view control;
adding each word in the word segmentation result to a respective view control;
displaying all of the view controls on the word segmentation display interface.
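A sketch of the per-word view controls of claim 3. ViewControl and SegmentationDisplayInterface are hypothetical stand-ins for whatever widget toolkit the terminal provides; the claim only requires that every word end up in its own view control on one display interface.

data class ViewControl(val word: String)
data class SegmentationDisplayInterface(val controls: List<ViewControl>)

// Build the display interface: one view control per word in the segmentation result.
fun buildDisplayInterface(segmentationResult: List<String>): SegmentationDisplayInterface =
    SegmentationDisplayInterface(segmentationResult.map { word -> ViewControl(word) })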
4. The text processing method according to claim 1, characterized in that performing word segmentation on the text in the first text to obtain the word segmentation result specifically comprises:
judging whether the number of characters of the first text is greater than a preset value;
if not, performing word segmentation on all of the text in the first text to obtain the word segmentation result;
if so, determining a second text according to the position information of the external touch, and performing word segmentation on all of the text in the second text to obtain the word segmentation result, wherein the first text includes all of the text in the second text, and the number of characters in the second text is equal to the preset value.
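A sketch of the length check in claim 4. The claim fixes only the length of the second text and that it is taken from the first text according to the touch position; centering a window of presetValue characters on the touched character, as below, is just one possible reading, and touchedCharIndex is an assumed input.

fun segmentWithLimit(
    firstText: String,
    presetValue: Int,
    touchedCharIndex: Int,                      // assumed: index of the character under the touch
    segmentWords: (String) -> List<String>
): List<String> {
    if (firstText.length <= presetValue) {
        return segmentWords(firstText)          // segment all of the first text
    }
    // Second text: presetValue characters of the first text, centered on the touch position.
    val start = (touchedCharIndex - presetValue / 2).coerceIn(0, firstText.length - presetValue)
    val secondText = firstText.substring(start, start + presetValue)
    return segmentWords(secondText)
}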
5. The text processing method according to any one of claims 1 to 4, characterized in that after displaying the word segmentation result, the method further comprises:
receiving a keyword selection instruction triggered by a user, the keyword selection instruction being sent according to the word segmentation result;
obtaining, according to the keyword selection instruction, the keyword selected by the user from the word segmentation result;
displaying the keyword;
receiving a keyword operation instruction triggered by the user, the keyword operation instruction carrying an operation type, the operation type including searching and sharing;
operating on the keyword according to the operation type.
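A sketch of dispatching the keyword operation of claim 5. The OperationType enum and the search/share callbacks are hypothetical; the claim only requires that the operation type cover at least searching and sharing.

enum class OperationType { SEARCH, SHARE }

// Dispatch the selected keyword according to the operation type carried by the instruction.
fun operateOnKeyword(
    keyword: String,
    type: OperationType,
    search: (String) -> Unit,   // e.g. hand the keyword to a search entry point
    share: (String) -> Unit     // e.g. hand the keyword to a sharing target
) {
    when (type) {
        OperationType.SEARCH -> search(keyword)
        OperationType.SHARE -> share(keyword)
    }
}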
6. A text processing device, characterized by comprising: an acquiring unit, a determining unit, a recognition unit, a word segmentation unit, and a display unit;
the acquiring unit is configured to, in response to an external touch sensed by a touch control terminal, acquire position information of the external touch;
the determining unit is configured to determine a word segmentation region according to the position information of the external touch;
the recognition unit is configured to recognize the text in the word segmentation region to obtain a first text;
the word segmentation unit is configured to perform word segmentation on the text in the first text to obtain a word segmentation result;
the display unit is configured to display the word segmentation result.
7. The text processing device according to claim 6, characterized in that the determining unit comprises: an acquiring subunit, a detection subunit, and a first determining subunit;
the acquiring subunit is configured to acquire region position information of each display region on the touch control terminal;
the detection subunit is configured to detect, one by one, the positional relationship between the external touch and each display region on the touch control terminal according to the position information of the external touch and the region position information of each display region on the touch control terminal;
the first determining subunit is configured to, when the detection subunit detects that the external touch falls within a first display region, determine that the first display region is the word segmentation region, the first display region being a display region on the touch control terminal.
8. The text processing device according to claim 6, characterized in that the display unit comprises: a generating subunit, a display subunit, and an adding subunit;
the generating subunit is configured to generate a word segmentation display interface and at least one view control;
the adding subunit is configured to add each word in the word segmentation result to a respective view control;
the display subunit is configured to display all of the view controls on the word segmentation display interface.
9. The text processing device according to claim 6, characterized in that the word segmentation unit comprises: a judging subunit, a word segmentation subunit, and a second determining subunit;
the judging subunit is configured to judge whether the number of characters of the first text is greater than a preset value;
the word segmentation subunit is configured to, when the judging subunit judges that the number of characters of the first text is not greater than the preset value, perform word segmentation on all of the text in the first text to obtain the word segmentation result;
the second determining subunit is configured to, when the judging subunit judges that the number of characters of the first text is greater than the preset value, determine a second text according to the position information of the external touch, wherein the first text includes all of the text in the second text, and the number of characters in the second text is equal to the preset value;
the word segmentation subunit is further configured to, when the second determining subunit determines the second text, perform word segmentation on all of the text in the second text to obtain the word segmentation result.
10. The text processing device according to any one of claims 6 to 9, characterized by further comprising: a receiving unit and an operating unit;
the receiving unit is configured to receive a keyword selection instruction triggered by a user, the keyword selection instruction being sent according to the word segmentation result;
the acquiring unit is further configured to obtain, according to the keyword selection instruction, the keyword selected by the user from the word segmentation result;
the display unit is further configured to display the keyword;
the receiving unit is further configured to receive a keyword operation instruction triggered by the user, the keyword operation instruction carrying an operation type, the operation type including searching and sharing;
the operating unit is configured to operate on the keyword according to the operation type.
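Claims 6 to 10 mirror the method claims as cooperating units. The sketch below models that decomposition with one small Kotlin interface per claimed unit, reusing the hypothetical Point and Region types introduced under claim 1; the interface names are illustrative, not an API defined by the patent.

interface AcquiringUnit     { fun acquireTouchPosition(): Point }
interface DeterminingUnit   { fun determineRegion(touch: Point): Region? }
interface RecognitionUnit   { fun recognize(region: Region): String }
interface SegmentationUnit  { fun segment(text: String): List<String> }
interface DisplayUnit       { fun display(result: List<String>) }

class TextProcessingDevice(
    private val acquiring: AcquiringUnit,
    private val determining: DeterminingUnit,
    private val recognition: RecognitionUnit,
    private val segmentation: SegmentationUnit,
    private val display: DisplayUnit
) {
    // One external touch drives the acquiring, determining, recognition, segmentation, and display units in turn.
    fun onExternalTouch() {
        val touch = acquiring.acquireTouchPosition()
        val region = determining.determineRegion(touch) ?: return
        val firstText = recognition.recognize(region)
        display.display(segmentation.segment(firstText))
    }
}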
11. A text processing method, characterized by comprising:
displaying a text region on a touch screen;
in response to an external touch on the text region, performing word segmentation on the text in the text region;
displaying the word segmentation result.
12. The text processing method according to claim 11, characterized in that displaying the word segmentation result specifically comprises:
generating a word segmentation display interface, the word segmentation display interface including at least one sub-view;
displaying one word of the word segmentation result in each sub-view.
13. A text processing device, characterized by comprising: a text region display unit, a word segmentation unit, and a word segmentation result display unit;
the text region display unit is configured to display a text region on a touch screen;
the word segmentation unit is configured to, in response to an external touch on the text region, perform word segmentation on the text in the text region;
the word segmentation result display unit is configured to display the word segmentation result.
14. The text processing device according to claim 13, characterized in that the word segmentation result display unit comprises: a generating subunit and a display subunit;
the generating subunit is configured to generate a word segmentation display interface, the word segmentation display interface including at least one sub-view;
the display subunit is configured to display one word of the word segmentation result in each sub-view.
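A sketch of the simplified variant of claims 11 to 14: the text shown in the touched text region is segmented directly (no separate recognition step) and each word is rendered in its own sub-view. SubView and the render callback are hypothetical names introduced here.

data class SubView(val word: String)

fun onTextRegionTouched(
    textInRegion: String,                    // the text displayed in the touched text region
    segmentWords: (String) -> List<String>,  // assumed word-segmentation backend
    render: (List<SubView>) -> Unit          // assumed rendering of the segmentation display interface
) {
    val subViews = segmentWords(textInRegion).map { SubView(it) }  // one word per sub-view
    render(subViews)
}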
CN201610681142.8A 2016-08-17 2016-08-17 Text processing method and device Active CN106325688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610681142.8A CN106325688B (en) 2016-08-17 2016-08-17 Text processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610681142.8A CN106325688B (en) 2016-08-17 2016-08-17 Text processing method and device

Publications (2)

Publication Number Publication Date
CN106325688A true CN106325688A (en) 2017-01-11
CN106325688B CN106325688B (en) 2020-01-14

Family

ID=57743921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610681142.8A Active CN106325688B (en) 2016-08-17 2016-08-17 Text processing method and device

Country Status (1)

Country Link
CN (1) CN106325688B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609208A (en) * 2012-02-13 2012-07-25 广州市动景计算机科技有限公司 Method and system for word capture on screen of touch screen equipment, and touch screen equipment
CN102929924A (en) * 2012-09-20 2013-02-13 百度在线网络技术(北京)有限公司 Method and device for generating word selecting searching result based on browsing content
CN103472998A (en) * 2013-09-27 2013-12-25 小米科技有限责任公司 Method, device and terminal equipment for character selection
CN103744930A (en) * 2013-12-30 2014-04-23 宇龙计算机通信科技(深圳)有限公司 Method for viewing social records and mobile terminal thereof
WO2015000429A1 (en) * 2013-07-05 2015-01-08 腾讯科技(深圳)有限公司 Intelligent word selection method and device
CN104731797A (en) * 2013-12-19 2015-06-24 北京新媒传信科技有限公司 Keyword extracting method and keyword extracting device
CN106126052A (en) * 2016-06-23 2016-11-16 北京小米移动软件有限公司 Text selection method and device

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106899495A (en) * 2017-03-06 2017-06-27 维沃移动通信有限公司 The meaning of a word method for inquiring and mobile terminal of a kind of communication information
CN106899495B (en) * 2017-03-06 2019-02-15 维沃移动通信有限公司 A kind of meaning of a word method for inquiring and mobile terminal of the communication information
CN109857327A (en) * 2017-03-27 2019-06-07 三角兽(北京)科技有限公司 Information processing unit, information processing method and storage medium
US11086512B2 (en) 2017-03-27 2021-08-10 Tencent Technology (Shenzhen) Company Limited Information processing apparatus of displaying text with semantic segments thereof marked and determining and displaying candidate operations matching user intent corresponding to the text, information processing method thereof, and computer-readable storage medium
CN107765970A (en) * 2017-03-27 2018-03-06 三角兽(北京)科技有限公司 Information processor and information processing method
CN109885251A (en) * 2017-03-27 2019-06-14 三角兽(北京)科技有限公司 Information processing unit, information processing method and storage medium
CN107765970B (en) * 2017-03-27 2019-03-12 三角兽(北京)科技有限公司 Information processing unit, information processing method and storage medium
CN106970899A (en) * 2017-05-09 2017-07-21 北京锤子数码科技有限公司 A kind of text handling method and device
CN107423273A (en) * 2017-05-09 2017-12-01 北京锤子数码科技有限公司 A kind of method for editing text and device
CN107423273B (en) * 2017-05-09 2020-05-05 北京字节跳动网络技术有限公司 Text editing method and device
CN107229403B (en) * 2017-05-27 2020-09-15 北京小米移动软件有限公司 Information content selection method and device
CN107229403A (en) * 2017-05-27 2017-10-03 北京小米移动软件有限公司 A kind of information content system of selection and device
CN109426662A (en) * 2017-08-25 2019-03-05 阿里巴巴集团控股有限公司 Exchange method and equipment
CN109917988A (en) * 2017-12-13 2019-06-21 腾讯科技(深圳)有限公司 Choose content display method, device, terminal and computer readable storage medium
CN109917988B (en) * 2017-12-13 2021-12-21 腾讯科技(深圳)有限公司 Selected content display method, device, terminal and computer readable storage medium
CN108763193A (en) * 2018-04-18 2018-11-06 Oppo广东移动通信有限公司 Literal processing method, device, mobile terminal and storage medium
WO2019201109A1 (en) * 2018-04-18 2019-10-24 Oppo广东移动通信有限公司 Word processing method and apparatus, and mobile terminal and storage medium
CN108958576B (en) * 2018-06-08 2021-02-02 Oppo广东移动通信有限公司 Content identification method and device and mobile terminal
CN108958576A (en) * 2018-06-08 2018-12-07 Oppo广东移动通信有限公司 content identification method, device and mobile terminal
CN109471539A (en) * 2018-10-23 2019-03-15 维沃移动通信有限公司 A kind of input content amending method and mobile terminal
CN109471539B (en) * 2018-10-23 2023-06-06 维沃移动通信有限公司 Input content modification method and mobile terminal
CN110166621B (en) * 2019-04-17 2020-09-15 维沃移动通信有限公司 Word processing method and terminal equipment
CN110166621A (en) * 2019-04-17 2019-08-23 维沃移动通信有限公司 A kind of literal processing method and terminal device
CN110890095A (en) * 2019-12-26 2020-03-17 北京大米未来科技有限公司 Voice detection method, recommendation method, device, storage medium and electronic equipment
CN116302841A (en) * 2023-04-13 2023-06-23 银川兴诚电子科技有限公司 Industrial Internet of things safety monitoring method and system
CN116302841B (en) * 2023-04-13 2023-12-08 北京浩太同益科技发展有限公司 Industrial Internet of things safety monitoring method and system

Also Published As

Publication number Publication date
CN106325688B (en) 2020-01-14

Similar Documents

Publication Publication Date Title
CN106325688A (en) Text processing method and device
CN106484266A (en) A kind of text handling method and device
CN104731881B (en) A kind of chat record method and its mobile terminal based on communications applications
CN103389869B (en) A kind of method, apparatus and equipment for being adjusted to touch input interface
CN110189089A (en) Method, system and mobile device for communication
EP2871563A1 (en) Electronic device, method and storage medium
CN103218160A (en) Man-machine interaction method and terminal
US20140359538A1 (en) Systems and methods for moving display objects based on user gestures
EP2663913A2 (en) User interface interaction behavior based on insertion point
CN106527888A (en) Screen-sliding page searching method and device
CN104636434A (en) Search result processing method and device
EP2891041B1 (en) User interface apparatus in a user terminal and method for supporting the same
CN104571813B (en) A kind of display methods and device of information
CN111240669B (en) Interface generation method and device, electronic equipment and computer storage medium
CN104267879B (en) A kind of method and device of interface alternation
CN105893613B (en) image identification information searching method and device
CN104133815B (en) The method and system of input and search
CN113194024B (en) Information display method and device and electronic equipment
CN105335383A (en) Input information processing method and device
CN103020277B (en) A kind of search terms suggestions method and apparatus
CN104020853A (en) Kinect-based system and method for controlling network browser
CN107132927A (en) Input recognition methods and device and the device for identified input character of character
JP2018503917A (en) Method and apparatus for text search based on keywords
CN104965633B (en) A kind of method and apparatus that service jumps
CN106970899A (en) A kind of text handling method and device

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190118

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Address before: Room 309 and 310, Building 3, 33 D, 99 Kechuang 14th Street, Daxing Economic and Technological Development Zone, Beijing, 100176

Applicant before: SMARTISAN DIGITAL Co.,Ltd.

GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.