CN111077978A - Point reading control method and terminal equipment - Google Patents
- Publication number
- CN111077978A (application CN201910499986.4A)
- Authority
- CN
- China
- Prior art keywords
- user
- page
- area
- content
- reading
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0483—Interaction with page-structured environments, e.g. book metaphor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/04—Electrically-operated educational appliances with audible presentation of the material to be studied
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Security & Cryptography (AREA)
- Human Computer Interaction (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Business, Economics & Management (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The embodiment of the invention discloses a point-reading control method and a terminal device, which are applied to the technical field of terminal devices and can solve the problem of high power consumption of family education devices in the prior art. The method comprises the following steps: detecting a first position of a user's finger on a first page; if the first position is in a first delineation area, reading aloud first content included in the first delineation area; detecting that the user's finger moves from the first position to a second position on the first page; determining that the second position is in a second delineation area, and judging whether the first content has been completely read aloud, the second delineation area and the first delineation area being different delineation areas on the first page; and if so, reading aloud second content included in the second delineation area. The method is applied to point-reading scenarios.
Description
Technical Field
The embodiment of the invention relates to the technical field of terminal equipment, in particular to a point reading control method and terminal equipment.
Background
At present, most family education devices on the market have a point-reading function: when a student encounters an unfamiliar word or passage while studying, the student can have it read aloud by a family education device with the point-reading function. However, when a user points at a paper book with a finger, the finger may move at any time. If, on every finger movement, the family education device takes a photo, uploads it to a server to search for and determine the current point-reading content, and then reads the determined content aloud, the power consumption of the family education device becomes large.
Disclosure of Invention
The embodiment of the invention provides a point-reading control method and a terminal device, which are used for solving the problem of high power consumption of family education devices in the prior art. To solve the above technical problem, the embodiment of the present invention is implemented as follows:
In a first aspect, a point-reading control method is provided, where the method includes:
detecting a first position of a user's finger on a first page;
if the first position is in a first delineation area, reading aloud first content included in the first delineation area;
detecting that the user's finger moves from the first position to a second position on the first page;
determining that the second position is in a second delineation area, and judging whether the first content has been completely read aloud; the second delineation area and the first delineation area are different delineation areas on the first page;
and if so, reading aloud second content included in the second delineation area.
As an alternative implementation, in the first aspect of the embodiment of the present invention,
after detecting that the user's finger moves from the first position to the second position on the first page, the method further comprises:
determining that the second position is still within the first delineation area;
judging whether the first content has been completely read aloud;
and if so, reading the first content aloud again.
As an alternative implementation, in the first aspect of the embodiment of the present invention,
before determining that the second position is in the second delineation area, the method further comprises:
judging whether the second position is in a non-delineation area in the first page;
and if not, determining the delineation area where the second position is located.
As an alternative implementation, in a first aspect of an embodiment of the present invention,
the first page is an electronic page, and after detecting the first position of the user's finger on the first page and before reading aloud the first content included in the first delineation area, the method further includes:
acquiring fingerprint information of the user's finger;
comparing the acquired fingerprint information of the user's finger with pre-stored fingerprint information;
if the fingerprint information of the user's finger matches pre-stored first fingerprint information of the user, acquiring a first voiceprint feature bound with the first fingerprint information, wherein different pieces of pre-stored fingerprint information of the user are bound with different voiceprint features;
the reading aloud of the first content included in the first delineation area includes:
reading aloud, using the first voiceprint feature, the first content included in the first delineation area.
As an alternative implementation, in a first aspect of an embodiment of the present invention,
the first page is a paper page, and after the first position of the user's finger on the first page is detected and before the first content included in the first delineation area is read aloud, the method further includes:
acquiring an image including facial feature information of the user;
acquiring the facial feature information of the user from the image;
comparing the facial feature information of the user with pre-stored facial feature information;
if the facial feature information of the user matches pre-stored first facial feature information, acquiring a second voiceprint feature bound with the first facial feature information;
the reading aloud of the first content included in the first delineation area includes:
reading aloud, using the second voiceprint feature, the first content included in the first delineation area.
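The paper-page branch above selects a reading voice by matching facial features against pre-stored ones. A minimal sketch of that lookup follows; the names and the caller-supplied match predicate are assumptions, with exact matching standing in for a real face-similarity model:

```python
def voiceprint_for_face(face_features, stored_bindings, match):
    """Return the voiceprint feature bound to the first pre-stored facial
    feature entry that the extracted features match, or None if no entry
    matches (in which case the device could fall back to a default voice)."""
    for stored_features, voiceprint in stored_bindings:
        if match(face_features, stored_features):
            return voiceprint
    return None
```

Here `match` is deliberately left to the caller: a real implementation would compute a similarity score between feature vectors rather than test equality.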
In a second aspect, a terminal device is provided, which includes:
the detection module is used for detecting a first position of a user's finger on a first page;
the first reading module is used for reading aloud first content included in a first delineation area if the first position is in the first delineation area;
the detection module is further used for detecting that the user's finger moves from the first position to a second position on the first page;
the determining module is configured to determine that the second position is in a second delineation area;
the judging module is used for judging whether the first content has been completely read aloud; the second delineation area and the first delineation area are different delineation areas on the first page;
and the second reading module is used for reading aloud the second content included in the second delineation area if the first content has been completely read aloud.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the determining module is further configured to determine that the second position is still in the first delineation area after it is detected that the user's finger moves from the first position to the second position on the first page;
the judging module is used for judging whether the first content has been completely read aloud;
and the first reading module is used for reading the first content aloud again if the first content has been completely read aloud.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the judging module is further configured to judge whether the second position is in a non-delineation area in the first page before the determining module determines that the second position is in the second delineation area;
the determining module is further configured to determine the delineation area where the second position is located if the second position is not in the non-delineation area.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the first page is an electronic page, and the terminal device further includes:
a fingerprint acquisition module, configured to acquire fingerprint information of the user's finger after the detection module detects the first position of the user's finger on the first page and before the first reading module reads aloud the first content included in the first delineation area;
the fingerprint comparison module is used for comparing the acquired fingerprint information of the user's finger with pre-stored fingerprint information;
the voiceprint acquisition module is used for acquiring a first voiceprint feature bound with first fingerprint information if the fingerprint information of the user's finger matches the pre-stored first fingerprint information of the user, wherein different pieces of pre-stored fingerprint information of the user are bound with different voiceprint features;
the first reading module is specifically configured to read aloud, using the first voiceprint feature, the first content included in the first delineation area.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the first page is a paper page, and the terminal device further includes:
an image acquisition module, configured to acquire an image including facial feature information of the user;
the facial feature acquisition module is used for acquiring the facial feature information of the user from the image;
the face comparison module is used for comparing the facial feature information of the user with the pre-stored facial feature information;
the voiceprint feature acquisition module is used for acquiring a second voiceprint feature bound with first facial feature information if the facial feature information of the user is matched with the first facial feature information stored in advance;
the first reading module is specifically configured to read, by using the second voiceprint feature, the first content included in the first delineation area.
In a third aspect, a terminal device is provided, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the point-reading control method in the first aspect of the embodiment of the present invention.
In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program that causes a computer to execute the point-reading control method in the first aspect of the embodiment of the present invention. The computer-readable storage medium includes a ROM/RAM, a magnetic disk, an optical disk, or the like.
In a fifth aspect, there is provided a computer program product for causing a computer to perform some or all of the steps of any one of the methods of the first aspect when the computer program product is run on the computer.
A sixth aspect provides an application publishing platform for publishing a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, the terminal device can detect a first position of the user's finger on a first page; if the first position is in a first delineation area, read aloud first content included in the first delineation area; detect that the user's finger moves from the first position to a second position on the first page; determine that the second position is in a second delineation area, and judge whether the first content has been completely read aloud, the second delineation area and the first delineation area being different delineation areas on the first page; and if so, read aloud second content included in the second delineation area. With this scheme, after the user moves a finger, the terminal device can judge whether the moved finger position has left the first delineation area where the finger was originally located (before moving). Only after the moved position falls in a new, second delineation area and the first content included in the first delineation area has been completely read aloud does the terminal device determine that the user intends the second content to be read aloud, and read the second content aloud. Because the finger position is checked locally against the delineation areas, the device avoids photographing, uploading, and re-recognizing the page on every small finger movement, which reduces power consumption.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a first schematic flowchart of a point-reading control method according to an embodiment of the present invention;
Fig. 2 is a second schematic flowchart of a point-reading control method according to an embodiment of the present invention;
Fig. 3 is a third schematic flowchart of a point-reading control method according to an embodiment of the present invention;
Fig. 4 is a fourth schematic flowchart of a point-reading control method according to an embodiment of the present invention;
Fig. 5 is a fifth schematic flowchart of a point-reading control method according to an embodiment of the present invention;
Fig. 6 is a first schematic structural diagram of a terminal device according to an embodiment of the present invention;
Fig. 7 is a second schematic structural diagram of a terminal device according to an embodiment of the present invention;
Fig. 8 is a third schematic structural diagram of a terminal device according to an embodiment of the present invention;
Fig. 9 is a fourth schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first and second locations, etc. are for distinguishing between different locations and are not intended to describe a particular order of locations.
The terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
The embodiment of the invention provides a point-reading control method and a terminal device, which can reduce the number of false responses in the point-reading process, thereby reducing the power consumption of the terminal device.
The terminal device according to the embodiment of the present invention may be an electronic device such as a mobile phone, a point-reading machine, a teaching machine, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA). The wearable device may be a smart watch, a smart bracelet, a watch phone, a smart foot ring, smart earrings, a smart necklace, a smart headset, or the like; the embodiment of the present invention is not limited thereto.
The execution subject of the point-reading control method provided by the embodiment of the present invention may be the terminal device, or a functional module and/or functional entity in the terminal device capable of implementing the point-reading control method, which may be determined according to actual use requirements; the embodiment of the present invention is not limited thereto. The following takes a terminal device as an example to explain the point-reading control method provided by the embodiment of the present invention.
The point-reading control method provided by the embodiment of the invention can be applied to point-reading scenarios.
Example one
As shown in fig. 1, an embodiment of the present invention provides a point-reading control method, which may include the following steps:
101. the terminal device detects a first position of a user's finger on the first page.
In the embodiment of the present invention, the point-reading modes of the terminal device can be divided into two types: one is point reading of an electronic page displayed on the display screen of the terminal device, and the other is point reading of a paper page.
The terminal device collects an image including the first page, recognizes the content of the image, and determines the first position of the user's finger on the first page from the image.
As an optional implementation manner, the terminal device obtains a moving track of a finger of the user on the display screen through the sensor to determine the first position.
102. If the first position is in the first delineation area, the terminal device reads the first content included in the first delineation area.
In the embodiment of the present invention, one page may include one or more delineation areas, and each delineation area includes contents available for point reading.
As an optional implementation manner, in the case that the first page is a paper page, when the terminal device reads the first content aloud, it may further display an electronic page corresponding to the first page and highlight the first content in the electronic page.
Through this optional implementation manner, when the first page is a paper page, the terminal device can display the electronic page corresponding to the first page and highlight the first content in it, so that in the process of listening to the first content being read aloud, the user can clearly and intuitively see the first content and the electronic page where it is located, improving the user experience.
Further, as an optional implementation manner, after the electronic page is displayed, the terminal device may further receive a point-reading input of the user on the electronic page for third content, and in response to the input, highlight the third content and read it aloud. The third content may be content on the electronic page other than the first content.
Through this optional implementation manner, after the user performs one point reading on the paper page, the terminal device can display the corresponding electronic page, and the user can continue point reading on the electronic page next time, thereby switching between paper-page point reading and electronic-page point reading and making the point-reading mode more flexible.
103. The terminal device detects that the user's finger moves from a first position to a second position on the first page.
In this embodiment of the present invention, the second position may be a position obtained based on the first position and a movement trajectory of the user's finger.
The second position may also be obtained in the same manner as the first position, and will not be described herein again.
104. The terminal device determines that the second position is in the second delineation area.
105. The terminal equipment judges whether the first content is completely read.
The second delineation area and the first delineation area are different delineation areas on the first page.
106. And if so, the terminal device reads aloud the second content included in the second delineation area.
107. If not, the terminal equipment does not respond.
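The decision made in steps 104 through 107 can be sketched as follows. This is only an illustration of the described logic, not the patented implementation; the rectangle-based area representation and all names are assumptions:

```python
def point_read_step(position, areas, current_area, reading_finished):
    """Sketch of steps 104-107.

    `areas` maps an area id to a (x0, y0, x1, y1, content) tuple giving the
    delineation area's rectangle and its readable content; `current_area` is
    the id of the first delineation area; `reading_finished` says whether its
    content has been completely read aloud.  Returns the content to read
    aloud next, or None for "no response".
    """
    x, y = position
    hit = None
    for area_id, (x0, y0, x1, y1, _content) in areas.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            hit = area_id
            break
    if hit is None or hit == current_area:
        return None                 # not a new delineation area
    if not reading_finished:
        return None                 # step 107: first content unfinished, no response
    return areas[hit][4]            # step 106: read the new area's content aloud
```

Tracking the finger against locally known rectangles is what lets the device skip the photograph-upload-recognize cycle on every small movement.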
The embodiment of the invention provides a point-reading control method. The terminal device can detect a first position of the user's finger on a first page; if the first position is in a first delineation area, read aloud first content included in the first delineation area; detect that the user's finger moves from the first position to a second position on the first page; determine that the second position is in a second delineation area, and judge whether the first content has been completely read aloud, the second delineation area and the first delineation area being different delineation areas on the first page; and if so, read aloud second content included in the second delineation area. With this scheme, after the user moves a finger, the terminal device can judge whether the moved finger position has left the first delineation area where the finger was originally located (before moving); only after the moved position falls in a new, second delineation area and the first content included in the first delineation area has been completely read aloud does the terminal device determine that the user intends the second content to be read aloud, and read the second content aloud.
Example two
As shown in fig. 2, the point-reading control method provided in the embodiment of the present invention may include the following steps:
201. the terminal device detects a first position of a user's finger on the first page.
202. If the first position is in the first delineation area, the terminal device reads the first content included in the first delineation area.
203. The terminal device detects that the user's finger moves from a first position to a second position on the first page.
For the descriptions 201 to 203, reference may be specifically made to the descriptions of 101 to 103 in the first embodiment, which are not described herein again.
204. The terminal device determines that the second position is still in the first delineation area.
205. The terminal equipment judges whether the first content is completely read.
206. If yes, the terminal equipment reads the first content again.
207. If not, the terminal equipment does not respond.
In this embodiment, when the second position is still located in the first delineation area, it may be considered that the user needs to have the first content included in the first delineation area read aloud repeatedly, and at this time the first content included in the first delineation area may be read aloud again.
As an optional implementation manner, the point-reading control method provided in the embodiment of the present invention may further include: when the number of times the user triggers the terminal device to read the first content aloud exceeds a preset number, the terminal device can bind the first content with the account information under which the user is currently logged in online, and send the first content and that account information to the server (the server, in response, stores the binding relationship between the first content and the account information), so that the server can push the first content to the terminal device after the user next logs in online with that account information.
Through this implementation manner, when the number of times the user triggers the terminal device to read the first content aloud exceeds the preset number, it indicates that the first content may be content the user needs to study with emphasis; at this time, binding the first content to the account information under which the user is currently logged in and sending both to the server allows the first content to be quickly acquired after the user logs in online again with that account information.
As an optional implementation manner, when the number of times the user triggers the terminal device to read the first content aloud exceeds the preset number, the terminal device detects whether it is connected to a wearable device, and if so, sends the audio of the first content to the wearable device so that the wearable device can store the audio.
Through this optional implementation manner, when the number of times the user triggers the terminal device to read the first content aloud exceeds the preset number, it indicates that the first content may be content the user needs to study with emphasis; sending the audio of the first content to the wearable device makes it convenient for the user to listen to that audio on the wearable device at any time, providing a more humanized and flexible service.
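The repeat-count behavior of this embodiment can be sketched as follows; the threshold value, the action names, and the counting scheme are all assumptions for illustration rather than the patent's implementation:

```python
def handle_repeat_read(content_id, read_counts, threshold=3):
    """Count how often the user re-triggers reading of the same content.

    When the count exceeds `threshold` (a hypothetical preset number), the
    content is additionally flagged for binding to the logged-in account on
    the server and for pushing its audio to a connected wearable device.
    Returns the list of actions the terminal device should take.
    """
    read_counts[content_id] = read_counts.get(content_id, 0) + 1
    actions = ["read_aloud_again"]
    if read_counts[content_id] > threshold:
        actions.append("bind_to_account_and_upload")  # server stores the binding
        actions.append("send_audio_to_wearable")      # only if a wearable is connected
    return actions
```

In a real device the actions would of course be dispatched to the network and Bluetooth layers; here they are returned as labels so the decision logic stays visible.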
Example three
As shown in fig. 3, the point-reading control method provided in the embodiment of the present invention may include the following steps:
301. the terminal device detects a first position of a user's finger on the first page.
302. If the first position is in the first delineation area, the terminal device reads the first content included in the first delineation area.
303. The terminal device detects that the user's finger moves from a first position to a second position on the first page.
For the descriptions 301 to 303, reference may be specifically made to the descriptions 101 to 103 in the first embodiment, which are not described herein again.
304. The terminal device judges whether the second position is in a non-delineation area in the first page.
305. If so, no response is made.
306. And if not, determining the delineation area where the second position is located.
After executing 306, execution may continue with 307 through 310, described below.
307. The terminal device determines that the second position is in the second delineation area.
308. The terminal equipment judges whether the first content is completely read.
The second delineation area and the first delineation area are different delineation areas on the first page.
309. And if so, the terminal device reads aloud the second content included in the second delineation area.
310. If not, the terminal equipment does not respond.
In this embodiment, when the second position is located in the second delineation area, it may be considered that the user needs to have the second content included in the second delineation area read aloud after the first content; at this time, the second content may be read aloud once the reading of the first content has finished.
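Steps 304 through 306, which first rule out the non-delineation area before locating the new delineation area, can be sketched as follows; the rectangle representation of the areas is an assumption made for illustration:

```python
def locate_after_move(position, areas):
    """Sketch of steps 304-306.

    `areas` maps delineation-area ids to (x0, y0, x1, y1) rectangles on the
    first page.  A position inside no rectangle lies in the non-delineation
    area, so None is returned and the device makes no response; otherwise the
    id of the delineation area containing the position is returned.
    """
    x, y = position
    for area_id, (x0, y0, x1, y1) in areas.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return area_id   # step 306: delineation area located
    return None              # steps 304-305: non-delineation area, no response
```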
Example Four
As shown in fig. 4, the point-reading control method provided in the embodiment of the present invention may include the following steps:
401. the terminal device detects a first position of a user's finger on the first page.
In this embodiment, the first page may be a first page displayed on a display screen of the terminal device.
402. The terminal device acquires fingerprint information of the user's finger.
Optionally, when the user's finger is at the first position on the first page, the fingerprint information of the finger may be acquired through an under-display fingerprint technology.
403. The terminal device compares the acquired fingerprint information of the user's finger with pre-stored fingerprint information.
404. If the fingerprint information of the user's finger matches pre-stored first fingerprint information of the user, the terminal device acquires a first voiceprint feature bound to the first fingerprint information.
Different pieces of pre-stored fingerprint information of the user are bound to different voiceprint features.
405. If the first position is in the first delineation area, the terminal device reads aloud, using the first voiceprint feature, the first content included in the first delineation area.
406. The terminal device detects that the user's finger moves from a first position to a second position on the first page.
407. The terminal device determines that the second position is in the second delineation area.
408. The terminal device judges whether the reading of the first content has finished.
The second delineation area and the first delineation area are different delineation areas on the first page.
409. If so, the terminal device reads aloud the second content included in the second delineation area.
410. If not, the terminal device makes no response.
The second content may also be read aloud using the first voiceprint feature.
In this embodiment, different voiceprint features are bound to different pieces of pre-stored fingerprint information of the user, so that the user can obtain different voiceprint features simply by performing the point-read input on the first page with different fingers, and the voiceprint feature used to read the first content aloud can therefore be selected more flexibly and quickly.
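The fingerprint-to-voiceprint binding of steps 402 to 405 is essentially a lookup keyed by the matched enrolled fingerprint. A sketch under the assumption that fingerprint matching is abstracted into an identifier comparison (all names are hypothetical):

```python
# Each enrolled fingerprint (e.g. one per finger) is bound to a distinct
# voiceprint feature, so pointing with a different finger selects a
# different reading voice.
FINGERPRINT_BINDINGS = {
    "index_finger": "voiceprint_a",
    "middle_finger": "voiceprint_b",
}


def select_voiceprint(matched_fingerprint, bindings, default_voiceprint="default"):
    """Steps 403-405: if the acquired fingerprint matches an enrolled one,
    use the voiceprint feature bound to it; otherwise fall back to a
    default reading voice."""
    return bindings.get(matched_fingerprint, default_voiceprint)
```

In a real device the keys would be fingerprint templates compared by a matcher, not string identifiers; the table itself is the part the embodiment specifies.
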
Example Five
501. The terminal device detects a first position of a user's finger on the first page.
502. The terminal device acquires an image including facial feature information of a user.
503. The terminal device acquires facial feature information of the user from the image.
504. The terminal device compares the facial feature information of the user with facial feature information stored in advance.
Optionally, the terminal device may compare the facial feature information of the user with facial feature information stored in advance through a face recognition technology.
505. If the facial feature information of the user matches pre-stored first facial feature information, the terminal device acquires a second voiceprint feature bound to the first facial feature information.
506. If the first position is in the first delineation area, the terminal device reads aloud, using the second voiceprint feature, the first content included in the first delineation area.
507. The terminal device detects that the user's finger moves from a first position to a second position on the first page.
508. The terminal device determines that the second position is in the second delineation area.
509. The terminal device judges whether the reading of the first content has finished.
The second delineation area and the first delineation area are different delineation areas on the first page.
510. If so, the terminal device reads aloud the second content included in the second delineation area.
511. If not, the terminal device makes no response.
The second content may also be read aloud using the second voiceprint feature.
In this embodiment, when the facial feature information of the user matches pre-stored facial feature information, the terminal device can use the second voiceprint feature bound to that pre-stored facial feature information, so that the first content is read aloud with a voiceprint feature pre-stored by the user, meeting the user's personalized needs.
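Steps 503 to 505 compare extracted facial features against enrolled entries; in practice such comparisons are similarity checks against a threshold rather than exact matches. A sketch using cosine similarity (the feature vectors, threshold value, and names are illustrative assumptions, not the patent's method):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def match_voiceprint(face_features, enrolled_users, threshold=0.9):
    """Steps 504/505 as a sketch: return the voiceprint feature bound to
    the best-matching enrolled face, or None when no enrolled face is
    similar enough."""
    best = max(enrolled_users,
               key=lambda u: cosine_similarity(face_features, u["face"]))
    if cosine_similarity(face_features, best["face"]) >= threshold:
        return best["voiceprint"]
    return None


enrolled = [
    {"face": [1.0, 0.0, 0.0], "voiceprint": "voiceprint_a"},
    {"face": [0.0, 1.0, 0.0], "voiceprint": "voiceprint_b"},
]
```
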
Example Six
As shown in fig. 6, an embodiment of the present invention provides a terminal device, where the terminal device includes:
a detection module 601, configured to detect a first position of a finger of a user on a first page;
a first reading module 602, configured to read aloud the first content included in a first delineation area if the first position is in the first delineation area;
the detection module 601 is further configured to detect that a finger of the user moves from a first position to a second position on the first page;
a determining module 603, configured to determine that the second position is in a second delineation area;
a judging module 604, configured to judge whether the reading of the first content has finished; the second delineation area and the first delineation area are different delineation areas on the first page;
a second reading module 605, configured to read aloud the second content included in the second delineation area if the reading of the first content has finished.
In the embodiment of the present invention, the terminal device can detect a first position of the user's finger on a first page; if the first position is in a first delineation area, read aloud the first content included in the first delineation area; detect that the user's finger moves from the first position to a second position on the first page; determine that the second position is in a second delineation area, and judge whether the reading of the first content has finished, where the second delineation area and the first delineation area are different delineation areas on the first page;
and if so, read aloud the second content included in the second delineation area. In this scheme, after the user moves a finger, the terminal device can judge whether the moved finger position has left the first delineation area in which the finger originally rested (before moving). Only when the moved position lies in a new, second delineation area and the first content included in the first delineation area has been completely read aloud does the terminal device treat this as the user's instruction to read the second content included in the second delineation area, and read that content aloud.
Optionally, the determining module 603 is further configured to determine that the second position is still in the first delineation area after it is detected that the user's finger moves from the first position to the second position on the first page;
the judging module 604 is further configured to judge whether the reading of the first content has finished;
the first reading module 602 is further configured to read the first content aloud again if so.
Optionally, the judging module 604 is further configured to judge whether the second position is in a non-delineation area of the first page before the determining module 603 determines that the second position is in the second delineation area;
the determining module 603 is further configured to determine, if not, the delineation area where the second position is located.
Optionally, the first page is an electronic page, and with reference to fig. 6, as shown in fig. 7, the terminal device further includes:
a fingerprint obtaining module 606, configured to obtain fingerprint information of the user finger after the detecting module 601 detects the first position of the user finger on the first page and before the first reading module 602 reads the first content included in the first delineation area;
a fingerprint comparison module 607, configured to compare the acquired fingerprint information of the user's finger with pre-stored fingerprint information;
a voiceprint obtaining module 608, configured to obtain a first voiceprint feature bound to first fingerprint information if the fingerprint information of the user's finger matches pre-stored first fingerprint information of the user, where different pieces of pre-stored fingerprint information of the user are bound to different voiceprint features;
the first reading module 602 is specifically configured to read a first content included in the first delineation area by using a first voiceprint feature.
Because different voiceprint features are bound to different pieces of pre-stored fingerprint information of the user, the user can obtain different voiceprint features simply by performing the point-read input on the first page with different fingers, so that the voiceprint feature used to read the first content aloud can be selected more flexibly and quickly.
Optionally, the first page is a paper page, and with reference to fig. 6, as shown in fig. 8, the terminal device further includes:
an image acquisition module 609, configured to acquire an image including facial feature information of a user;
a facial feature acquisition module, configured to acquire the facial feature information of the user from the image;
a face comparison module 610 for comparing the facial feature information of the user with the facial feature information stored in advance;
a voiceprint feature acquisition module 611, configured to acquire a second voiceprint feature bound with first facial feature information if the facial feature information of the user matches with the first facial feature information stored in advance;
the first reading module 602 is specifically configured to read the first content included in the first delineation area by using the second voiceprint feature.
In this embodiment, when the facial feature information of the user matches pre-stored facial feature information, the terminal device can use the second voiceprint feature bound to that pre-stored facial feature information, so that the first content can be read aloud with a voiceprint feature pre-stored by the user, meeting the user's personalized needs.
As an optional implementation manner, in a case that the first page is a paper page, the terminal device may further include the following modules, not shown in the drawing:
the display module is configured to display the electronic book page corresponding to the first page after the first reading module 602 reads the first content, and highlight the first content in the electronic book page.
Through this optional implementation, the point-reading control system can display the electronic book page corresponding to the first paper page and highlight the first content in it, so that during point-reading the user can clearly and intuitively see the first content as well as the electronic book page where it is located, improving the user experience.
Further, as an optional implementation manner, after the display module displays the electronic book page, the terminal device may further include the following modules, not shown in the drawings:
and the receiving module is used for receiving the point reading input of the user on the second content on the electronic book page.
The display module is further configured to highlight the third content in response to the click-to-read input, and report the third content. The third content may be content on the electronic book page other than the first content.
Through this optional implementation, after the user performs a point-read on the paper page, the point-reading control system can display the corresponding electronic page, and the user can continue point-reading on the electronic page the next time, thereby switching between paper-page and electronic-page point-reading and making the reading mode more flexible.
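The paper-to-electronic switch above can be sketched as a small handler: once the first content has been read from the paper page, subsequent point-read inputs are taken on the displayed e-book page, and any other content there is highlighted and read aloud (the function name and page model are hypothetical, not from the patent text):

```python
def handle_ebook_tap(tapped_content, page_contents, first_content):
    """Highlight and read aloud a tapped item on the e-book page, provided
    it is on the page and is not the already-read first content."""
    if tapped_content in page_contents and tapped_content != first_content:
        return {"highlight": tapped_content, "read_aloud": tapped_content}
    return None  # no response for the first content or off-page taps


page = ["first content", "third content", "another paragraph"]
action = handle_ebook_tap("third content", page, first_content="first content")
```
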
As shown in fig. 9, an embodiment of the present invention further provides a terminal device, where the terminal device may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
the processor 702 calls the executable program code stored in the memory 701 to execute the click-to-read control method executed by the terminal device in the above embodiments of the methods.
It should be noted that the terminal device shown in fig. 9 may further include components that are not shown, such as a battery, input keys, a speaker, a microphone, a screen, an RF circuit, a Wi-Fi module, a Bluetooth module, and sensors, which are not described in detail in this embodiment.
Embodiments of the present invention provide a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute some or all of the steps of the method as in the above method embodiments.
Embodiments of the present invention also provide a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of the method as in the above method embodiments.
Embodiments of the present invention further provide an application publishing platform, where the application publishing platform is configured to publish a computer program product, where the computer program product, when running on a computer, causes the computer to perform some or all of the steps of the method in the above method embodiments.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are exemplary embodiments, and that the actions and modules involved are not necessarily required to practice the invention.
The terminal device provided by the embodiment of the present invention can implement each process shown in the above method embodiments, and is not described herein again to avoid repetition.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on such understanding, the part of the technical solution of the present invention that in essence contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute all or part of the steps of the methods of the embodiments of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical storage, magnetic disk, magnetic tape, or any other medium that can be used to carry or store data and that can be read by a computer.
Claims (10)
1. A point reading control method is characterized by comprising the following steps:
detecting a first position of a user finger on a first page;
if the first position is in a first delineation area, reading aloud first content included in the first delineation area;
detecting that a user's finger is moving from a first position to a second position on the first page;
determining that the second position is in a second delineation area, and judging whether the reading of the first content is finished; the second delineating area and the first delineating area are different delineating areas on the first page;
and if so, reading second content included in the second delineation area.
2. The method of claim 1, wherein after detecting a movement of a user's finger from a first position to a second position on the first page, the method further comprises:
determining that the second location is still in the first delineation region;
judging whether the first content is completely read or not;
and if so, reading the first content aloud again.
3. The method of claim 1, wherein prior to said determining that the second location is within the second delineation region, the method further comprises:
judging whether the second position is in a non-delineation area of the first page;
and if not, determining the delineation area where the second position is located.
4. The method according to any one of claims 1 to 3, wherein the first page is an electronic page, and after detecting the first position of the finger of the user on the first page, before reporting the first content included in the first delineation area if the first position is in the first delineation area, the method further comprises:
acquiring fingerprint information of the user finger;
comparing the acquired fingerprint information of the user finger with the prestored fingerprint information;
if the fingerprint information of the user finger matches pre-stored first fingerprint information of the user, acquiring a first voiceprint feature bound to the first fingerprint information, wherein different pieces of pre-stored fingerprint information of the user are bound to different voiceprint features;
the reading of the first content included in the first delineation area includes:
and reading, by using the first voiceprint feature, the first content included in the first delineation area.
5. The method of any of claims 1-3, wherein the first page is a paper page, and after detecting the first position of the user's finger on the first page, and before reading the first content included in the first delineation area if the first position is in the first delineation area, the method further comprises:
acquiring an image including facial feature information of a user;
acquiring facial feature information of the user from the image;
comparing the facial feature information of the user with pre-stored facial feature information;
if the facial feature information of the user is matched with first facial feature information stored in advance, acquiring a second voiceprint feature bound with the first facial feature information;
the reading of the first content included in the first delineation area includes:
and reading, by using the second voiceprint feature, the first content included in the first delineation area.
6. A terminal device, comprising:
the detection module is used for detecting a first position of a finger of a user on a first page;
the first reading module is used for reading first content included in a first delineation area if the first position is in the first delineation area;
the detection module is further used for detecting that the finger of the user moves from a first position to a second position on the first page;
the determining module is configured to determine that the second position is in a second delineation area;
the judging module is used for judging whether the first content is completely read; the second delineating area and the first delineating area are different delineating areas on the first page;
and the second reading module is used for reading the second content included in the second delineation area if the reading of the first content has finished.
7. The terminal device of claim 6,
the determining module is further configured to determine that the second position is still in the first delineation area after detecting that the user's finger moves from the first position to the second position on the first page;
the judging module is further used for judging whether the first content is completely read;
and the first reading module is further used for reading the first content again if so.
8. The terminal device of claim 6,
the judging module is further configured to judge whether the second position is in a non-delineation area of the first page before the determining module determines that the second position is in a second delineation area;
the determining module is further configured to determine the delineation area where the second position is located if the second position is not in a non-delineation area.
9. The terminal device according to any one of claims 6 to 8, wherein the first page is an electronic page, the terminal device further comprising:
a fingerprint obtaining module, configured to obtain fingerprint information of a user finger after the detection module detects a first position of the user finger on a first page and before the first reading module reads the first content included in the first delineation area;
the fingerprint comparison module is used for comparing the acquired fingerprint information of the user finger with the prestored fingerprint information;
the voiceprint acquisition module is used for acquiring a first voiceprint feature bound to first fingerprint information if the fingerprint information of the user finger matches pre-stored first fingerprint information of the user, wherein different pieces of pre-stored fingerprint information of the user are bound to different voiceprint features;
the first reading module is specifically configured to read the first content included in the first delineation area by using the first voiceprint feature.
10. The terminal device according to any one of claims 6 to 8, wherein the first page is a paper page, the terminal device further comprising:
an image acquisition module, configured to acquire an image including facial feature information of a user; a facial feature acquisition module, configured to acquire the facial feature information of the user from the image;
the face comparison module is used for comparing the facial feature information of the user with the pre-stored facial feature information;
the voiceprint feature acquisition module is used for acquiring a second voiceprint feature bound with first facial feature information if the facial feature information of the user is matched with the first facial feature information stored in advance;
the first reading module is specifically configured to read, by using the second voiceprint feature, the first content included in the first delineation area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910499986.4A CN111077978A (en) | 2019-06-09 | 2019-06-09 | Point reading control method and terminal equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910499986.4A CN111077978A (en) | 2019-06-09 | 2019-06-09 | Point reading control method and terminal equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111077978A true CN111077978A (en) | 2020-04-28 |
Family
ID=70310069
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910499986.4A Pending CN111077978A (en) | 2019-06-09 | 2019-06-09 | Point reading control method and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111077978A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1971661A (en) * | 2006-11-28 | 2007-05-30 | 东莞市步步高教育电子产品有限公司 | Control method to play content stored in point reading machine |
CN104157171A (en) * | 2014-08-13 | 2014-11-19 | 三星电子(中国)研发中心 | Point-reading system and method thereof |
CN204117387U (en) * | 2014-07-16 | 2015-01-21 | 北京网梯科技发展有限公司 | A kind of equipment realizing personalized reading |
US20150147739A1 (en) * | 2013-11-27 | 2015-05-28 | Bunny Land Co., Ltd. | Learning system using oid pen and learning method thereof |
CN106980459A (en) * | 2017-03-31 | 2017-07-25 | 广州华多网络科技有限公司 | Reading method and device based on touch-screen equipment |
CN107748615A (en) * | 2017-11-07 | 2018-03-02 | 广东欧珀移动通信有限公司 | Control method, device, storage medium and the electronic equipment of screen |
CN108037882A (en) * | 2017-11-29 | 2018-05-15 | 佛山市因诺威特科技有限公司 | A kind of reading method and system |
CN108843994A (en) * | 2018-07-10 | 2018-11-20 | 广东小天才科技有限公司 | A kind of learning interaction method and intelligent desk lamp based on intelligent desk lamp |
CN109003476A (en) * | 2018-07-18 | 2018-12-14 | 深圳市本牛科技有限责任公司 | A kind of finger point-of-reading system and its operating method and device using the system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107247519B (en) | Input method and device | |
CN106251869B (en) | Voice processing method and device | |
CN107885823B (en) | Audio information playing method and device, storage medium and electronic equipment | |
CN111831806B (en) | Semantic integrity determination method, device, electronic equipment and storage medium | |
CN107230137A (en) | Merchandise news acquisition methods and device | |
CN109871843A (en) | Character identifying method and device, the device for character recognition | |
CN111984180B (en) | Terminal screen reading method, device, equipment and computer readable storage medium | |
CN111077996B (en) | Information recommendation method and learning device based on click-to-read | |
CN111078829B (en) | Click-to-read control method and system | |
CN108881979B (en) | Information processing method and device, mobile terminal and storage medium | |
CN111078102B (en) | Method for determining point reading area through projection and terminal equipment | |
EP3644177A1 (en) | Input method, device, apparatus, and storage medium | |
CN113407828A (en) | Searching method, searching device and searching device | |
JP7165637B2 (en) | Intelligent interaction method, intelligent interaction device, smart device and computer readable storage medium | |
CN111079499B (en) | Writing content identification method and system in learning environment | |
CN111596832B (en) | Page switching method and device | |
CN111079503B (en) | Character recognition method and electronic equipment | |
CN105898053B (en) | A kind of communications records processing equipment, method and mobile terminal | |
CN111077978A (en) | Point reading control method and terminal equipment | |
CN111078084B (en) | Point reading control method and terminal equipment | |
CN113901832A (en) | Man-machine conversation method, device, storage medium and electronic equipment | |
CN111077991A (en) | Point reading control method and terminal equipment | |
CN106649698B (en) | Information processing method and information processing device | |
CN106161208A (en) | A kind of information that carries out in the application specifies device, method and the mobile terminal shared | |
CN111553356A (en) | Character recognition method and device, learning device and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200428 ||