CN111077997A - Point reading control method in point reading mode and electronic equipment

Info

Publication number: CN111077997A (granted as CN111077997B)
Application number: CN201910498915.2A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 彭婕
Original assignee: Shenzhen China Star Optoelectronics Technology Co Ltd
Current assignee: TCL China Star Optoelectronics Technology Co Ltd
Legal status: Granted, active
Prior art keywords: area, user, finger, electronic equipment, learning

Classifications

    • G06F 3/017 — Gesture-based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/04883 — Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G09B 5/04 — Electrically-operated educational appliances with audible presentation of the material to be studied
    • Y02D 30/70 — Reducing energy consumption in wireless communication networks

Abstract

The invention relates to the technical field of electronic equipment, and discloses a point reading control method in a point reading mode and electronic equipment. The point reading control method comprises the following steps: when the electronic equipment is in a point reading mode, shooting a target image while a user's finger is in a learning area; determining, from the target image, the projection area of the user's finger in the learning area; judging whether the projection area is larger than a preset standard area, the standard area being a touch area acquired in advance when a finger touches the learning area; and if so, controlling the electronic equipment to suspend broadcasting the content corresponding to the projection area in the learning area. By implementing the embodiments of the invention, the position of the user's finger can be determined from the size relationship between the projection area and the standard area, so that point reading control of the electronic equipment is performed according to the determined finger position, and the user experience is improved on the basis of improving the accuracy of finger position recognition.

Description

Point reading control method in point reading mode and electronic equipment
Technical Field
The invention relates to the technical field of electronic equipment, in particular to a point reading control method in a point reading mode and electronic equipment.
Background
With the rapid development of electronic equipment, more and more electronic equipment (such as family education machines) can meet students' point reading needs. Currently, electronic equipment generally realizes point reading as follows: the equipment detects the content on a book that the student clicks to read, and then plays the audio matching that content. In practice, however, it is found that because current electronic equipment cannot recognize the spatial state of the finger, when the finger hovers in the air between the book page and the camera the equipment may still judge that the finger has performed a click operation on the book page. This leads to finger position recognition errors and degrades the user experience.
Disclosure of Invention
The embodiment of the invention discloses a point reading control method in a point reading mode and electronic equipment, which can improve the accuracy of finger position recognition and thereby improve the user experience.
A first aspect of the embodiments of the present invention discloses a point reading control method in a point reading mode, where the method includes:
when the electronic equipment is in a point reading mode, shooting a target image when a finger of a user is in a learning area;
determining the projection area of the finger of the user in the learning area from the target image;
judging whether the projection area is larger than a preset standard area, wherein the standard area is a touch area acquired in advance when a finger touches the learning area;
and if so, controlling the electronic equipment to suspend broadcasting the content corresponding to the projection area in the learning area.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after determining that the projected area is larger than the preset standard area, the method further includes:
calculating a current proportion of the projected area relative to the standard area;
judging whether the current proportion is larger than a preset proportion or not;
and if the current proportion is smaller than or equal to the preset proportion, outputting prompt information and executing the controlling of the electronic equipment to suspend broadcasting the content corresponding to the projection area in the learning area, wherein the prompt information is used for prompting the user that the user's finger is not in contact with the learning area.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, when it is determined that the current ratio is greater than the preset ratio, the method further includes:
judging whether a display screen of the electronic equipment detects a pressing operation;
if yes, determining that the user finger executes touch operation on the display screen;
and identifying an operation instruction corresponding to the touch operation, and controlling the electronic equipment to execute a target operation corresponding to the operation instruction.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before determining whether the projected area is larger than a preset standard area, the method further includes:
acquiring a plurality of pre-stored historical images, wherein each historical image is an image of the user finger in the learning area;
acquiring historical projection areas of the fingers of the user in the learning areas in the historical images;
and selecting the smallest historical projection area from a plurality of historical projection areas as a standard area.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after determining that the projection area is smaller than or equal to a preset standard area, the method further includes:
acquiring the text content corresponding to the projection area in the learning area;
determining the text content as click-to-read content, and acquiring standard pronunciation of the click-to-read content;
and broadcasting the standard pronunciation through a loudspeaker of the electronic equipment.
A second aspect of an embodiment of the present invention discloses an electronic device, including:
the shooting unit is used for shooting a target image when the finger of the user is in a learning area when the electronic equipment is in a point reading mode;
a first determination unit configured to determine a projection area of the user's finger in the learning region from the target image;
the first judgment unit is used for judging whether the projection area is larger than a preset standard area, wherein the standard area is a touch area acquired in advance when a finger touches the learning area;
and the control unit is used for controlling the electronic equipment to suspend broadcasting the content corresponding to the projection area in the learning area when the judgment result of the first judgment unit is yes.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
the calculating unit is used for calculating the current proportion of the projection area relative to the standard area when the judgment result of the first judging unit is yes;
the second judging unit is used for judging whether the current proportion is larger than a preset proportion or not;
and the output unit is used for outputting prompt information and triggering the control unit to execute the control of the electronic equipment to suspend broadcasting of the content corresponding to the projection area in the learning area when the judgment result of the second judgment unit is negative, wherein the prompt information is used for prompting a user that the finger of the user is not in contact with the learning area.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
a third judging unit, configured to judge whether a pressing operation is detected on a display screen of the electronic device when a judgment result of the second judging unit is yes;
a second determining unit, configured to determine that the user finger performs a touch operation on the display screen when a determination result of the third determining unit is yes;
and the identification unit is used for identifying the operation instruction corresponding to the touch operation and controlling the electronic equipment to execute the target operation corresponding to the operation instruction.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
a first obtaining unit, configured to obtain a plurality of pre-stored history images before the first determining unit determines whether the projection area is larger than a preset standard area, where each history image is an image of the user finger in the learning area;
a second acquisition unit configured to acquire a history projection area of the user's finger in the learning area in each of the history images;
and the selecting unit is used for selecting the smallest historical projection area from the plurality of historical projection areas as a standard area.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the electronic device further includes:
a third acquiring unit, configured to acquire text content corresponding to the projection area in the learning area if the determination result of the first determining unit is negative;
a third determining unit, configured to determine the text content as a click-to-read content, and acquire a standard pronunciation of the click-to-read content;
and the broadcasting unit is used for broadcasting the standard pronunciation through a loudspeaker of the electronic equipment.
A third aspect of the embodiments of the present invention discloses another electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to perform part or all of the steps of any one of the methods of the first aspect.
A fourth aspect of the present embodiments discloses a computer-readable storage medium storing a program code, where the program code includes instructions for performing part or all of the steps of any one of the methods of the first aspect.
A fifth aspect of embodiments of the present invention discloses a computer program product, which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
A sixth aspect of the present embodiment discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, where the computer program product is configured to, when running on a computer, cause the computer to perform part or all of the steps of any one of the methods in the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
In the embodiment of the invention, when the electronic equipment is in a point reading mode, a target image is shot while a user's finger is in a learning area; the projection area of the user's finger in the learning area is determined from the target image; whether the projection area is larger than a preset standard area is judged, the standard area being a touch area acquired in advance when a finger touches the learning area; and if so, the electronic equipment is controlled to suspend broadcasting the content corresponding to the projection area in the learning area. Therefore, by implementing the embodiment of the invention, the projection area of the finger in the learning area can be determined from the shot image containing the user's finger, the projection area can be compared with the standard area, and the position of the user's finger can be determined from the size relationship between the two, so that point reading control of the electronic equipment is performed according to the determined finger position, and the user experience is improved on the basis of improving the accuracy of finger position recognition.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flow chart of a point reading control method in a point reading mode disclosed in an embodiment of the present invention;
Fig. 2 is a schematic flow chart of another point reading control method in a point reading mode disclosed in an embodiment of the present invention;
Fig. 3 is a schematic flow chart of another point reading control method in a point reading mode disclosed in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may also include other steps or elements not listed, or steps or elements inherent to such process, method, article, or apparatus.
The embodiment of the invention discloses a point reading control method in a point reading mode and electronic equipment, which can realize point reading control of the electronic equipment according to the determined position of a finger, and improve the user experience on the basis of improving the accuracy of finger position recognition. The details are described below.
Embodiment 1
Referring to fig. 1, fig. 1 is a schematic flow chart of a point reading control method in a point reading mode disclosed in an embodiment of the present invention. As shown in fig. 1, the point reading control method in the point reading mode may include the following steps:
101. When the electronic equipment is in a point reading mode, the electronic equipment captures a target image while the user's finger is in a learning area.
In the embodiment of the invention, the electronic equipment may be a family education machine, a learning tablet, or the like. In the point reading state, the electronic equipment detects in real time whether the user performs a click operation on the learning area; if so, the electronic equipment acquires the point reading content corresponding to the click operation from the learning area and outputs the standard audio of that content. The electronic equipment may collect images of the user's finger in the learning area through an image acquisition device such as a camera. The image acquisition device may be arranged at any position on the electronic equipment or outside it, and may be connected to the electronic equipment in a wired or wireless manner so that the collected images can be transmitted to the electronic equipment. Because the target image shot by the image acquisition device while the user's finger is in the learning area is a two-dimensional image, the actual position of the user's finger may be anywhere between the learning area and the image acquisition device. According to the principle that nearer objects appear larger, the closer the user's finger is to the image acquisition device (i.e., the farther it is from the learning area), the larger the projection area of the photographed finger in the learning area; the farther the finger is from the image acquisition device (i.e., the closer it is to the learning area), the smaller the projection area. This relationship is sketched below.
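As an illustrative aid (an assumption introduced here, not part of the original disclosure), the relationship can be written with an idealized pinhole-camera model. Let D be the height of the image acquisition device above the learning area, h the height of the finger above the learning area, and S_touch the projection area measured when the finger touches the learning area (i.e. the standard area):

    S(h) \approx S_{\mathrm{touch}} \left( \frac{D}{D - h} \right)^{2}, \qquad 0 \le h < D, \qquad S(0) = S_{\mathrm{touch}}.

Since S(h) increases monotonically with h, a projection area noticeably larger than the standard area indicates that the finger is hovering above the learning area rather than touching it.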
102. The electronic equipment determines the projection area of the finger of the user in the learning area from the target image.
As an alternative embodiment, the manner in which the electronic device determines the projection area of the user's finger in the learning area from the target image may include the following steps:
the electronic equipment carries out binarization processing on the target image to obtain a target black-and-white image;
the electronic equipment identifies a plurality of connected areas from the target black-and-white image;
the electronic equipment calculates the area of each connected region;
the electronic equipment determines the connected region with the largest area as a target connected region;
the electronic device determines the area of the target connected region as the projected area of the user's finger in the learning region.
In this embodiment, since feature recognition is more accurate in a black-and-white image, the target image may first be converted into a target black-and-white image through binarization, and the target connected region with the largest area is then identified from the target black-and-white image. The learning area may contain many text characters, but the connected region corresponding to the projection of the finger in the learning area is larger than the connected region of any single character; therefore the electronic equipment may determine the connected region with the largest area as the target connected region corresponding to the projection of the user's finger in the learning area, so that the determined projection area of the user's finger in the learning area is more accurate. A minimal sketch of this step is given below.
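The following Python sketch illustrates the binarization and connected-region step using OpenCV. It is an illustration under stated assumptions rather than the patented implementation: the fixed threshold value, the function name and the assumption that the image has already been cropped to the learning area are all introduced here.

    import cv2

    def finger_projection_area(target_image_bgr):
        """Estimate the projection area (in pixels) of the finger in the learning area.

        Sketch of step 102: binarize the image, label connected regions, and take
        the largest region as the finger projection. The fixed threshold (127) is
        an illustrative assumption.
        """
        gray = cv2.cvtColor(target_image_bgr, cv2.COLOR_BGR2GRAY)
        # Binarization: foreground (finger and text strokes) becomes white on black.
        _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)

        # Label connected regions and get per-region statistics (area, bounding box, ...).
        num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
        if num_labels <= 1:          # label 0 is the background
            return 0

        # The finger covers far more pixels than any single text character,
        # so the largest non-background region is taken as the target region.
        areas = stats[1:, cv2.CC_STAT_AREA]
        return int(areas.max())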
103. The electronic equipment judges whether the projection area is larger than a preset standard area; if so, step 104 is executed; if not, the process ends. The standard area is a touch area acquired in advance when the finger touches the learning area.
104. The electronic equipment suspends broadcasting of the content corresponding to the projection area in the learning area.
In the embodiment of the present invention, if the electronic equipment detects that the projection area is larger than the preset standard area, the user's finger may be considered to be suspended between the learning area and the image acquisition device; that is, the user's finger has not clicked the content to be read in the learning area, so the electronic equipment needs to suspend broadcasting the content corresponding to the projection area in the learning area.
As an alternative implementation, after the electronic device performs step 104, the following steps may also be performed:
the electronic equipment shoots a video containing the fingers of the user when the fingers of the user are in a shooting area of the image acquisition equipment;
the electronic equipment analyzes the user fingers in the video to obtain a target motion gesture corresponding to the user fingers;
the electronic equipment detects whether the target motion gesture is matched with a motion gesture corresponding to any preset finger instruction;
if yes, the electronic equipment determines a target finger instruction matched with the target motion gesture;
and the electronic equipment executes the operation corresponding to the target finger instruction.
When this embodiment is implemented, if it is determined that the user's finger is not in contact with the learning area but is within the shooting area of the image acquisition device, the electronic equipment can detect whether the user is inputting an instruction through a finger motion gesture, which increases the response speed to instructions input by the user with a finger; a matching sketch is given below.
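The patent does not specify how a motion gesture is matched against the preset finger instructions, so the sketch below makes illustrative assumptions: the gesture is reduced to a coarse direction sequence of the fingertip between frames, and the instruction table and all helper names are hypothetical.

    from typing import Iterable, List, Optional, Tuple

    # Illustrative preset finger instructions: collapsed direction sequence -> instruction name.
    PRESET_FINGER_INSTRUCTIONS = {
        ("right",): "next_page",
        ("left",): "previous_page",
        ("down", "up"): "repeat_broadcast",
    }

    def to_directions(fingertip_track: Iterable[Tuple[int, int]]) -> List[str]:
        """Convert a fingertip trajectory [(x, y), ...] into a collapsed direction sequence."""
        points = list(fingertip_track)
        directions = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            dx, dy = x1 - x0, y1 - y0
            if abs(dx) < 2 and abs(dy) < 2:
                continue                      # ignore near-stationary frames
            if abs(dx) >= abs(dy):
                directions.append("right" if dx > 0 else "left")
            else:
                directions.append("down" if dy > 0 else "up")
        # Collapse consecutive duplicates so frame-to-frame jitter does not change the pattern.
        return [d for i, d in enumerate(directions) if i == 0 or d != directions[i - 1]]

    def match_finger_instruction(fingertip_track) -> Optional[str]:
        """Return the target finger instruction matched by the motion gesture, or None."""
        gesture = tuple(to_directions(fingertip_track))
        return PRESET_FINGER_INSTRUCTIONS.get(gesture)

A tolerance-based matcher (for example dynamic time warping over the raw trajectory) would be more robust in practice; the exact-match lookup above only mirrors the "match against any preset finger instruction" step.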
By implementing the method described in fig. 1, point reading control of the electronic equipment can be performed according to the determined finger position, and the user experience is improved on the basis of improving the accuracy of finger position recognition. In addition, the determined projection area of the user's finger in the learning area is more accurate, and the response speed to instructions input by the user with a finger is improved.
Embodiment 2
Referring to fig. 2, fig. 2 is a schematic flow chart of another point reading control method in a point reading mode disclosed in an embodiment of the present invention. As shown in fig. 2, the point reading control method in the point reading mode may include the following steps:
201. When the electronic equipment is in a point reading mode, the electronic equipment captures a target image while the user's finger is in a learning area.
202. The electronic equipment determines the projection area of the finger of the user in the learning area from the target image.
203. The electronic equipment judges whether the projection area is larger than a preset standard area; if so, steps 204 to 205 are executed; if not, the process ends. The standard area is a touch area acquired in advance when the finger touches the learning area.
204. The electronic device calculates a current ratio of the projected area to the standard area.
In the embodiment of the invention, the electronic equipment can calculate the quotient of the projection area divided by the standard area, and the quotient is used as the current proportion of the projection area relative to the standard area, so that the calculation of the current proportion is simplified.
205. The electronic equipment judges whether the current proportion is larger than a preset proportion; if so, step 208 is executed; if not, steps 206 to 207 are executed.
In the embodiment of the present invention, the preset proportion may be determined according to the distance between the display screen of the electronic equipment and the learning area. The electronic equipment may capture a current image when the user's finger contacts the display screen, calculate the minimum finger area in that current image, calculate the proportion of that finger area to the standard area, and determine this proportion as the preset proportion. A current proportion smaller than the preset proportion indicates that the user's finger is located between the learning area and the display screen of the electronic equipment. Therefore, when the electronic equipment detects that the current proportion is larger than the preset proportion, it may be considered that the user's finger intends to contact the display screen of the electronic equipment; when the electronic equipment detects that the current proportion is smaller than or equal to the preset proportion, it may be considered that the user's finger is not in contact with the learning area. A sketch of this decision is given below.
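A minimal decision sketch of steps 203 to 210, assuming the projection area and the standard area are pixel counts obtained as in Embodiment 1; the returned action labels are placeholders rather than an API of any real device.

    def point_read_decision(projection_area: float,
                            standard_area: float,
                            preset_proportion: float,
                            screen_pressed: bool) -> str:
        """Decide what the equipment should do from the area ratio (steps 203-210 sketch)."""
        if projection_area <= standard_area:
            # Finger is treated as touching the learning area: normal point reading.
            return "broadcast_point_read_content"

        current_proportion = projection_area / standard_area   # step 204

        if current_proportion <= preset_proportion:            # step 205, "no" branch
            # Finger hovers between the learning area and the display screen.
            return "prompt_not_touching_and_pause"              # steps 206-207

        # Finger is near the display screen: check for a pressing operation (step 208).
        if screen_pressed:
            return "dispatch_touch_operation"                   # steps 209-210
        return "ignore"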
206. The electronic equipment outputs prompt information, wherein the prompt information is used for prompting the user that the user's finger is not in contact with the learning area.
As an alternative implementation, after the electronic device performs step 206, the following steps may also be performed:
the electronic equipment collects the sound in the environment where the electronic equipment is located through the audio collection equipment;
the electronic equipment detects whether the voice of a user of the electronic equipment is contained in the sound;
if the sound contains the speech of the user of the electronic equipment, the electronic equipment performs semantic recognition on the speech and judges whether the speech contains an utterance indicating that the electronic equipment has made a recognition error;
if such an utterance is contained, the electronic equipment detects the current distance between the user's finger and an ultrasonic sensor through the ultrasonic sensor, and acquires a pre-stored standard distance between the ultrasonic sensor and the learning area;
the electronic equipment judges whether the absolute value of the difference between the current distance and the standard distance is larger than a preset minimum error;
if the absolute value is smaller than or equal to the preset minimum error, the electronic equipment determines that the user's finger is in contact with the learning area;
if the absolute value is larger than the preset minimum error, the electronic equipment determines that the user's finger is not in contact with the learning area and outputs secondary prompt information, wherein the secondary prompt information is used for prompting the user of the electronic equipment that the user's finger is not in contact with the learning area.
By implementing this embodiment, when it is first detected that the user's finger is not in contact with the learning area, the electronic equipment can detect whether the user gives voice feedback indicating that the finger is in fact in contact with the learning area. If such feedback is given, the electronic equipment can further measure the distance between the user's finger and the ultrasonic sensor through the ultrasonic sensor, so that the positional relationship between the user's finger and the learning area is checked a second time, which improves the accuracy with which the electronic equipment detects that positional relationship; a sketch of this second check follows.
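The patent does not name concrete speech or sensor interfaces, so in the sketch below the correction phrases, the sensor-reading callable and the other helper names are hypothetical placeholders for this voice-triggered second check.

    from typing import Callable

    def second_check_contact(speech_text: str,
                             read_ultrasonic_distance_mm: Callable[[], float],
                             standard_distance_mm: float,
                             min_error_mm: float) -> bool:
        """Voice-triggered second check: True means the finger is judged to be in contact.

        speech_text is the recognized user utterance captured after the first prompt;
        read_ultrasonic_distance_mm is an assumed callable returning the current
        finger-to-sensor distance; standard_distance_mm is the pre-stored
        sensor-to-learning-area distance; min_error_mm is the preset minimum error.
        """
        # Illustrative correction phrases; a real system would use proper semantic recognition.
        correction_phrases = ("识别错了", "我碰到了", "you got it wrong")
        if not any(phrase in speech_text for phrase in correction_phrases):
            return False  # the user did not claim a recognition error; keep the first result

        current_distance_mm = read_ultrasonic_distance_mm()
        if abs(current_distance_mm - standard_distance_mm) <= min_error_mm:
            return True   # distances agree: the finger is in contact with the learning area
        # Distances disagree: still not in contact; the caller outputs the secondary prompt.
        return False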
207. The electronic equipment suspends broadcasting of the content corresponding to the projection area in the learning area.
In the embodiment of the present invention, by implementing the above steps 204 to 207, the current ratio of the projection area to the standard area may be calculated, the position state of the finger of the user may be determined according to the current ratio, and an operation that the electronic device needs to execute according to the position of the finger of the user may be prompted to the user, so as to improve the interactivity between the user and the electronic device.
208. The electronic equipment judges whether a pressing operation is detected on the display screen of the electronic equipment; if so, steps 209 to 210 are executed; if not, the process ends.
209. The electronic device determines that a user finger performs a touch operation on the display screen.
In the embodiment of the invention, when the electronic equipment detects that the pressing operation exists on the display screen, the finger of the user can be considered to be in contact with the display screen, so that the finger of the user can be determined to execute the touch operation on the display screen.
210. The electronic equipment identifies an operation instruction corresponding to the touch operation and controls the electronic equipment to execute a target operation corresponding to the operation instruction.
In the embodiment of the present invention, by implementing steps 208 to 210, whether the user's finger performs a touch operation on the display screen can be detected when it has been determined that the user's finger is not in contact with the learning area. Since the electronic equipment detects that the user's finger is not in contact with the learning area but appears in the region between the learning area and the image acquisition device, the electronic equipment can expect that the user's finger may perform an operation on the display screen, which improves the sensitivity of detecting operations performed by the user on the display screen; a dispatch sketch is given below.
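A brief sketch of the press-to-instruction dispatch in steps 208 to 210; the event representation, the hit-test callable and the instruction table are illustrative assumptions, not part of the patent.

    from typing import Callable, Dict, Optional, Tuple

    Point = Tuple[int, int]

    # Illustrative mapping from named on-screen controls to target operations.
    INSTRUCTION_TABLE: Dict[str, Callable[[], None]] = {
        "replay_button": lambda: print("replay the last standard pronunciation"),
        "exit_button":   lambda: print("exit the point reading mode"),
    }

    def dispatch_press(press_point: Optional[Point],
                       hit_test: Callable[[Point], Optional[str]]) -> bool:
        """Sketch of steps 208-210: if a pressing operation is detected, run the target operation.

        press_point is None when no press is detected; hit_test maps a screen
        coordinate to the name of the pressed control (both are assumed interfaces).
        """
        if press_point is None:                  # step 208: no pressing operation detected
            return False
        region = hit_test(press_point)           # step 209: a touch operation was performed
        action = INSTRUCTION_TABLE.get(region)   # step 210: identify the operation instruction
        if action is None:
            return False
        action()                                 # execute the corresponding target operation
        return True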
By implementing the method described in fig. 2, point reading control of the electronic equipment can be performed according to the determined finger position, and the user experience is improved on the basis of improving the accuracy of finger position recognition. In addition, the accuracy with which the electronic equipment detects the positional relationship between the user's finger and the learning area is improved, the interactivity between the user and the electronic equipment is improved, and the sensitivity of detecting operations performed by the user on the display screen is increased.
Embodiment 3
Referring to fig. 3, fig. 3 is a schematic flow chart of another point reading control method in a point reading mode disclosed in an embodiment of the present invention. As shown in fig. 3, the point reading control method in the point reading mode may include the following steps:
301. The electronic equipment acquires a plurality of pre-stored historical images, wherein each historical image is an image of the user's finger in the learning area.
302. The electronic device acquires the historical projection area of the user finger in the learning area in each historical image.
303. The electronic equipment selects the smallest historical projection area from the plurality of historical projection areas as a standard area.
In the embodiment of the present invention, by implementing steps 301 to 303, previously stored historical images shot in the past may be acquired; each historical image also contains the user's finger and the learning area. The electronic equipment may calculate the projection area of the user's finger in the learning area in each historical image and determine the smallest projection area as the standard area; a calibration sketch is given below.
Optionally, steps 301 to 303 may be performed at any point before step 306; this does not affect the embodiment of the present invention.
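A minimal calibration sketch of steps 301 to 303, reusing the finger_projection_area helper sketched under Embodiment 1 and assuming the historical images are already loaded as OpenCV BGR arrays.

    from typing import Iterable

    import numpy as np

    def calibrate_standard_area(history_images: Iterable[np.ndarray]) -> int:
        """Steps 301-303: take the smallest historical projection area as the standard area.

        Each historical image is assumed to show the user's finger in the learning area;
        finger_projection_area is the helper sketched under Embodiment 1.
        """
        areas = [finger_projection_area(img) for img in history_images]
        areas = [a for a in areas if a > 0]   # skip images in which no finger region was found
        if not areas:
            raise ValueError("no usable historical image for calibration")
        return min(areas)                     # the smallest area corresponds to a touching finger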
304. When the electronic equipment is in a point reading mode, the electronic equipment captures a target image while the user's finger is in a learning area.
305. The electronic equipment determines the projection area of the finger of the user in the learning area from the target image.
306. The electronic equipment judges whether the projection area is larger than the preset standard area; if so, step 307 is executed; if not, steps 308 to 310 are executed. The standard area is the touch area, acquired in advance, of a finger touching the learning area.
307. The electronic equipment suspends broadcasting of the content corresponding to the projection area in the learning area.
308. The electronic device acquires the text content corresponding to the projection area in the learning area.
In the embodiment of the present invention, the electronic device may perform image analysis on the projection area in the learning area to identify the fingertip of the user finger from the projection area, and the electronic device may obtain the selection area corresponding to the fingertip of the user finger in the learning area, and further read the text content from the selection area in the learning area.
309. The electronic equipment determines the text content as the click-to-read content and acquires the standard pronunciation of the click-to-read content.
310. The electronic equipment broadcasts the standard pronunciation through the speaker of the electronic equipment.
In the embodiment of the present invention, by implementing steps 308 to 310, when it is determined that the user's finger is in contact with the learning area, the text content corresponding to the projection area of the user's finger in the learning area is acquired, the text content is taken as the click-to-read content, and the standard pronunciation of the click-to-read content is output. The electronic equipment thus obtains the pronunciation of the click-to-read content indicated by the user immediately after determining that the user's finger is in contact with the learning area, which increases the response speed of the electronic equipment to the detected operation triggered by the user in the learning area; an end-to-end sketch follows.
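An illustrative end-to-end sketch of steps 308 to 310, using pytesseract for text extraction and pyttsx3 for speech output. These libraries, the language codes and the fingertip-to-selection-box mapping are assumptions introduced here — the patent names no OCR or TTS engine, and the TTS call merely stands in for looking up a stored standard pronunciation.

    import cv2
    import numpy as np
    import pytesseract   # assumed OCR engine; requires a local Tesseract installation
    import pyttsx3       # assumed offline text-to-speech engine

    def read_aloud_selection(learning_area_bgr: np.ndarray,
                             fingertip_xy,
                             box_size=(200, 60)):
        """Sketch of steps 308-310: read out the text just above the fingertip.

        fingertip_xy is the fingertip position in learning-area pixel coordinates;
        box_size is an illustrative selection window placed above the fingertip.
        """
        x, y = fingertip_xy
        w, h = box_size
        x0, y0 = max(0, x - w // 2), max(0, y - h)
        selection = learning_area_bgr[y0:y0 + h, x0:x0 + w]

        # Steps 308-309: OCR the selection to obtain the click-to-read content.
        gray = cv2.cvtColor(selection, cv2.COLOR_BGR2GRAY)
        text = pytesseract.image_to_string(gray, lang="chi_sim+eng").strip()
        if not text:
            return None

        # Step 310: broadcast a pronunciation through the speaker (a stand-in for the
        # stored standard pronunciation described in the patent).
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
        return text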
In the method described in fig. 3, the click-to-read control of the electronic device can be realized according to the determined position of the finger, and the use experience of the user is improved on the basis of improving the accuracy of finger position identification. In addition, the method described in fig. 3 can be implemented to make the determined standard projection area more consistent with the area where the user's finger actually touches the learning area. In addition, the method described in fig. 3 is implemented to improve the response speed of the electronic device to the detected operation triggered by the user in the learning area.
Embodiment 4
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 4, the electronic device may include:
a shooting unit 401, configured to shoot a target image when the finger of the user is in the learning area when the electronic device is in the point-and-read mode.
A first determination unit 402 configured to determine a projection area of the user's finger in the learning area from the target image captured by the capturing unit 401.
As an alternative embodiment, the way that the first determining unit 402 determines the projection area of the user finger in the learning area from the target image may specifically be:
carrying out binarization processing on the target image to obtain a target black-and-white image;
identifying a plurality of connected areas from the target black-and-white image;
calculating the area of each connected region;
determining the connected region with the largest area as a target connected region;
and determining the area of the target connected region as the projection area of the finger of the user in the learning region.
In this embodiment, since feature recognition is more accurate in a black-and-white image, the target image may first be converted into a target black-and-white image through binarization, and the target connected region with the largest area is then identified from the target black-and-white image. The learning area may contain many text characters, but the connected region corresponding to the projection of the finger in the learning area is larger than the connected region of any single character; therefore the electronic device may determine the connected region with the largest area as the target connected region corresponding to the projection of the user's finger in the learning area, so that the determined projection area of the user's finger in the learning area is more accurate.
A first judging unit 403, configured to judge whether the projection area determined by the first determining unit 402 is larger than a preset standard area, where the standard area is a touch area acquired in advance when the finger touches the learning area.
A control unit 404, configured to control the electronic device to suspend broadcasting the content corresponding to the projection area in the learning area when the determination result of the first determining unit 403 is yes.
As an alternative implementation, the control unit 404 may be further configured to:
shooting a video containing the user finger when the user finger is in a shooting area of the image acquisition device;
analyzing the user fingers in the video to obtain a target motion gesture corresponding to the user fingers;
detecting whether the target motion gesture is matched with a motion gesture corresponding to any preset finger instruction;
if yes, determining a target finger instruction matched with the target motion gesture;
and controlling the electronic equipment to execute the operation corresponding to the target finger instruction.
When this embodiment is implemented, if it is determined that the user's finger is not in contact with the learning area but is within the shooting area of the image acquisition device, whether the user is inputting an instruction through a finger motion gesture can be detected, which increases the response speed to instructions input by the user with a finger.
Therefore, by implementing the electronic device described in fig. 4, point reading control of the electronic device can be performed according to the determined finger position, and the user experience is improved on the basis of improving the accuracy of finger position recognition. In addition, the determined projection area of the user's finger in the learning area is more accurate, and the response speed to instructions input by the user with a finger is improved.
Embodiment 5
Referring to fig. 5, fig. 5 is a schematic structural diagram of another electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 5 is optimized from the electronic device shown in fig. 4. The electronic device shown in fig. 5 may further include:
a calculating unit 405, configured to calculate a current ratio of the projected area to the standard area when the determination result of the first determining unit 403 is yes.
A second determining unit 406, configured to determine whether the current ratio calculated by the calculating unit 405 is greater than the preset ratio.
And an output unit 407, configured to output a prompt message when the determination result of the second determining unit 406 is negative, and trigger the control unit 404 to execute controlling of the electronic device to suspend broadcasting of the content corresponding to the projection area in the learning area, where the prompt message is used to prompt the user that the finger of the user is not in contact with the learning area.
In the embodiment of the invention, the current proportion of the projection area relative to the standard area can be calculated, the position state of the finger of the user is determined according to the current proportion, and the operation required to be executed by the electronic equipment according to the position of the finger of the user can be prompted to the user, so that the interactivity between the user and the electronic equipment is improved.
As an optional implementation, the output unit 407 may further be configured to:
collecting sound in the environment where the electronic equipment is located through audio collection equipment;
detecting whether the voice of a user of the electronic equipment is contained in the sound;
if the sound contains the speech of the user of the electronic equipment, performing semantic recognition on the speech and judging whether the speech contains an utterance indicating that the electronic equipment has made a recognition error;
if such an utterance is contained, detecting the current distance between the user's finger and an ultrasonic sensor through the ultrasonic sensor, and acquiring a pre-stored standard distance between the ultrasonic sensor and the learning area;
judging whether the absolute value of the difference between the current distance and the standard distance is larger than a preset minimum error;
if the absolute value is smaller than or equal to the preset minimum error, determining that the user's finger is in contact with the learning area;
if the absolute value is larger than the preset minimum error, determining that the user's finger is not in contact with the learning area and outputting secondary prompt information, wherein the secondary prompt information is used for prompting the user of the electronic equipment that the user's finger is not in contact with the learning area.
By implementing this embodiment, when it is first detected that the user's finger is not in contact with the learning area, the electronic equipment can detect whether the user gives voice feedback indicating that the finger is in fact in contact with the learning area. If such feedback is given, the electronic equipment can further measure the distance between the user's finger and the ultrasonic sensor through the ultrasonic sensor, so that the positional relationship between the user's finger and the learning area is checked a second time, which improves the accuracy with which the electronic equipment detects that positional relationship.
As an alternative implementation, the electronic device shown in fig. 5 may further include:
a third judging unit 408, configured to, when a judgment result of the second judging unit 406 is yes, judge whether the pressing operation is detected on the display screen of the electronic device;
a second determination unit 409, configured to determine that the finger of the user performs a touch operation on the display screen when the determination result of the third determination unit 408 is yes;
the identifying unit 410 is configured to identify an operation instruction corresponding to the touch operation determined by the second determining unit 409, and control the electronic device to execute a target operation corresponding to the operation instruction.
By implementing this embodiment, whether the user's finger performs a touch operation on the display screen can be detected when it has been determined that the user's finger is not in contact with the learning area. Since the electronic device detects that the user's finger is not in contact with the learning area but appears in the region between the learning area and the image acquisition device, the electronic device can expect that the user's finger may perform an operation on its display screen, which improves the sensitivity of detecting operations performed by the user on the display screen.
Therefore, by implementing the electronic device described in fig. 5, point reading control of the electronic device can be performed according to the determined finger position, and the user experience is improved on the basis of improving the accuracy of finger position recognition. In addition, the accuracy with which the electronic device detects the positional relationship between the user's finger and the learning area is improved, the interactivity between the user and the electronic device is improved, and the sensitivity of detecting operations performed by the user on the display screen is increased.
Embodiment 6
Referring to fig. 6, fig. 6 is a schematic structural diagram of another electronic device according to an embodiment of the disclosure. The electronic device shown in fig. 6 is optimized from the electronic device shown in fig. 5. The electronic device shown in fig. 6 may further include:
a first obtaining unit 411, configured to obtain a plurality of history images stored in advance before the first determining unit 403 determines whether the projection area is larger than a preset standard area, where each history image is an image of a user's finger in the learning area.
A second acquiring unit 412, configured to acquire a history projection area of the user's finger in the learning area in each history image acquired by the first acquiring unit 411.
A selecting unit 413, configured to select a smallest historical projection area from the plurality of historical projection areas acquired by the second acquiring unit 412 as a standard area.
In the embodiment of the invention, previously stored historical images shot in the past may be acquired; each historical image also contains the user's finger and the learning area. The electronic equipment may calculate the projection area of the user's finger in the learning area in each historical image and determine the smallest projection area as the standard area.
As an alternative implementation, the electronic device shown in fig. 6 may further include:
a third acquiring unit 414 configured to acquire the text content corresponding to the projection area in the learning region when the determination result of the first determining unit 403 is no;
a third determining unit 415, configured to determine the text content acquired by the third acquiring unit 414 as click-to-read content, and acquire a standard pronunciation of the click-to-read content;
and a broadcasting unit 416, configured to broadcast the standard pronunciation determined by the third determining unit 415 through a speaker of the electronic device.
By implementing this embodiment, when it is determined that the user's finger is in contact with the learning area, the text content corresponding to the projection area of the user's finger in the learning area is acquired, the text content is taken as the click-to-read content, and the standard pronunciation of the click-to-read content is output. The electronic equipment thus obtains the pronunciation of the click-to-read content indicated by the user immediately after determining that the user's finger is in contact with the learning area, which increases the response speed of the electronic equipment to the detected operation triggered by the user in the learning area.
Therefore, by implementing the electronic device described in fig. 6, the click-to-read control of the electronic device can be realized according to the determined position of the finger, and the use experience of the user is improved on the basis of improving the accuracy of the finger position identification. In addition, implementing the electronic device described in fig. 6 may make the determined standard projected area more consistent with the area where the user's finger actually touches the learning area. In addition, the electronic device described in fig. 6 is implemented to improve the response speed of the electronic device to the detected operation triggered by the user in the learning area.
Embodiment 7
Referring to fig. 7, fig. 7 is a schematic structural diagram of another electronic device according to an embodiment of the disclosure. As shown in fig. 7, the electronic device may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
wherein, the processor 702 calls the executable program code stored in the memory 701 to execute part or all of the steps of the method in the above method embodiments.
The embodiment of the invention also discloses a computer-readable storage medium storing program code, wherein the program code comprises instructions for performing part or all of the steps of the methods in the above method embodiments.
Embodiments of the present invention also disclose a computer program product, wherein, when the computer program product is run on a computer, the computer is caused to execute part or all of the steps of the method as in the above method embodiments.
The embodiment of the present invention also discloses an application publishing platform, wherein the application publishing platform is used for publishing a computer program product, and when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method in the above method embodiments.
It should be appreciated that reference throughout this specification to "an embodiment of the present invention" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase "in embodiments of the invention" appearing in various places throughout the specification are not necessarily all referring to the same embodiments. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are exemplary and alternative embodiments, and that the acts and modules illustrated are not required in order to practice the invention.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In addition, the terms "system" and "network" are often used interchangeably herein. It should be understood that the term "and/or" herein is merely one type of association relationship describing an associated object, meaning that three relationships may exist, for example, a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
In the embodiments provided herein, it should be understood that "B corresponding to a" means that B is associated with a from which B can be determined. It should also be understood, however, that determining B from a does not mean determining B from a alone, but may also be determined from a and/or other information.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware, and the program may be stored in a computer-readable storage medium, where the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other memory, magnetic disk, magnetic tape, or any other medium which can be used to carry or store data and which can be read by a computer.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the method of each embodiment of the present invention.
The click-to-read control method in the click-to-read mode and the electronic device disclosed in the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation on the present invention.

Claims (10)

1. A point reading control method in a point reading mode is characterized by comprising the following steps:
when the electronic equipment is in a point reading mode, shooting a target image when a finger of a user is in a learning area;
determining the projection area of the finger of the user in the learning area from the target image;
judging whether the projection area is larger than a preset standard area, wherein the standard area is a touch area acquired in advance when a finger touches the learning area;
and if so, controlling the electronic equipment to suspend broadcasting the content corresponding to the projection area in the learning area.
2. The method according to claim 1, wherein after determining that the projected area is larger than the preset standard area, the method further comprises:
calculating a current proportion of the projected area relative to the standard area;
judging whether the current proportion is larger than a preset proportion or not;
and if the current proportion is less than or equal to the preset proportion, outputting prompt information, and executing the step of controlling the electronic equipment to suspend broadcasting the content corresponding to the projection area in the learning area, wherein the prompt information is used for prompting the user that the finger of the user is not in contact with the learning area.
3. The method of claim 2, wherein when the current ratio is determined to be greater than the preset ratio, the method further comprises:
judging whether a display screen of the electronic equipment detects a pressing operation;
if yes, determining that the user finger executes touch operation on the display screen;
and identifying an operation instruction corresponding to the touch operation, and controlling the electronic equipment to execute a target operation corresponding to the operation instruction.
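As a minimal sketch of how the refinement in claims 2 and 3 could behave, assuming a hypothetical preset proportion of 1.5 and hypothetical display and speaker interfaces (none of these values or interfaces are specified in the disclosure):

# Illustrative sketch of the ratio check in claims 2 and 3; the preset
# proportion and every interface used here are assumptions.

PRESET_RATIO = 1.5  # assumed example value for the preset proportion

def handle_enlarged_projection(projected_area, standard_area, display, speaker):
    current_ratio = projected_area / standard_area

    if current_ratio <= PRESET_RATIO:
        # The projection is only slightly larger than the touch area: the
        # finger hovers just above the page. Prompt the user and pause.
        show_prompt("Your finger is not touching the learning area.")
        speaker.pause()
    elif display.press_detected():
        # The projection is much larger and the display screen registers a
        # press: treat it as a touch operation on the screen and execute
        # the corresponding target operation.
        operation = display.read_touch_operation()
        execute_target_operation(operation)

def show_prompt(message: str):
    """Placeholder: present the prompt information to the user."""
    raise NotImplementedError

def execute_target_operation(operation):
    """Placeholder: run the operation corresponding to the identified instruction."""
    raise NotImplementedError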
4. The method according to any one of claims 1 to 3, wherein before determining whether the projected area is larger than a preset standard area, the method further comprises:
acquiring a plurality of pre-stored historical images, wherein each historical image is an image of the user finger in the learning area;
acquiring a historical projection area of the user finger in the learning area in each of the historical images;
and selecting the smallest historical projection area from a plurality of historical projection areas as a standard area.
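A minimal calibration sketch for claim 4, reusing the assumed estimate_projected_area helper from the earlier sketch; selecting the smallest historical projection area reflects the premise that the smallest projection corresponds to an actual touch:

# Illustrative sketch of the standard-area calibration in claim 4; the
# helper below is an assumption, not a disclosed interface.

def calibrate_standard_area(history_images) -> float:
    """Select the smallest historical projection area as the standard area."""
    areas = [estimate_projected_area(img) for img in history_images]
    return min(areas)

def estimate_projected_area(image) -> float:
    """Placeholder: segment the finger in the image and return its projected area."""
    raise NotImplementedError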
5. The method according to any one of claims 1 to 4, wherein after determining that the projected area is less than or equal to the preset standard area, the method further comprises:
acquiring the text content corresponding to the projection area in the learning area;
determining the text content as click-to-read content, and acquiring standard pronunciation of the click-to-read content;
and broadcasting the standard pronunciation through a loudspeaker of the electronic equipment.
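A possible elaboration of the broadcast branch used in the earlier flow sketch, following the steps of claim 5; the OCR and pronunciation-lookup helpers are assumptions and not disclosed interfaces:

# Illustrative elaboration of the read-aloud path in claim 5.

def broadcast_pointed_content(image, learning_region, speaker):
    # 1. Acquire the text content corresponding to the finger's projection
    #    in the learning area.
    text = recognize_text_under_finger(image, learning_region)

    # 2. Treat that text as the click-to-read content and acquire its
    #    standard pronunciation.
    audio = lookup_standard_pronunciation(text)

    # 3. Broadcast the standard pronunciation through the device speaker.
    speaker.play(audio)

def recognize_text_under_finger(image, learning_region) -> str:
    """Placeholder: recognize the characters under the finger's projection."""
    raise NotImplementedError

def lookup_standard_pronunciation(text):
    """Placeholder: return audio for the standard pronunciation of the text."""
    raise NotImplementedError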
6. An electronic device, comprising:
the shooting unit is used for shooting a target image when the finger of the user is in a learning area when the electronic equipment is in a point reading mode;
a first determination unit configured to determine a projection area of the user's finger in the learning region from the target image;
the first judgment unit is used for judging whether the projection area is larger than a preset standard area, wherein the standard area is a touch area acquired in advance when a finger touches the learning area;
and the control unit is used for controlling the electronic equipment to suspend broadcasting the content corresponding to the projection area in the learning area when the judgment result of the first judgment unit is yes.
7. The electronic device of claim 6, further comprising:
the calculating unit is used for calculating the current proportion of the projection area relative to the standard area when the judgment result of the first judging unit is yes;
the second judging unit is used for judging whether the current proportion is larger than a preset proportion or not;
and the output unit is used for outputting prompt information and triggering the control unit to execute the control of the electronic equipment to suspend broadcasting of the content corresponding to the projection area in the learning area when the judgment result of the second judgment unit is negative, wherein the prompt information is used for prompting a user that the finger of the user is not in contact with the learning area.
8. The electronic device of claim 7, further comprising:
a third judging unit, configured to judge whether a pressing operation is detected on a display screen of the electronic device when a judgment result of the second judging unit is yes;
a second determining unit, configured to determine that the user finger performs a touch operation on the display screen when a determination result of the third determining unit is yes;
and the identification unit is used for identifying the operation instruction corresponding to the touch operation and controlling the electronic equipment to execute the target operation corresponding to the operation instruction.
9. The electronic device according to any one of claims 6 to 8, further comprising:
a first obtaining unit, configured to obtain a plurality of pre-stored history images before the first determining unit determines whether the projection area is larger than a preset standard area, where each history image is an image of the user finger in the learning area;
a second acquisition unit configured to acquire a history projection area of the user's finger in the learning area in each of the history images;
and the selecting unit is used for selecting the smallest historical projection area from the plurality of historical projection areas as a standard area.
10. The electronic device according to any one of claims 6 to 9, further comprising:
a third acquiring unit, configured to acquire the text content corresponding to the projection area in the learning area when the judgment result of the first judging unit is negative;
a third determining unit, configured to determine the text content as a click-to-read content, and acquire a standard pronunciation of the click-to-read content;
and the broadcasting unit is used for broadcasting the standard pronunciation through a loudspeaker of the electronic equipment.
CN201910498915.2A 2019-06-09 2019-06-09 Click-to-read control method in click-to-read mode and electronic equipment Active CN111077997B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910498915.2A CN111077997B (en) 2019-06-09 2019-06-09 Click-to-read control method in click-to-read mode and electronic equipment

Publications (2)

Publication Number Publication Date
CN111077997A true CN111077997A (en) 2020-04-28
CN111077997B CN111077997B (en) 2023-08-25

Family

ID=70310067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910498915.2A Active CN111077997B (en) 2019-06-09 2019-06-09 Click-to-read control method in click-to-read mode and electronic equipment

Country Status (1)

Country Link
CN (1) CN111077997B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104157171A (en) * 2014-08-13 2014-11-19 三星电子(中国)研发中心 Point-reading system and method thereof
CN105652567A (en) * 2016-04-13 2016-06-08 江南大学 Projection method
CN105652568A (en) * 2016-04-13 2016-06-08 江南大学 Projector
CN107248329A (en) * 2017-07-06 2017-10-13 广东小天才科技有限公司 A kind of point-of-reading device and its reading method
CN108268634A (en) * 2018-01-17 2018-07-10 广东小天才科技有限公司 It takes pictures and searching method, smart pen, search terminal and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111552367A (en) * 2020-05-25 2020-08-18 广东小天才科技有限公司 Click operation identification method, electronic equipment and storage medium
CN111552367B (en) * 2020-05-25 2023-09-26 广东小天才科技有限公司 Click operation identification method, electronic equipment and storage medium
CN111711758A (en) * 2020-06-29 2020-09-25 广东小天才科技有限公司 Multi-pointing test question shooting method and device, electronic equipment and storage medium
CN111711758B (en) * 2020-06-29 2021-06-18 广东小天才科技有限公司 Multi-pointing test question shooting method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111077997B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN109635772B (en) Dictation content correcting method and electronic equipment
CN111078083A (en) Method for determining click-to-read content and electronic equipment
CN109255989B (en) Intelligent touch reading method and touch reading equipment
CN111026949A (en) Question searching method and system based on electronic equipment
CN109086590B (en) Interface display method of electronic equipment and electronic equipment
CN111079494B (en) Learning content pushing method and electronic equipment
CN111081080B (en) Voice detection method and learning device
CN109086431B (en) Knowledge point consolidation learning method and electronic equipment
CN111077996A (en) Information recommendation method based on point reading and learning equipment
CN111077997A (en) Point reading control method in point reading mode and electronic equipment
CN111079499B (en) Writing content identification method and system in learning environment
CN111026924A (en) Method for acquiring content to be searched and electronic equipment
CN109634422B (en) Recitation monitoring method and learning equipment based on eye movement recognition
CN111078983B (en) Method for determining page to be identified and learning equipment
CN111722711B (en) Augmented reality scene output method, electronic device and computer readable storage medium
CN111027353A (en) Search content extraction method and electronic equipment
CN108280184B (en) Test question extracting method and system based on intelligent pen and intelligent pen
CN111724638A (en) AR interactive learning method and electronic equipment
CN108877773B (en) Voice recognition method and electronic equipment
CN111090383B (en) Instruction identification method and electronic equipment
CN111077990B (en) Method for determining content to be read on spot and learning equipment
CN111091034A (en) Multi-finger recognition-based question searching method and family education equipment
CN111077989A (en) Screen control method based on electronic equipment and electronic equipment
CN111090382B (en) Character content input method and terminal equipment
CN111553356A (en) Character recognition method and device, learning device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant