CN111638789A - Data output method and terminal equipment - Google Patents

Data output method and terminal equipment

Info

Publication number
CN111638789A
Authority
CN
China
Prior art keywords
user
metadata
content
information
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010482743.2A
Other languages
Chinese (zh)
Inventor
崔颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN202010482743.2A priority Critical patent/CN111638789A/en
Publication of CN111638789A publication Critical patent/CN111638789A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/04Electrically-operated educational appliances with audible presentation of the material to be studied
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses a data output method and terminal equipment, which are applied to the technical field of terminals and can solve the problem that metadata of content cannot be output in a manner targeted to the user. The method comprises the following steps: receiving an input of a user for first content; in response to the input for the first content, outputting first metadata set in correspondence with a user tag of the user; wherein the first metadata is at least one of the following types of metadata for describing the first content: 3D model, animation, video, picture, audio and text; and the user tag comprises at least one of grade information of the user and interest information of the user.

Description

Data output method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of terminals, in particular to a data output method and terminal equipment.
Background
At present, most home education devices on the market (point-reading machines or learning machines) have a point-reading function. When a student encounters content that he or she does not know or does not understand in the learning process, the student can use the home education device with the point-reading function to read that content. However, for the same content, the interpretation output after the point-read is usually pre-stored fixed metadata, and the interpretation is not differentiated between users. As a result, some users with strong understanding and learning abilities can understand the point-read content, while other users, for example students of lower grades, cannot. In the prior art, therefore, the metadata of content cannot be output in a manner targeted to the user.
Disclosure of Invention
The embodiment of the invention provides a data output method and terminal equipment, which are used for solving the problem in the prior art that metadata of content cannot be output in a manner targeted to the user. This is realized as follows:
in a first aspect, a data output method is provided, the method including: receiving input of a user for first content;
in response to the input for the first content, outputting first metadata set in correspondence with a user tag of the user;
wherein the first metadata is at least one of the following types of metadata for describing the first content: 3D model, animation, video, picture, audio and text;
the user tag includes: at least one of grade information of the user and interest information of the user.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, before the receiving the input of the first content by the user, the method further includes:
establishing a user tag of the user according to the basic information of the user;
the basic information of the user comprises at least one of the following: at least one of the information of the data input by the user, the subscription information of the user, the shopping information of the user, the search record information of the user and the comment information of the user is determined.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the outputting the first metadata corresponding to the user tag of the user, the method further includes:
receiving input by the user for the first metadata;
in response to the input for the first metadata, outputting an option of other metadata describing the first content, so as to facilitate the user's selection of other metadata describing the first content.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the outputting the option of the other metadata describing the first content, the method further includes:
responding to the input of the user for a target option, and displaying second metadata corresponding to the target option, wherein the second metadata and the first metadata are different metadata for describing the first content;
and correspondingly storing the second metadata and the user label of the user.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the storing the second metadata in correspondence with the user tag of the user, the method further includes:
displaying the option of the first metadata and the option of the second metadata in response to the user's input for the first content again.
In a second aspect, a terminal device is provided, which includes: the receiving module is used for receiving the input of a user for the first content;
an output module, configured to output, in response to the input for the first content, first metadata set in correspondence with a user tag of the user;
wherein the first metadata is at least one of the following types of metadata for describing the first content: 3D model, animation, video, picture, audio and text;
the user tag includes: at least one of grade information of the user and interest information of the user.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the terminal device further includes:
an establishing module, configured to establish a user tag of the user according to basic information of the user before the receiving module receives the input of the user for the first content;
the basic information of the user comprises at least one of the following: profile information entered by the user, subscription information of the user, shopping information of the user, search record information of the user, and comment information of the user.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the receiving module is further configured to receive an input of the user for the first metadata after the output module outputs the first metadata corresponding to the user tag of the user;
the output module is further configured to output, in response to the input for the first metadata, an option for other metadata describing the first content, so that a user can select the other metadata describing the first content.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the terminal device further includes:
a display module, configured to display, in response to an input of a target option by the user after the output module outputs an option of other metadata describing the first content, second metadata corresponding to the target option, where the second metadata and the first metadata are different metadata for describing the first content;
and the storage module is used for correspondingly storing the second metadata and the user tag of the user.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the display module is further configured to display an option of the first metadata and an option of the second metadata after the saving module saves the second metadata in correspondence with the user tag of the user.
In a third aspect, a terminal device is provided, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the data output method in the first aspect of the embodiment of the present invention.
In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program that causes a computer to execute the data output method in the first aspect of the embodiment of the present invention. The computer readable storage medium includes a ROM/RAM, a magnetic or optical disk, or the like.
In a fifth aspect, there is provided a computer program product for causing a computer to perform some or all of the steps of any one of the methods of the first aspect when the computer program product is run on the computer.
A sixth aspect provides an application publishing platform for publishing a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the present invention, the terminal device may output, in response to the input of the user for the first content, first metadata set in correspondence with at least one of the grade information of the user and the interest information of the user, so that different metadata can be output for users of different grades and different interests.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a first schematic flowchart of a data output method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a data output method according to an embodiment of the present invention;
fig. 3 is a first schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram three of a terminal device according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware architecture of a point reading machine according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first metadata and the second metadata, etc. are used to distinguish different metadata, rather than to describe a specific order of metadata.
The terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described as "exemplary" or "e.g.," an embodiment of the present invention is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
The embodiment of the invention provides a data output method and terminal equipment. The terminal device can respond to an input of a user for first content and output first metadata set in correspondence with at least one of the grade information of the user and the interest information of the user, so that different metadata can be output for users of different grades and different interests.
The terminal device according to the embodiment of the present invention may be an electronic device such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a learning machine, a point-reading machine, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA). The wearable device may be a smart watch, a smart bracelet, a watch phone, or the like; the embodiment of the present invention is not limited thereto.
The execution subject of the data output method provided in the embodiment of the present invention may be the terminal device, or may be a functional module and/or a functional entity in the terminal device capable of implementing the data output method; this may be determined according to actual use requirements, and the embodiment of the present invention is not limited thereto. The following takes a terminal device as an example to describe the data output method provided by the embodiment of the present invention.
The data output method provided by the embodiment of the invention can be applied to a scenario in which a student uses the terminal device to learn and encounters content that he or she does not know or understand.
It should be noted that the terminal device may in particular be a point-reading machine or a learning machine.
Example one
As shown in fig. 1, an embodiment of the present invention provides a data output method, which may include the following steps:
101. and establishing a user label of the user according to the basic information of the user.
Wherein the basic information of the user comprises at least one of the following: profile information entered by the user, subscription information of the user, shopping information of the user, search record information of the user, and comment information of the user.
Optionally, the profile information entered by the user refers to personal information filled in by the user on the point-reading machine, and this information may include the grade information of the user.
In an alternative manner, the basic information of the user may not include grade information but include age information; in that case, corresponding grade information may be matched according to the age information of the user to determine the grade information of the user.
In an optional manner, in a process that the user uses the point reading machine, the point reading machine may log in an account of the user, and the basic information of the user may be information associated with the account that the user logs in.
Further, the interest information of the user may be information obtained by analyzing the basic information of the user. For example, the basic information of the user is: Student A, 6 years old, grade one, has bought toys of a certain cartoon character many times, has subscribed to the promotional account of a certain animation, has searched for videos related to that animation many times, and has expressed in comment information a liking for learning content that is richly illustrated and interesting. It can then be analyzed that the user tag of the user is: Student A, a grade-one user, interested in content that is interesting and rich in visual experience. When the user point-reads certain content, for example the word "panda", the terminal device may select, according to the user tag, metadata of the content that is easy to understand, interesting, and rich in visual experience (e.g., an animation or a 3D model), and accordingly may output an animated image of a panda or a 3D model of a panda.
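The Student A example above can be sketched in code. This is a minimal illustrative sketch only: every function name, field name, age-to-grade rule, and interest heuristic here is an assumption for demonstration, not the patent's actual implementation.

```python
def build_user_tag(basic_info):
    """Derive a user tag (grade + interests) from basic information (step 101)."""
    grade = basic_info.get("grade")
    if grade is None and "age" in basic_info:
        # No grade entered: match a grade from age (assumed rule: age 6 -> grade 1).
        grade = max(1, basic_info["age"] - 5)
    interests = []
    # Assumed heuristic: cartoon purchases or animation subscriptions
    # indicate an interest in visually rich content.
    if basic_info.get("cartoon_purchases", 0) > 0 or \
            "animation" in basic_info.get("subscriptions", []):
        interests.append("visual")
    return {"grade": grade, "interests": interests}

def select_metadata_type(user_tag):
    """Map a user tag to a metadata type describing the content."""
    if user_tag["grade"] <= 2 and "visual" in user_tag["interests"]:
        return "animation"   # low grade + visual interest: rich visual form
    if user_tag["grade"] <= 2:
        return "picture"
    return "text"            # higher grades: plain textual interpretation

# Usage mirroring the Student A example:
tag = build_user_tag({"age": 6, "cartoon_purchases": 3,
                      "subscriptions": ["animation"]})
assert select_metadata_type(tag) == "animation"
```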
102. An input of a user for first content is received.
One possible scenario is: the input for the first content may be an input for a certain word, sentence, phrase, or symbol displayed in an electronic learning page on the learning machine. The learning machine may detect this input through a pressure sensor in the learning machine's screen.
Optionally, the input in the embodiment of the present invention may be a touch input of a finger or a stylus on the display screen.
Another possible scenario is: the input for the first content may be an input for a word, sentence, phrase, or symbol on a paper learning page. The learning machine can acquire images through a camera to determine whether the user has made an input for the first content on the learning page.
Optionally, the input in the embodiment of the present invention may also be an input in which a finger or a stylus indicates some content on a paper page.
103. In response to the input for the first content, first metadata set in correspondence with a user tag of the user is determined.
Wherein the first metadata is at least one of the following types of metadata for describing the first content: 3D model, animation, video, picture, audio and text.
The user tag includes: at least one of grade information of the user and interest information of the user.
It should be noted that the learning machine may store correspondences between different user tags and different metadata, so that after the user tag of the user is determined, the first metadata can be determined through the stored correspondence.
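The stored correspondence described above can be sketched as a simple lookup keyed by content and user tag. The store layout, tag encoding, and entry values are assumptions for illustration, not the patent's data model.

```python
# Assumed store: (content, user_tag) -> first metadata for that tag.
METADATA_STORE = {
    ("panda", ("grade1", "visual")): {"type": "3d_model", "uri": "panda.glb"},
    ("panda", ("grade5", "text")):   {"type": "text",
                                      "body": "The giant panda is a bear native to China."},
}

def first_metadata(content, user_tag, default=None):
    """Look up the metadata set in correspondence with the user tag (step 103)."""
    return METADATA_STORE.get((content, user_tag), default)

# Usage: the same point-read content yields different metadata per tag.
assert first_metadata("panda", ("grade1", "visual"))["type"] == "3d_model"
assert first_metadata("panda", ("grade5", "text"))["type"] == "text"
```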
104. The first metadata is output.
In the embodiment of the present invention, the terminal device may output, in response to the input of the user for the first content, first metadata set in correspondence with at least one of the grade information of the user and the interest information of the user, so that different metadata can be output for users of different grades and different interests.
Generally, when a teacher explains a certain knowledge point to students in a classroom, the content explained is the same for every student. When students of different grades, or students with very different interests, attend class together, some students may therefore be unable to understand the content or may lack interest in learning. The data output method provided by the embodiment of the invention can be applied to such a scene to output different metadata for different students, so as to improve the learning effect.
In a possible implementation manner, when a teacher wants to explain first content to students, the first content may be sent from the teacher's terminal device to each student's terminal device. After each student's terminal device receives the first content, the student may point-read it, and metadata describing the first content, set in correspondence with that student's own user tag, is presented on that student's device: for example, some students' terminal devices play voice information describing the first content, some display text information describing the first content, and some display a 3D model describing the first content. In this way, students can be taught in accordance with their aptitude, teaching becomes more flexible and personalized, and the teaching effect can be improved.
In addition, public information is often broadcast in public places (such as buses, subways, and malls), but the information may be ignored because some of the public who hear it may not fully understand it. The data output method provided by the embodiment of the invention can also assist the public in understanding such information.
Illustratively, on a bus, the bus broadcasts information about a certain road being repaired, together with information suggesting that the vehicle will detour or that passengers transfer because of the road repair.
In a possible implementation manner, a terminal device arranged on the bus detects other terminal devices (which may be the mobile phones of people riding the bus) within the same wireless local area network as itself, and sends the content of the broadcast information to the mobile phones of the passengers, so that the passengers can check the content of the broadcast information on their own mobile phones. Passengers who are unclear about a route can then point-read the related content to acquire metadata corresponding to their own user tags. For example, if a passenger is a grade-one primary school student who does not know the detailed position of intersection A, the passenger can point-read the words "intersection A" on his or her terminal device to trigger the device to display, according to the passenger's grade information, picture information or 3D model information of the intersection that matches the passenger's comprehension ability and interests, so as to assist the passenger in finding intersection A.
Applied to such a scenario in which traffic information is broadcast on a bus, the data output method provided by the embodiment of the invention can help passengers understand the information and can provide auxiliary information to help passengers transfer.
Example two
As shown in fig. 2, an embodiment of the present invention provides a data output method, including the following steps:
201. and establishing a user label of the user according to the basic information of the user.
202. An input of a user for first content is received.
203. In response to an input for the first content, first metadata set corresponding to a user tag of a user is determined.
204. The first metadata is output.
For the descriptions 201 to 204, reference may be made to the descriptions of 101 to 104 in the first embodiment, which are not described herein again.
In the embodiment of the present invention, the terminal device may output, in response to the input of the user for the first content, first metadata set in correspondence with at least one of the grade information of the user and the interest information of the user, so that different metadata can be output for users of different grades and different interests.
205. An input of a user for first metadata is received.
Optionally, in a case that outputting the first metadata means displaying the first metadata, the input in 205 may be a touch input on the first metadata displayed on the screen of the terminal device.
206. In response to the input for the first metadata, an option of other metadata describing the first content is output.
In the embodiment of the present invention, the option of other metadata describing the first content is output so as to facilitate the user's selection of other metadata describing the first content.
In the embodiment of the invention, an input on the output first metadata can trigger the terminal device to display the options of other metadata, which facilitates the user's operation when the user wants to switch to other metadata and improves the human-computer interaction performance of the terminal device.
207. And responding to the input of the user for the target option, and displaying second metadata corresponding to the target option.
Wherein the second metadata and the first metadata are different metadata for describing the first content.
208. And correspondingly storing the second metadata and the user label of the user.
In the embodiment of the invention, the second metadata selected by the user and the user tag of the user can be stored in correspondence, so that the second metadata stored in correspondence with the user tag can be displayed when the first content is subsequently point-read.
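Steps 207-209 can be sketched as follows: the chosen second metadata is stored against the user tag, and on a later input of the same content both options are offered. The function names and storage shape are illustrative assumptions.

```python
from collections import defaultdict

# Assumed store: (content, user_tag) -> metadata types the user has
# additionally chosen (the "second metadata") for that content.
_saved_choices = defaultdict(list)

def save_choice(content, user_tag, metadata_type):
    """Step 208: store the selected second metadata against the user tag."""
    key = (content, user_tag)
    if metadata_type not in _saved_choices[key]:
        _saved_choices[key].append(metadata_type)

def options_on_reinput(content, user_tag, first_type):
    """Step 209: options displayed when the same content is input again."""
    extra = [t for t in _saved_choices[(content, user_tag)] if t != first_type]
    return [first_type] + extra

# Usage: after the user picks "text" as second metadata for "panda",
# a re-input offers both the first and the second metadata options.
save_choice("panda", "grade1-visual", "text")
assert options_on_reinput("panda", "grade1-visual", "animation") == ["animation", "text"]
```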
The terminal device may also automatically switch to other metadata through detection.
In a possible implementation manner, the following steps may be included after step 204:
(1) It is detected that no input of the user has been received for more than a preset time.
(2) An image is acquired through a camera, and a face image of the user is extracted from the image.
(3) The pupil size is acquired from the face image, and it is judged according to the acquired pupil size whether the user's pupil has enlarged by more than a preset proportion.
(4) If the enlargement exceeds the preset proportion, the user is considered to have difficulty in understanding the first metadata, and switching to other metadata of lower difficulty than the first metadata can be triggered.
The pupil usually dilates when the human eye encounters a difficult problem. In this implementation manner, whether the current metadata is difficult for the user to understand can be judged by detecting the pupil size, and when it is judged that the user has difficulty understanding, a simpler and easier-to-understand metadata form is displayed instead, thereby improving the human-computer interaction performance.
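The decision in steps (1)-(4) can be sketched as a single check. The baseline pupil measurement, the 30-second idle limit, and the 20% default proportion are assumed values for illustration; the patent does not fix them, and the actual pupil measurement would come from the face-image analysis described above.

```python
def should_switch_metadata(baseline_pupil_mm, current_pupil_mm,
                           idle_seconds, preset_ratio=0.2, idle_limit=30):
    """Trigger a switch to easier metadata when the user idles and the
    pupil has enlarged by more than the preset proportion."""
    # Step (1): has the user given no input beyond the preset time?
    if idle_seconds <= idle_limit:
        return False
    # Steps (3)-(4): relative enlargement versus the baseline measurement.
    dilation = (current_pupil_mm - baseline_pupil_mm) / baseline_pupil_mm
    return dilation > preset_ratio
```

A 30% enlargement after a long idle period would trigger the switch, while a small enlargement, or any enlargement during active use, would not.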
Further, in (4) above, in the case that the enlargement is determined to exceed the preset proportion, the terminal device may also judge whether the change in the user's pupil size is caused by the user's emotion, since a person's pupils usually dilate slightly when the person is happy. If a large change in the user's facial expression is detected from acquired facial images, the user's emotion is considered to have influenced the pupil size, and the metadata is not switched.
Furthermore, the preset ratio can be adjusted according to the place where the terminal device is located. Specifically, this includes the following steps:
(1) Acquire the location information of the terminal device.
(2) Determine whether the terminal device is in an indoor place or an outdoor place.
(3) If it is an indoor place, determine whether the place is a public place (such as a shopping mall or a school) or a private place (such as a home).
(4) If it is a public place, use a first ratio as the preset ratio; if it is a private place, use a second ratio as the preset ratio.
The first ratio is greater than the second ratio; for example, the first ratio is 30% and the second ratio is 20%.
(5) If it is an outdoor place, use a third ratio as the preset ratio.
The third ratio may be greater than the first ratio.
Since the pupil is more easily stimulated by external factors (for example, light or exciting events) in a noisy environment, determining the location and setting different preset ratio values for different places improves the accuracy of the judgment.
The logic for setting different preset ratio values for different places is that a noisier place is given a larger preset ratio value.
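The location-based selection of steps (1) to (5) can be sketched as below. The 30% and 20% values follow the example in the text; the outdoor (third) ratio of 35% is an assumption, since the text only requires it to exceed the first ratio, and the function name and signature are illustrative.

```python
def preset_ratio_for_location(is_indoor, is_public=None):
    """Pick the pupil-dilation threshold from the location type.

    is_indoor: whether the terminal device is in an indoor place.
    is_public: for indoor places, whether it is a public place
        (mall, school) or a private place (home); ignored outdoors.
    """
    FIRST, SECOND, THIRD = 0.30, 0.20, 0.35  # third > first > second
    if not is_indoor:
        return THIRD     # outdoor place: pupil most exposed to stimuli
    if is_public:
        return FIRST     # indoor public place (shopping mall, school)
    return SECOND        # indoor private place (home)
```

The ordering third > first > second encodes the stated rule that noisier places get larger preset ratio values.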
209. In response to the user's further input for the first content, display an option for the first metadata and an option for the second metadata.
In the embodiment of the invention, when input for the first content is received again, both kinds of metadata corresponding to the user tag can be offered as options, so that the user can select which metadata type to display. Letting the user choose before display improves the intelligence of the device.
In a possible implementation manner, after step 207, the first metadata originally corresponding to the user's user tag may be replaced by the second metadata, so that when the user inputs for the first content again, the second metadata corresponding to the user tag is displayed directly, without displaying the first metadata.
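The replacement behavior described above can be sketched as a small preference store. The class, method names, and keying scheme are illustrative assumptions; the patent only requires that the user-selected second metadata be stored in correspondence with the user tag and take effect on the next point-read.

```python
class MetadataStore:
    """Maps a (user tag, content) pair to the metadata type shown.

    Saving a user-selected second metadata overwrites the first, so
    the next point-read of the same content shows the new type
    directly, without first displaying the old one.
    """

    def __init__(self):
        self._prefs = {}  # (user_tag, content_id) -> metadata type

    def metadata_for(self, user_tag, content_id, default="text"):
        """Metadata type to display on a point-read of this content."""
        return self._prefs.get((user_tag, content_id), default)

    def save_selection(self, user_tag, content_id, metadata_type):
        """Step 207: store the second metadata against the user tag."""
        self._prefs[(user_tag, content_id)] = metadata_type
```

A user tagged "grade 3" who switches a lesson from text to a 3D model would, on the next point-read of that lesson, be shown the 3D model directly.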
EXAMPLE III
As shown in fig. 3, an embodiment of the present invention provides a terminal device, where the terminal device includes:
a receiving module 301, configured to receive an input of a user for first content;
an output module 302 for outputting first metadata set corresponding to a user tag of a user in response to an input for first content;
wherein the first metadata is at least one of the following types of metadata describing the first content: a 3D model, an animation, a video, a picture, audio, and text;
the user tag includes: at least one of grade information of the user and interest information of the user.
Optionally, the terminal device further includes:
the establishing module 303 is configured to establish a user tag of the user according to the basic information of the user before the receiving module 301 receives the input of the first content by the user;
the basic information of the user includes at least one of: the information of the user input data, the user subscription information, the user shopping information, the user search record information and the user comment information.
Optionally, the receiving module 301 is further configured to receive an input of the user for the first metadata after the output module 302 outputs the first metadata corresponding to the user tag of the user;
the output module 302 is further configured to output, in response to the input for the first metadata, an option for other metadata describing the first content to facilitate a user in selecting the other metadata describing the first content.
Optionally, with reference to fig. 3, as shown in fig. 4, the terminal device further includes:
a display module 304, configured to display, in response to an input of a target option by a user after the output module outputs an option of other metadata describing the first content, second metadata corresponding to the target option, the second metadata being different metadata describing the first content from the first metadata;
a saving module 305, configured to save the second metadata in correspondence with the user tag of the user.
Optionally, the display module 304 is further configured to display the option of the first metadata and the option of the second metadata after the saving module 305 correspondingly stores the second metadata and the user tag of the user.
As shown in fig. 5, an embodiment of the present invention further provides a terminal device, where the terminal device may include:
a memory 401 storing executable program code;
a processor 402 coupled with the memory 401;
the processor 402 calls the executable program code stored in the memory 401 to execute the data output method executed by the terminal device in each of the above-mentioned method embodiments.
It should be noted that the terminal device in the embodiment of the present invention may be a point reader, and a schematic structural diagram of the point reader may be as shown in fig. 6, which may include: Radio Frequency (RF) circuitry 1110, a memory 1120, an input unit 1130, a display unit 1140, sensors 1150, audio circuitry 1160, a wireless fidelity (WiFi) module 1170, a processor 1180, and a power supply 1190. The RF circuit 1110 includes a receiver 1111 and a transmitter 1112. Those skilled in the art will appreciate that the point reader configuration shown in fig. 6 does not constitute a limitation of the point reader, which may include more or fewer components than shown, combine some components, or arrange the components differently.
The RF circuit 1110 may be used for receiving and transmitting signals during message transmission or a call; in particular, it receives downlink messages from a base station and passes them to the processor 1180 for processing, and transmits uplink data to the base station. In general, the RF circuitry 1110 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 1110 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Message Service (SMS), and so on.
The memory 1120 may be used to store software programs and modules, and the processor 1180 executes the various functional applications and data processing of the point reading machine by running the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, application programs required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the point reader (such as audio data or a phonebook). Further, the memory 1120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 1130 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the point reading machine. Specifically, the input unit 1130 may include a touch panel 1131 and other input devices 1132. The touch panel 1131, also referred to as a touch screen, can collect touch operations of a user on or near it (for example, operations performed on or near the touch panel 1131 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection devices according to a preset program. Optionally, the touch panel 1131 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 1180, and can receive and execute commands sent by the processor 1180. In addition, the touch panel 1131 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1131, the input unit 1130 may include other input devices 1132, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 1140 may be used to display information input by the user or information provided to the user and various menus of the point-and-read machine. The display unit 1140 may include a display panel 1141, and optionally, the display panel 1141 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 1131 can cover the display panel 1141, and when the touch panel 1131 detects a touch operation on or near the touch panel, the touch panel is transmitted to the processor 1180 to determine the type of the touch event, and then the processor 1180 provides a corresponding visual output on the display panel 1141 according to the type of the touch event. Although in fig. 6, touch panel 1131 and display panel 1141 are two independent components to implement the input and output functions of the touch reader, in some embodiments, touch panel 1131 and display panel 1141 may be integrated to implement the input and output functions of the touch reader.
The point-and-read machine may also include at least one sensor 1150, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 1141 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 1141 and/or the backlight when the pointing device moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the gesture of the point reader (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for the other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor and the like which can be configured on the point reading machine, the description is omitted here.
The audio circuit 1160, speaker 1161, and microphone 1162 may provide an audio interface between the user and the point reading machine. The audio circuit 1160 may transmit the electrical signal converted from received audio data to the speaker 1161, which converts it into a sound signal for output; conversely, the microphone 1162 converts collected sound signals into electrical signals, which are received by the audio circuit 1160 and converted into audio data. The audio data is then processed by the processor 1180 and sent, for example, to another point reading machine via the RF circuit 1110, or output to the memory 1120 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 1170, the point reading machine can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although fig. 6 shows the WiFi module 1170, it is understood that it is not an essential component of the point reading machine and can be omitted as needed without changing the essence of the invention.
The processor 1180 is a control center of the point reading machine, and is connected to various parts of the whole point reading machine by using various interfaces and lines, and executes various functions of the point reading machine and processes data by running or executing software programs and/or modules stored in the memory 1120 and calling data stored in the memory 1120, thereby performing overall monitoring of the point reading machine. Optionally, processor 1180 may include one or more processing units; preferably, the processor 1180 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated within processor 1180.
The reader also includes a power supply 1190 (e.g., a battery) for providing power to various components, and preferably, the power supply may be logically connected to the processor 1180 via a power management system, so that the power management system may manage charging, discharging, and power consumption management functions. Although not shown, the point reading machine may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
The point reading machine can execute all steps and processes in the method embodiment and achieve the same technical effect, and details are not described here.
Embodiments of the present invention provide a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute some or all of the steps of the method as in the above method embodiments.
Embodiments of the present invention also provide a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of the method as in the above method embodiments.
Embodiments of the present invention further provide an application publishing platform, where the application publishing platform is configured to publish a computer program product, where the computer program product, when running on a computer, causes the computer to perform some or all of the steps of the method in the above method embodiments.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are exemplary embodiments, and that the actions and modules involved are not necessarily required to practice the invention.
The terminal device provided by the embodiment of the present invention can implement each process shown in the above method embodiments, and is not described herein again to avoid repetition.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on such understanding, the part of the technical solution of the present invention that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute all or part of the steps of the methods of the embodiments of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable storage medium, which includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk memory, magnetic disk memory, magnetic tape memory, or any other computer-readable medium that can be used to carry or store data.

Claims (11)

1. A data output method is applied to terminal equipment, and is characterized by comprising the following steps:
receiving input of a user for first content;
outputting first metadata set corresponding to a user tag of the user in response to an input for the first content;
wherein the first metadata is at least one of the following types of metadata describing the first content: a 3D model, an animation, a video, a picture, audio, and text;
the user tag includes: at least one of grade information of the user and interest information of the user.
2. The method of claim 1, wherein prior to receiving the user input for the first content, the method further comprises:
establishing a user tag of the user according to the basic information of the user;
the basic information of the user comprises at least one of the following: information entered by the user, subscription information of the user, shopping information of the user, search record information of the user, and comment information of the user.
3. The method of claim 1 or 2, wherein after outputting the first metadata corresponding to the user tag of the user, the method further comprises:
receiving input by the user for the first metadata;
in response to the input for the first metadata, outputting an option to describe other metadata of the first content to facilitate user selection of the other metadata to describe the first content.
4. The method of claim 3, wherein after outputting the option to describe other metadata of the first content, the method further comprises:
responding to the input of the user for a target option, and displaying second metadata corresponding to the target option, wherein the second metadata and the first metadata are different metadata for describing the first content;
and correspondingly storing the second metadata and the user label of the user.
5. The method of claim 4, wherein after saving the second metadata in correspondence with the user tag of the user, the method further comprises:
displaying the option of the first metadata and the option of the second metadata in response to the user's input for the first content again.
6. A terminal device, comprising:
the receiving module is used for receiving the input of a user for the first content;
an output module for outputting first metadata set corresponding to a user tag of the user in response to an input for the first content;
wherein the first metadata is at least one of the following types of metadata describing the first content: a 3D model, an animation, a video, a picture, audio, and text;
the user tag includes: at least one of grade information of the user and interest information of the user.
7. The terminal device according to claim 6, wherein the terminal device further comprises:
the establishing module is used for establishing a user tag of a user according to the basic information of the user before the receiving module receives the input of the first content by the user;
the basic information of the user comprises at least one of the following: information entered by the user, subscription information of the user, shopping information of the user, search record information of the user, and comment information of the user.
8. The terminal device according to claim 6 or 7,
the receiving module is further configured to receive an input of the user for the first metadata after the output module outputs the first metadata corresponding to the user tag of the user;
the output module is further configured to output, in response to the input for the first metadata, an option for other metadata describing the first content, so that a user can select the other metadata describing the first content.
9. The terminal device according to claim 7, wherein the terminal device further comprises:
a display module, configured to display, in response to an input of a target option by the user after the output module outputs an option of other metadata describing the first content, second metadata corresponding to the target option, where the second metadata and the first metadata are different metadata for describing the first content;
and the storage module is used for correspondingly storing the second metadata and the user tag of the user.
10. The terminal device of claim 9,
the display module is further configured to display the option of the first metadata and the option of the second metadata after the storage module correspondingly stores the second metadata and the user tag of the user.
11. A computer-readable storage medium, comprising: computer program, which, when run on a computer, causes the computer to carry out the method according to any one of claims 1 to 5.
CN202010482743.2A 2020-05-29 2020-05-29 Data output method and terminal equipment Pending CN111638789A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010482743.2A CN111638789A (en) 2020-05-29 2020-05-29 Data output method and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010482743.2A CN111638789A (en) 2020-05-29 2020-05-29 Data output method and terminal equipment

Publications (1)

Publication Number Publication Date
CN111638789A true CN111638789A (en) 2020-09-08

Family

ID=72329719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010482743.2A Pending CN111638789A (en) 2020-05-29 2020-05-29 Data output method and terminal equipment

Country Status (1)

Country Link
CN (1) CN111638789A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104586410A (en) * 2014-12-02 2015-05-06 惠州Tcl移动通信有限公司 Mobile terminal, wearable device and system and method for judging user state
CN106027485A (en) * 2016-04-28 2016-10-12 乐视控股(北京)有限公司 Rich media display method and system based on voice interaction
US20180365491A1 (en) * 2017-06-19 2018-12-20 Paypal, Inc. Content modification based on eye characteristics
CN109165336A (en) * 2018-08-23 2019-01-08 广东小天才科技有限公司 Information output control method and family education equipment
CN109255366A (en) * 2018-08-01 2019-01-22 北京科技大学 A kind of affective state regulating system for on-line study
CN110321474A (en) * 2019-05-21 2019-10-11 北京奇艺世纪科技有限公司 Recommended method, device, terminal device and storage medium based on search term



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination