CN111459587A - Information display method, device, equipment and storage medium - Google Patents

Information display method, device, equipment and storage medium

Info

Publication number
CN111459587A
Authority
CN
China
Prior art keywords
target
user
reading difficulty
age group
target user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010232171.2A
Other languages
Chinese (zh)
Inventor
任杰
王琳
郭凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuxun Technology Co Ltd
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202010232171.2A
Publication of CN111459587A
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application provides an information display method, device, equipment, and storage medium, relating to the field of computer technology. The method comprises the following steps: acquiring a face image of a target user; determining, from the face image, the target age group to which the target user belongs and a target reading difficulty factor, where the target reading difficulty factor indicates the reading difficulty corresponding to the target user; determining the font size matched to the target user according to the target age group and the target reading difficulty factor; and displaying information in the font size matched to the target user. The method and device are simple and convenient to operate, automatically determine the font size matched to the user, and alleviate the reading obstacles elderly users encounter when using Internet products, so that an Internet product can present different styles to user groups of different ages, with high diversity.

Description

Information display method, device, equipment and storage medium
Technical Field
The embodiments of the present application relate to the field of computer technology, and in particular to an information display method, device, equipment, and storage medium.
Background
With the development of the times, users of different age groups use terminals to meet their needs.
In the related art, users of different age groups have different font-size requirements: the older the user, the larger the required font size. When the currently displayed font size does not meet a user's needs, the user must adjust it. If the application program does not support font-size adjustment, the user has to manually open the system settings and adjust the font size under the display settings of the system.
Disclosure of Invention
The embodiment of the application provides an information display method, an information display device, information display equipment and a storage medium. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides an information display method, where the method includes:
acquiring a face image of a target user;
determining a target age group to which the target user belongs and a target reading difficulty factor corresponding to the target user according to the face image, wherein the target reading difficulty factor is used for indicating the reading difficulty corresponding to the target user;
determining the font size matched with the target user according to the target age group and the target reading difficulty factor;
and displaying the information of the font size matched with the target user.
In another aspect, an embodiment of the present application provides an information display apparatus, where the apparatus includes:
the image acquisition module is used for acquiring a face image of a target user;
the age group determining module is used for determining a target age group to which the target user belongs and a target reading difficulty factor corresponding to the target user according to the face image, wherein the target reading difficulty factor is used for indicating the reading difficulty corresponding to the target user;
the word size determining module is used for determining the word size matched with the target user according to the target age group and the target reading difficulty factor;
and the information display module is used for displaying the information of the character size matched with the target user.
In another aspect, an embodiment of the present application provides a computer device, where the computer device includes a processor and a memory, where the memory stores a computer program, and the computer program is loaded and executed by the processor to implement the information display method described above.
In still another aspect, an embodiment of the present application provides a computer-readable storage medium, in which a computer program is stored, and the computer program is loaded and executed by a processor to implement the above-mentioned information display method.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
By acquiring the user's face image, the age group to which the user belongs and the reading difficulty factor corresponding to the user are determined, so that the font size adapted to the user is determined and information in that font size is displayed. Compared with the related art, in which the user must manually find the font-setting entry to configure the font (which is inconvenient for older users), the embodiments of the present application are simple and convenient to operate, automatically determine the font size adapted to the user, and alleviate the reading obstacles elderly users encounter when using Internet products, so that Internet products can present different styles to user groups of different ages, with high diversity.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by one embodiment of the present application;
FIG. 2 is a flow chart of an information display method provided by an embodiment of the present application;
FIG. 3 is a flow chart of an information display method provided by another embodiment of the present application;
FIG. 4 is a schematic diagram of a feature processing model provided by one embodiment of the present application;
FIG. 5 is a flow chart of an information display method provided by another embodiment of the present application;
FIG. 6 is a schematic diagram of an information display method provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an information display method provided by another embodiment of the present application;
FIG. 8 is a block diagram of an information display device provided in one embodiment of the present application;
fig. 9 is a block diagram of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of methods consistent with aspects of the present application, as detailed in the appended claims.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present application. The implementation environment may include: a terminal 10 and a server 20.
Illustratively, the terminal 10 may be an electronic device such as a mobile phone, a tablet Computer, a PC (Personal Computer), a wearable device, or the like. At least one application program may be installed on the terminal 10, for example, the application program may be a social application program, a video playing application program, a shopping application program, or a news reading application program, and the embodiment of the present application is not limited to the type and the number of the application programs installed on the terminal 10. Optionally, a camera is disposed on the terminal 10, and the camera is used for acquiring a face image of the user.
The server 20 may be one server, a server cluster composed of a plurality of servers, or a cloud server.
The terminal 10 and the server 20 may communicate with each other through a network, which may be a wireless network or a wired network, and the embodiment of the present application is not limited thereto.
The technical solution provided in the embodiment of the present application may be executed by the terminal 10, or may be executed by the server 20, or may be executed by the terminal 10 and the server 20 in an interactive cooperation manner, which is not limited in the embodiment of the present application.
For convenience of description, the following description will be given taking an execution subject of each step as an example of a computer device, which is an electronic device having computing and processing capabilities.
Referring to fig. 2, a flowchart of an information display method according to an embodiment of the present application is shown. The method may include the steps of:
step 201, acquiring a face image of a target user.
The face image can be a face video or a face picture, and the type of the face image is not limited in the embodiment of the application.
Optionally, the terminal acquires a face image of the user through the camera, for example, the terminal may invoke the camera to acquire a face video of the user in a video mode, or the terminal may invoke the camera to acquire a face picture of the user in a continuous shooting mode or a common shooting mode.
Step 202, determining a target age group to which the target user belongs and a target reading difficulty factor corresponding to the target user according to the face image.
The target age group refers to the age interval in which the target user's age falls. For example, the age groups may include 0-12 years, 12-35 years, 35-50 years, 50-70 years, 70-100 years, and so on. The target reading difficulty factor indicates the reading difficulty corresponding to the target user. Illustratively, the reading difficulty factor ranges from 0 to 1; the larger the value, the higher the reading difficulty corresponding to the user. Optionally, the reading difficulty factor is proportional to age: the greater the age, the greater the reading difficulty factor, and the smaller the age, the smaller the reading difficulty factor.
And step 203, determining the font size matched with the target user according to the target age group and the target reading difficulty factor.
Font size indicates the size of the displayed text; for example, it may include Chinese typographic sizes such as size four and size three. Different users may be matched to different font sizes. For example, the font size for a 20-year-old user may be 12pt, while the font size for a 60-year-old user may be 18pt.
And step 204, displaying the information of the font size matched with the target user.
After the font size adapted to the user is determined, the terminal may display information of the font size, where the information may be a text or a picture or contents in other forms, and this is not limited in this embodiment of the application. For example, if the font size adapted to the target user is 14pt, the terminal may display 14pt sized text.
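The four-step flow of steps 201 to 204 can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: all function names, the stubbed face analysis, the per-age-group base sizes, and the scaling rule are assumptions.

```python
# Minimal sketch of the four-step flow (steps 201-204). All function
# names, the base sizes per age group, and the scaling rule below are
# illustrative assumptions, not the patent's actual method.

def analyze_face(face_image):
    # Step 202: in the patent this is a trained feature-processing model;
    # here a stub returns a fixed (age_group, reading_difficulty_factor).
    return (50, 70), 0.7

def match_font_size(age_group, difficulty):
    # Step 203: pick a base size for the age group, scale by difficulty.
    base_by_group = {(0, 12): 14, (12, 35): 12, (35, 50): 14,
                     (50, 70): 16, (70, 100): 18}
    base = base_by_group[age_group]
    return round(base * (1 + difficulty))  # assumed scaling rule

def display_adapted_information(face_image, text):
    age_group, difficulty = analyze_face(face_image)    # step 202
    font_size = match_font_size(age_group, difficulty)  # step 203
    return {"text": text, "font_size_pt": font_size}    # step 204 (render)

info = display_adapted_information(b"<face image bytes>", "Today's menu")
```

With the stubbed analysis result (age group 50-70, difficulty 0.7), the sketch yields a 27pt rendering request for the text.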
In summary, in the technical solution provided by this embodiment, the user's face image is acquired, the age group to which the user belongs and the user's reading difficulty factor are determined, the font size adapted to the user is determined accordingly, and information in that font size is displayed. Compared with the related art, in which the user must manually find the font-setting entry to configure the font (which is inconvenient for older users), this embodiment is simple and convenient to operate, automatically determines the font size adapted to the user, and alleviates the reading obstacles elderly users encounter when using Internet products, so that Internet products can present different styles to user groups of different ages, with high diversity.
In addition, when the terminal temporarily changes to a different operating user, it can determine a different font size for that user and display information accordingly, conveniently achieving a personalized display for every user.
In an exemplary embodiment, the font size that is appropriate for the target user may be determined by:
First, the target font-size interval corresponding to the target age group is determined from a preset correspondence.
The preset correspondence includes correspondences between at least one age group and its font-size interval, and it may be presented as a table or as a function; this is not limited in the embodiments of the present application.
The target font-size interval is the interval between the minimum font size and the maximum font size corresponding to the target age group; different age groups correspond to different intervals. Optionally, the computer device obtains the font-size intervals corresponding to different age groups through deep learning on user data from terminals.
Alternatively, the computer device determines the target font-size interval corresponding to the target age group through a fontRange function, whose input is an age group and whose output is a font-size interval. Illustratively, the fontRange function may be as follows:

fontRange(ageGroup) =
    [minFont1, maxFont1], if ageGroup is 0-12
    [minFont2, maxFont2], if ageGroup is 12-35
    [minFont3, maxFont3], if ageGroup is 35-50
    [minFont4, maxFont4], if ageGroup is 50-70
    [minFont5, maxFont5], if ageGroup is 70-100

where fontRange(ageGroup) denotes the fontRange function, ageGroup denotes the age group, minFont1 and maxFont1 denote the minimum and maximum font sizes for ages 0-12, minFont2 and maxFont2 those for ages 12-35, minFont3 and maxFont3 those for ages 35-50, minFont4 and maxFont4 those for ages 50-70, and minFont5 and maxFont5 those for ages 70-100.
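The fontRange mapping described above can be rendered as a small lookup. The specific interval boundaries (in pt) below are placeholder assumptions; only the age brackets come from the text.

```python
# A possible rendering of the fontRange mapping. The interval boundary
# values (in pt) are placeholder assumptions; the age brackets follow
# the ones listed in the text.
FONT_RANGES = {
    (0, 12): (12, 16),    # (minFont1, maxFont1)
    (12, 35): (10, 14),   # (minFont2, maxFont2)
    (35, 50): (12, 16),   # (minFont3, maxFont3)
    (50, 70): (14, 20),   # (minFont4, maxFont4)
    (70, 100): (18, 24),  # (minFont5, maxFont5)
}

def font_range(age):
    """Return the (min, max) font-size interval for an age in years."""
    for (lo, hi), interval in FONT_RANGES.items():
        if lo <= age < hi or (hi == 100 and age == 100):
            return interval
    raise ValueError("age out of supported range")
```

For example, `font_range(60)` falls in the 50-70 bracket and returns that bracket's interval.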
Second, the reference font size corresponding to the target font-size interval is determined.
The reference font size is the baseline font size for the age group. Optionally, a target operation is performed on the font sizes included in the target interval to obtain the reference font size, where the target operation is either taking the median or taking the mean. Obtaining the reference font size by a median or mean over the interval keeps its determination simple. In a possible implementation, the computer device may instead derive the reference font size from a normal-distribution characteristic of the interval, or the like.
For example, the reference font size obtained by averaging the minimum and maximum font sizes of the interval may be expressed as follows:
fontBase = (minFont + maxFont) / 2

where fontBase denotes the reference font size, minFont denotes the lower boundary (minimum font size) of the font-size interval, and maxFont denotes the upper boundary (maximum font size) of the interval.
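The two "target operations" named above (mean and median over the interval) can be sketched directly; the function name is an assumption.

```python
import statistics

# Sketch of the two target operations described above: the reference
# font size as the mean or the median of the sizes in the interval.
# The function name and signature are assumptions for illustration.

def reference_font_size(min_font, max_font, op="mean"):
    if op == "mean":
        return (min_font + max_font) / 2  # fontBase = (minFont + maxFont) / 2
    sizes = list(range(min_font, max_font + 1))
    return statistics.median(sizes)

fontBase = reference_font_size(14, 20)
```

For an integer interval such as 14-20 pt, the mean of the endpoints and the median of the enumerated sizes coincide at 17.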
Thirdly, determining the font size matched with the target user according to the reference font size and the target reading difficulty factor.
Optionally, the product of the reference font size and the target reading difficulty factor is taken as the font size adapted to the target user.
Illustratively, the font size adapted to the user can be calculated by the following formula:

fontSize = λ(fontBase, fontRange, u)

where fontSize denotes the font size adapted to the user, λ() denotes a font scaling function, fontBase denotes the reference font size, fontRange denotes the font-size interval, and u denotes the reading difficulty factor.
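One plausible concrete form of the scaling function λ is sketched below: scale the reference size by the difficulty factor and clamp the result to the interval. The exact formula is an assumption; the text only names λ() and its arguments.

```python
# One plausible form of the scaling function λ described above: scale
# the reference size by the reading difficulty factor u (0-1) and clamp
# the result to the font-size interval. This exact formula is an
# assumption; the text only names λ() and its arguments.

def font_size(font_base, font_range, u):
    lo, hi = font_range
    scaled = font_base * (1 + u)  # larger u -> larger font (assumed)
    return max(lo, min(hi, round(scaled)))

size = font_size(17, (14, 20), 0.7)
```

With fontBase 17pt, interval 14-20pt, and u = 0.7, the scaled value exceeds the interval and is clamped to the 20pt upper bound.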
In summary, in the technical solution provided by this embodiment, the target font-size interval corresponding to the target age group is determined from the preset correspondence, the reference font size for that interval is determined, and the font size adapted to the target user is determined from the reference font size and the target reading difficulty factor, thereby determining the font size automatically.
Referring to fig. 3, a flowchart of an information display method according to another embodiment of the present application is shown, where the method may include the following steps:
step 301, acquiring a face image of a target user.
For the description of step 301, reference may be made to the above embodiments, which are not described herein again.
Step 302, determining the face feature information of the target user according to the face image.
The face feature information characterizes the user's age-related features and may include at least one of: degree of squinting, forehead lines, crow's feet, eye bags, pupil color, skin color, hair color, and so on. Optionally, the face feature information comprises a plurality of labels: one for the degree of squinting, one for forehead lines, one for crow's feet, one for eye bags, one for pupil color, one for skin color, and one for hair color.
Optionally, the computer device extracts the face feature information from the face image through an algorithm such as a CNN (Convolutional Neural Network).
And step 303, calling a feature processing model to process the face feature information to obtain a target age group and a target reading difficulty factor.
A feature processing model is a model for processing features; in the embodiments of the present application, it processes face feature information. Optionally, the feature processing model uses a multi-class classification algorithm based on a deep multi-layer neural network.
Illustratively, as shown in fig. 4, the feature processing model 40 includes an input layer, a hidden layer, a softmax layer, and an output layer. The face feature information is fed into the input layer of the feature processing model 40 and, after a series of processing steps, the output layer produces the target age group and the target reading difficulty factor.
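The layer structure in Fig. 4 (input, hidden, softmax, output) can be illustrated with a toy forward pass. This is a sketch only: the weights, layer sizes, and ReLU activation are made-up placeholders, not the patent's architecture.

```python
import math

# Toy forward pass mirroring the layers in Fig. 4 (input -> hidden ->
# softmax -> output). Weights, layer sizes, and the ReLU activation are
# made-up placeholders; the patent does not specify them.

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def forward(features, w_hidden, w_out):
    hidden = [max(0.0, sum(f * w for f, w in zip(features, row)))
              for row in w_hidden]                  # ReLU hidden layer
    logits = [sum(h * w for h, w in zip(hidden, row)) for row in w_out]
    return softmax(logits)                          # class probabilities

probs = forward([0.2, 0.8],
                [[1.0, 0.0], [0.0, 1.0]],
                [[2.0, 0.0], [0.0, 2.0]])
```

The softmax output is a probability distribution over classes, which is how a multi-class head assigns an age group.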
Optionally, before calling the feature processing model, the feature processing model needs to be trained, and the training process may include the following steps:
1. and acquiring training data of the feature processing model.
In the embodiments of the present application, the training data comprises at least one training sample, each containing the face feature information of a training user, the age group to which that user belongs, and the reading difficulty factor corresponding to that user. The age group and reading difficulty factor of a training user can be labeled manually.
2. And obtaining the predicted age group to which the training user belongs and the predicted reading difficulty factor corresponding to the training user based on the face feature information of the training user.
And inputting the face feature information of the training user into the feature processing model to obtain the predicted age group to which the training user belongs and the predicted reading difficulty factor corresponding to the training user.
3. And training the feature processing model according to the predicted age bracket, the predicted reading difficulty factor, the age bracket to which the training user belongs and the reading difficulty factor corresponding to the training user.
Optionally, training of the feature processing model stops when the predicted age group matches the age group to which the training user belongs and the loss between the predicted reading difficulty factor and the training user's reading difficulty factor is below a preset threshold; alternatively, training stops when the number of training iterations reaches a preset count.
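The two stopping rules just described can be sketched as a predicate checked after each training step. The squared-error loss, threshold, and iteration cap below are illustrative assumptions.

```python
# Sketch of the two stopping rules described above: stop when the
# predicted age group matches and the difficulty-factor loss is below a
# threshold, or when a maximum number of iterations is reached. The
# squared-error loss form and the numeric values are assumptions.

def should_stop(pred_group, true_group, pred_u, true_u,
                step, loss_threshold=0.01, max_steps=10_000):
    loss = (pred_u - true_u) ** 2  # assumed squared-error loss
    if pred_group == true_group and loss < loss_threshold:
        return True
    return step >= max_steps
```

A training loop would call this after every iteration and break out as soon as it returns True.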
In the embodiments of the present application, training data for the feature processing model is obtained and the model is trained on it, yielding a feature processing model of high precision.
And step 304, determining the font size matched with the target user according to the target age group and the target reading difficulty factor.
And 305, displaying the information of the font size matched with the target user.
The descriptions of steps 304 to 305 can be found in the above embodiments, and are not repeated herein.
In summary, in the technical solution provided by this embodiment, the face feature information of the target user is determined from the face image, and the feature processing model is invoked to process it into the target age group and target reading difficulty factor. Because these are determined from face feature information, their determination is more accurate.
In an exemplary embodiment, the facial image includes a facial video. At this time, as shown in fig. 5, the information display method provided in the embodiment of the present application may include the following steps:
step 501, judging whether the target user is a real-name user. If the target user is a real-name user, the method starts from step 502; if the target user is not a real-name user, execution begins at step 504.
A real-name user refers to a user who has undergone real-name authentication. For example, if the user account of the user is bound to the user's mobile phone number, it may be determined that the user is a real-name user.
Step 502, determining a target age group to which the target user belongs according to the real name information of the target user.
The real-name information includes the target user's birth year and month, and the computer device can determine the target age group from the current year and month together with the birth year and month.
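Step 502 can be sketched as deriving the age group from the real-name birth date; the age-group boundaries reuse the brackets listed earlier in the text, and the function name is an assumption.

```python
import datetime

# Sketch of step 502: derive the age group from the real-name birth
# year and month. The brackets reuse the ones listed earlier in the
# text; the function name and signature are assumptions.

AGE_BRACKETS = [(0, 12), (12, 35), (35, 50), (50, 70), (70, 100)]

def age_group_from_birth(birth_year, birth_month, today=None):
    today = today or datetime.date.today()
    # Subtract one if the birthday month has not yet been reached.
    age = today.year - birth_year - (today.month < birth_month)
    for lo, hi in AGE_BRACKETS:
        if lo <= age < hi or (hi == 100 and age >= 100):
            return (lo, hi)
    raise ValueError("unsupported age")

group = age_group_from_birth(1960, 6, today=datetime.date(2020, 3, 1))
```

A user born in June 1960 is 59 in March 2020, so the sketch places them in the 50-70 bracket.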
Step 503, displaying information in the preset font size corresponding to the target age group.
The preset font size can be determined through deep learning on a large amount of user data.
In this embodiment, when the user is a real-name user, the target age group to which the user belongs is determined automatically and information in the corresponding preset font size is displayed; the operation is simple and convenient, and the font size is determined reasonably and accurately.
Step 504, confirming whether an authorized shooting instruction for the terminal's camera is received. If the instruction is not received, step 505 is executed; if it is received, execution proceeds from step 506.
An authorized shooting instruction is an instruction that authorizes the camera to shoot. Optionally, the user triggers it through a voice, gesture, or touch operation.
Step 505, displaying information in the system default font size.
The system default font size is the font size preset by the operating system, configured in the system's display settings.
Step 506, acquiring a face video of the target user.
And 507, performing frame extraction processing on the face video to obtain n image frames, wherein n is a positive integer.
And step 508, respectively performing feature extraction processing on the n image frames to obtain n pieces of face feature information.
Feature extraction is the process of extracting face feature information from an image frame. Optionally, each image frame undergoes feature extraction through a CNN (Convolutional Neural Network) algorithm. Each image frame yields corresponding face feature information, which may be the same or different across frames.
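Steps 507 and 508 (frame extraction, then per-frame feature extraction) can be sketched as follows. The video is modeled as a plain list of frames and the "CNN" is a trivial stub; both are assumptions for illustration only.

```python
# Sketch of steps 507-508: sample n frames from a video and extract one
# feature vector per frame. The video is modeled as a list of frames
# and the "CNN" is a stub; both are assumptions for illustration.

def sample_frames(video_frames, n):
    step = max(1, len(video_frames) // n)
    return video_frames[::step][:n]

def extract_features(frame):
    # Stand-in for a CNN feature extractor: here, a trivial 2-dim vector.
    return [sum(frame) / len(frame), max(frame)]

video = [[i, i + 1, i + 2] for i in range(48)]  # 48 fake 3-pixel frames
frames = sample_frames(video, 24)
features = [extract_features(f) for f in frames]
```

Sampling every second frame of a 48-frame clip yields n = 24 frames, matching the worked example later in the text.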
Step 509, calling the feature processing model to process the n pieces of face feature information to obtain n age groups and n reading difficulty factors.
And step 510, obtaining a target age group and a target reading difficulty factor according to the n age groups and the n reading difficulty factors.
Optionally, the computer device feeds the n age groups and n reading difficulty factors into the softmax layer of the feature processing model to obtain the target age group and the target reading difficulty factor.
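Steps 509-510 combine n per-frame predictions into one result. The text routes this through the model's softmax layer; the simple majority-vote-plus-mean aggregation below is an assumption standing in for that step.

```python
from collections import Counter

# Sketch of steps 509-510: combine n per-frame predictions into one
# result, by majority vote over the age groups and the mean of the
# difficulty factors. The text routes this through a softmax layer;
# this simpler aggregation is an assumption.

def aggregate(age_groups, difficulty_factors):
    target_group = Counter(age_groups).most_common(1)[0][0]
    target_u = sum(difficulty_factors) / len(difficulty_factors)
    return target_group, target_u

group, u = aggregate([(50, 70)] * 20 + [(35, 50)] * 4, [0.7] * 24)
```

With 20 of 24 frames voting for the 50-70 bracket and a uniform 0.7 difficulty, the aggregate matches the worked example later in the text.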
In a possible implementation, the computer device performs feature extraction on the n image frames to obtain a single piece of face feature information, and invokes the feature processing model to process it into the target age group and target reading difficulty factor.
And 511, determining the font size matched with the target user according to the target age group and the target reading difficulty factor.
And step 512, displaying the information of the font size matched with the target user.
The above descriptions of steps 511 to 512 can refer to the above embodiments, and are not repeated herein.
Steps 501 to 512 may be executed by the terminal alone, or by the terminal and the server interactively. For example, after the user grants authorization, the terminal records a one-second face video by invoking the camera and uploads it to the server; for privacy protection, the video is not stored but is deleted immediately after use. The server decomposes the face video into a plurality of image frames, performs feature extraction on each frame to obtain face feature information, then performs deep face-feature recognition and outputs the age group information and the reading difficulty factor.
In summary, in the embodiment of the present application, it is determined whether the user is a real-name user; if not, it is determined whether the user agrees to call the camera to photograph the user's face, and if the user agrees, the face video of the user is obtained. The operation is simple and convenient, and the user experience is good.
In addition, the embodiment of the application obtains n image frames by performing frame extraction processing on the face video; respectively performs feature extraction processing on the n image frames to obtain n pieces of face feature information; calls a feature processing model to process the n pieces of face feature information to obtain n age groups and n reading difficulty factors; and finally inputs the n age groups and the n reading difficulty factors into a softmax layer to obtain the target age group and the target reading difficulty factor.
With combined reference to fig. 6 and fig. 7, schematic diagrams of an information display method provided by an embodiment of the present application are shown. The user clicks an icon 600 of an application program 'take-out' installed in the terminal to open the application program. The application program determines whether the user is one of its real-name users; if not, it displays a prompt box 610 asking 'whether the camera is allowed to be used', the prompt box 610 further comprising a 'forbid' selection control 611 and an 'allow' selection control 612. When the terminal receives a confirmation instruction on the 'allow' selection control 612, the terminal collects a face video of the user through the camera and sends the face video to the server. After receiving the face video, the server performs frame extraction processing on it to obtain 24 image frames, performs feature extraction processing on the 24 image frames respectively to obtain 24 pieces of face feature information, and calls the feature processing model to process the 24 pieces of face feature information, obtaining that the user belongs to the age group 50-70 and that the reading difficulty factor corresponding to the user is 0.7. The server sends the age group 50-70 and the reading difficulty factor 0.7 to the terminal. After receiving them, the terminal determines, according to the age group 50-70 and the reading difficulty factor 0.7, that the font size to be displayed by the application program is 18pt, and displays information in the 18pt font size.
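The terminal-server exchange in fig. 6 and fig. 7 can be sketched as the following skeleton. The feature extractor, the feature processing model, and the font-size rule are passed in as stubs because the patent does not fix their implementations; majority voting over the per-frame age groups and averaging the difficulty factors are assumptions:

```python
from collections import Counter

def server_analyze(frames, extract, model):
    """Server side: per-frame features, model inference, aggregation."""
    predictions = [model(extract(f)) for f in frames]  # (age_group, difficulty) pairs
    # Assumed aggregation: majority age group, mean difficulty factor.
    age = Counter(a for a, _ in predictions).most_common(1)[0][0]
    difficulty = sum(d for _, d in predictions) / len(predictions)
    return age, difficulty

def terminal_display(age, difficulty, size_rule):
    """Terminal side: map the server's result to a font size in pt."""
    return size_rule(age, difficulty)
```

Keeping inference on the server matches the description above: the terminal only records, uploads, and renders, so no model needs to ship with the application.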
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 8, a block diagram of an information display device according to an embodiment of the present application is shown. The apparatus 800 has functions of implementing method examples of the information display method, and the functions may be implemented by hardware or by hardware executing corresponding software. The apparatus 800 may be the computer device described above, or may be provided on a computer device. The apparatus 800 may include: an image acquisition module 810, an age group determination module 820, a font size determination module 830, and an information display module 840.
An image obtaining module 810, configured to obtain a face image of the target user.
An age group determining module 820, configured to determine, according to the face image, a target age group to which the target user belongs and a target reading difficulty factor corresponding to the target user, where the target reading difficulty factor is used to indicate a reading difficulty corresponding to the target user.
A font size determining module 830, configured to determine a font size adapted to the target user according to the target age group and the target reading difficulty factor.

An information display module 840, configured to display the information of the font size adapted to the target user.
To sum up, in the technical solution provided by the embodiment of the present application, the age group to which a user belongs and the reading difficulty factor corresponding to the user are determined from an acquired face image of the user, so that a font size adapted to the user is determined and information is displayed in that font size. Compared with the related art, in which the user must manually find the font setting entry and configure the font, which is inconvenient to operate, especially for older users, the embodiment of the present application is simple and convenient to operate and automatically determines the font size adapted to the user. This solves the reading obstacle encountered by elderly users of Internet products, and enables Internet products to present different styles to user groups of different ages, with high diversity.
Optionally, the font size determining module 830 includes: an interval determining unit and a word size determining unit (not shown in the figure).

The interval determining unit is used for determining a target word size interval corresponding to the target age group from a preset corresponding relation, wherein the preset corresponding relation comprises a corresponding relation between at least one group of age groups and word size intervals, and the target word size interval refers to an interval from a minimum word size corresponding to the target age group to a maximum word size corresponding to the target age group;

the word size determining unit is used for determining a reference word size corresponding to the target word size interval;
the word size determining unit is further configured to determine a word size adapted to the target user according to the reference word size and the target reading difficulty factor.
Optionally, the word size determining unit is configured to:
performing target operation on the word size included in the target word size interval to obtain the reference word size, wherein the target operation includes any one of the following items: taking median operation and taking mean value operation.
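Neither the age-to-interval mapping nor the adaptation formula is fixed by the patent. The intervals below and the linear shift by the difficulty factor are assumptions, chosen so that the worked example in the description (age group 50-70, factor 0.7 → 18pt) holds:

```python
import statistics

# Hypothetical age group → (min, max) font-size interval in pt.
SIZE_INTERVALS = {"18-30": (12, 14), "30-50": (13, 16), "50-70": (15, 20)}

def adapted_font_size(age_group, difficulty, op="median"):
    lo, hi = SIZE_INTERVALS[age_group]
    sizes = list(range(lo, hi + 1))
    # Reference size: median or mean of the sizes in the target interval.
    reference = statistics.median(sizes) if op == "median" else statistics.mean(sizes)
    # Assumed rule: shift linearly from the reference toward an interval
    # bound, in proportion to how far the factor is from the midpoint 0.5.
    if difficulty >= 0.5:
        size = reference + (hi - reference) * (2 * difficulty - 1)
    else:
        size = reference + (reference - lo) * (2 * difficulty - 1)
    return round(size)
```

A factor of 0.5 yields the reference size itself; factors near 1 push the size toward the interval's maximum, and factors near 0 toward its minimum.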
Optionally, the age group determination module 820 includes: an information determination unit and an age group determination unit (not shown in the figure).
The information determining unit is used for determining the face feature information of the target user according to the face image;
and the age group determining unit is used for calling a feature processing model to process the face feature information to obtain the target age group and the target reading difficulty factor.
Optionally, the face image comprises a face video;
the information determination unit is configured to:
performing frame extraction processing on the face video to obtain n image frames, wherein n is a positive integer;
respectively performing feature extraction processing on the n image frames to obtain n pieces of face feature information;
the age group determination unit is configured to:
calling the feature processing model to process the n pieces of face feature information to obtain n age groups and n reading difficulty factors;
and obtaining the target age group and the target reading difficulty factor according to the n age groups and the n reading difficulty factors.
Optionally, the apparatus 800 further includes: a data acquisition module, an age group prediction module, and a model training module (not shown).
The data acquisition module is used for acquiring training data of the feature processing model, wherein the training data comprises at least one training sample, and the training sample comprises face feature information of a training user, an age group to which the training user belongs and a reading difficulty factor corresponding to the training user;
the age group prediction module is used for obtaining the predicted age group of the training user and the predicted reading difficulty factor corresponding to the training user based on the face feature information of the training user;
and the model training module is used for training the feature processing model according to the predicted age bracket, the predicted reading difficulty factor, the age bracket to which the training user belongs and the reading difficulty factor corresponding to the training user.
Optionally, the face image comprises a face video;
the apparatus 800, further comprising: a user judgment module and an instruction confirmation module (not shown in the figure).
The user judgment module is used for judging whether the target user is a real-name user;
the instruction confirming module is used for responding to the fact that the target user is not a real-name user, and confirming whether an authorized shooting instruction for the camera of the terminal is received or not, wherein the authorized shooting instruction is an instruction used for authorizing the camera to shoot;
the information display module 840 is further configured to display information of a default word size of the system in response to the authorized shooting instruction not being received;
the image obtaining module 810 is further configured to, in response to receiving the authorized shooting instruction, perform the step of obtaining the facial image of the target user.
Optionally, the age group determining module 820 is further configured to, in response to that the target user is a real-name user, determine a target age group to which the target user belongs according to real-name information of the target user;
the information display module 840 is further configured to display information of a preset font size corresponding to the target age group.
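The branching among real-name lookup, camera authorization, and the system default can be sketched as follows; the preset sizes and the default of 14pt are hypothetical, and the face-based estimator is passed in as a stub:

```python
# Hypothetical preset sizes (pt) per registered age group.
PRESET_SIZES = {"18-30": 13, "30-50": 15, "50-70": 18}

def choose_font_size(user, camera_authorized, estimate_from_face, default_size=14):
    """Mirror the described branches: real-name users get a preset for their
    registered age group; otherwise fall back to the system default unless
    the user authorizes the camera, in which case estimate from the face."""
    age_group = user.get("real_name_age_group")
    if age_group is not None:
        return PRESET_SIZES[age_group]
    if not camera_authorized:
        return default_size
    return estimate_from_face(user)
```

Checking the real-name record first avoids invoking the camera at all for users whose age group is already known.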
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Referring to fig. 9, a block diagram of a computer device according to an embodiment of the present application is shown. The computer device is used for implementing the information display method provided in the above embodiments. Specifically:
the computer apparatus 900 includes a CPU (Central Processing Unit) 901, a system Memory 904 including a RAM (Random Access Memory) 902 and a ROM (Read-Only Memory) 903, and a system bus 905 connecting the system Memory 904 and the Central Processing Unit 901. The computer device 900 also includes a basic I/O (Input/Output) system 906, which facilitates the transfer of information between devices within the computer, and a mass storage device 907 for storing an operating system 913, application programs 914, and other program modules 912.
The basic input/output system 906 includes a display 908 for displaying information and an input device 909 such as a mouse, keyboard, etc. for user input of information. Wherein the display 908 and the input device 909 are connected to the central processing unit 901 through an input output controller 910 connected to the system bus 905. The basic input/output system 906 may also include an input/output controller 910 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 910 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 907 is connected to the central processing unit 901 through a mass storage controller (not shown) connected to the system bus 905. The mass storage device 907 and its associated computer-readable media provide non-volatile storage for the computer device 900. That is, the mass storage device 907 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM (Compact Disc Read-Only Memory) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash Memory or other solid state Memory technology, CD-ROM, DVD (Digital Video Disc) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 904 and mass storage device 907 described above may be collectively referred to as memory.
According to various embodiments of the present application, the computer device 900 may also be connected, through a network such as the Internet, to a remote computer on the network for operation. That is, the computer device 900 may be connected to the network 912 through the network interface unit 911 connected to the system bus 905, or the network interface unit 911 may be used to connect to other types of networks or remote computer systems (not shown).
In an exemplary embodiment, there is also provided a computer-readable storage medium having stored therein a computer program, which is loaded and executed by a processor to implement the above-described information display method.
In an exemplary embodiment, a computer program product is also provided, which, when executed by a processor, implements the above information display method.
It should be understood that reference to "a plurality" herein means two or more. Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (11)

1. An information display method, characterized in that the method comprises:
acquiring a face image of a target user;
determining a target age group to which the target user belongs and a target reading difficulty factor corresponding to the target user according to the face image, wherein the target reading difficulty factor is used for indicating the reading difficulty corresponding to the target user;
determining the font size matched with the target user according to the target age group and the target reading difficulty factor;
and displaying the information of the font size matched with the target user.
2. The method of claim 1, wherein determining the font size adapted to the target user according to the target age group and the target reading difficulty factor comprises:
determining a target word size interval corresponding to the target age group from a preset corresponding relation, wherein the preset corresponding relation comprises the corresponding relation between at least one group of age groups and word size intervals, and the target word size interval is an interval from the minimum word size corresponding to the target age group to the maximum word size corresponding to the target age group;
determining a reference character size corresponding to the target character size interval;
and determining the word size matched with the target user according to the reference word size and the target reading difficulty factor.
3. The method of claim 2, wherein the determining the reference font size corresponding to the target font size interval comprises:
performing target operation on the word size included in the target word size interval to obtain the reference word size, wherein the target operation includes any one of the following items: taking median operation and taking mean value operation.
4. The method of claim 1, wherein the determining a target age group to which the target user belongs and a target reading difficulty factor corresponding to the target user according to the face image comprises:
determining the face feature information of the target user according to the face image;
and calling a feature processing model to process the face feature information to obtain the target age group and the target reading difficulty factor.
5. The method of claim 4, wherein the facial image comprises a facial video;
the determining the face feature information of the target user according to the face image includes:
performing frame extraction processing on the face video to obtain n image frames, wherein n is a positive integer;
respectively performing feature extraction processing on the n image frames to obtain n pieces of face feature information;
the calling of the feature processing model to process the face feature information to obtain the target age group and the target reading difficulty factor comprises the following steps:
calling the feature processing model to process the n pieces of face feature information to obtain n age groups and n reading difficulty factors;
and obtaining the target age group and the target reading difficulty factor according to the n age groups and the n reading difficulty factors.
6. The method of claim 4, wherein before the invoking of the feature processing model to process the facial feature information to obtain the target age group and the target reading difficulty factor, further comprising:
acquiring training data of the feature processing model, wherein the training data comprises at least one training sample, and the training sample comprises face feature information of a training user, an age group to which the training user belongs and a reading difficulty factor corresponding to the training user;
obtaining a predicted age group to which the training user belongs and a predicted reading difficulty factor corresponding to the training user based on the face feature information of the training user;
and training the feature processing model according to the predicted age bracket, the predicted reading difficulty factor, the age bracket to which the training user belongs and the reading difficulty factor corresponding to the training user.
7. The method of claim 1 or 5, wherein the face image comprises a face video;
before the obtaining of the face image of the target user, the method further includes:
judging whether the target user is a real-name user or not;
responding to the fact that the target user is not a real-name user, and confirming whether an authorized shooting instruction for a camera of the terminal is received or not, wherein the authorized shooting instruction is an instruction used for authorizing the camera to shoot;
responding to the situation that the authorized shooting instruction is not received, and displaying information of a default word size of the system;
and responding to the received authorized shooting instruction, and executing the step of acquiring the face image of the target user.
8. The method of claim 7, wherein after determining whether the target user is a real-name user, further comprising:
in response to the target user being a real-name user, determining a target age group to which the target user belongs according to real-name information of the target user;
and displaying the information of the preset font size corresponding to the target age group.
9. An information display apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring a face image of a target user;
the age group determining module is used for determining a target age group to which the target user belongs and a target reading difficulty factor corresponding to the target user according to the face image, wherein the target reading difficulty factor is used for indicating the reading difficulty corresponding to the target user;
the word size determining module is used for determining the word size matched with the target user according to the target age group and the target reading difficulty factor;
and the information display module is used for displaying the information of the character size matched with the target user.
10. A computer device, characterized in that it comprises a processor and a memory, in which a computer program is stored, which computer program is loaded and executed by the processor to implement the information display method according to any one of claims 1 to 8.
11. A computer-readable storage medium, in which a computer program is stored, the computer program being loaded and executed by a processor to implement the information display method according to any one of claims 1 to 8.
CN202010232171.2A 2020-03-27 2020-03-27 Information display method, device, equipment and storage medium Withdrawn CN111459587A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010232171.2A CN111459587A (en) 2020-03-27 2020-03-27 Information display method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111459587A true CN111459587A (en) 2020-07-28

Family

ID=71683542

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010232171.2A Withdrawn CN111459587A (en) 2020-03-27 2020-03-27 Information display method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111459587A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508606A (en) * 2011-11-10 2012-06-20 广东步步高电子工业有限公司 Method and system for subdividing belonged groups of users by face recognition and setting corresponding functions of mobile handsets
CN106778623A (en) * 2016-12-19 2017-05-31 珠海格力电器股份有限公司 Terminal screen control method and device and electronic equipment
CN107528972A (en) * 2017-08-11 2017-12-29 维沃移动通信有限公司 A kind of display methods and mobile terminal
CN108446385A (en) * 2018-03-21 2018-08-24 百度在线网络技术(北京)有限公司 Method and apparatus for generating information
CN108989571A (en) * 2018-08-15 2018-12-11 浙江大学滨海产业技术研究院 A kind of adaptive font method of adjustment and device for mobile phone word read
CN109034078A (en) * 2018-08-01 2018-12-18 腾讯科技(深圳)有限公司 Training method, age recognition methods and the relevant device of age identification model
CN109063580A (en) * 2018-07-09 2018-12-21 北京达佳互联信息技术有限公司 Face identification method, device, electronic equipment and storage medium
CN109144646A (en) * 2018-08-20 2019-01-04 广东小天才科技有限公司 Word size adjusting method for output interface of family education machine and family education machine
CN109190449A (en) * 2018-07-09 2019-01-11 北京达佳互联信息技术有限公司 Age recognition methods, device, electronic equipment and storage medium
CN110222597A (en) * 2019-05-21 2019-09-10 平安科技(深圳)有限公司 The method and device that screen is shown is adjusted based on micro- expression


Similar Documents

Publication Publication Date Title
US11551377B2 (en) Eye gaze tracking using neural networks
CN110110118B (en) Dressing recommendation method and device, storage medium and mobile terminal
CN106682632B (en) Method and device for processing face image
CN108898185A (en) Method and apparatus for generating image recognition model
US11030733B2 (en) Method, electronic device and storage medium for processing image
CN108595628A (en) Method and apparatus for pushed information
WO2020244074A1 (en) Expression interaction method and apparatus, computer device, and readable storage medium
WO2021114936A1 (en) Information recommendation method and apparatus, electronic device and computer readable storage medium
WO2019232883A1 (en) Insurance product pushing method and device, computer device and storage medium
CN111488477A (en) Album processing method, apparatus, electronic device and storage medium
WO2023197648A1 (en) Screenshot processing method and apparatus, electronic device, and computer readable medium
CN111429338A (en) Method, apparatus, device and computer-readable storage medium for processing video
CN113610723A (en) Image processing method and related device
CN114242023A (en) Display screen brightness adjusting method, display screen brightness adjusting device and electronic equipment
CN112669416B (en) Customer service system, method, device, electronic equipment and storage medium
CN114063845A (en) Display method, display device and electronic equipment
CN112488650A (en) Conference atmosphere adjusting method, electronic equipment and related products
CN109949213B (en) Method and apparatus for generating image
CN111814840A (en) Method, system, equipment and medium for evaluating quality of face image
US20170171462A1 (en) Image Collection Method, Information Push Method and Electronic Device, and Mobile Phone
CN111459587A (en) Information display method, device, equipment and storage medium
CN113486730A (en) Intelligent reminding method based on face recognition and related device
CN113596597A (en) Game video acceleration method and device, computer equipment and storage medium
CN111079662A (en) Figure identification method and device, machine readable medium and equipment
CN110188713A (en) Method and apparatus for output information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230224

Address after: Room 507, Floor 5, Building 2, Yard 18, Haidian Suzhou Street, Haidian District, Beijing 100080

Applicant after: BEIJING KUXUN TECHNOLOGY Co.,Ltd.

Applicant after: BEIJING SANKUAI ONLINE TECHNOLOGY Co.,Ltd.

Address before: 100080 2106-030, 9 North Fourth Ring Road, Haidian District, Beijing.

Applicant before: BEIJING SANKUAI ONLINE TECHNOLOGY Co.,Ltd.

WW01 Invention patent application withdrawn after publication

Application publication date: 20200728
