CN112733673B - Content display method and device, electronic equipment and readable storage medium - Google Patents

Content display method and device, electronic equipment and readable storage medium

Info

Publication number
CN112733673B
Authority
CN
China
Prior art keywords
target person
sitting posture
target
display
target terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011631713.XA
Other languages
Chinese (zh)
Other versions
CN112733673A
Inventor
朱林涛
郭方清
邓竹立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuba Co Ltd
Original Assignee
Wuba Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuba Co Ltd
Priority to CN202011631713.XA
Publication of CN112733673A
Application granted
Publication of CN112733673B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a content display method and device, electronic equipment and a readable storage medium, and relates to the technical field of terminal display. The method comprises the following steps: acquiring face information and half-body information corresponding to a target person, wherein the target person is located in a preset area in front of a target terminal; determining the viewing distance of the target person according to the face information, and determining the sitting posture of the target person according to the half-body information; and controlling the target terminal to display preset content according to the viewing distance and the sitting posture. This solves the technical problem in the related art that the user experience is poor because the screen display of the terminal cannot be adjusted according to the eye-health needs of the user.

Description

Content display method and device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of terminal display technologies, and in particular to a content display method and device, an electronic device, and a readable storage medium.
Background
In the related art, the display functions of mainstream electronic devices include automatic or manual brightness adjustment; the background color and font size may be adjusted manually by the user or automatically by the device; and display modes such as a night mode and a daytime mode may also be set.
In carrying out the present invention, the applicant has found that at least the following problems exist in the related art:
1) The degree of automation is low and the operation is complex; at present only automatic brightness adjustment is supported, which cannot adequately meet eye-protection needs;
2) The background color, font size, font color and display mode must be set manually by the user, and the selectable options are limited, so the user's expectations are rarely met;
3) Existing schemes mainly support user-defined preference settings and therefore do not provide the user with effective eye protection.
It can be seen that the related art cannot automatically adjust the screen display of an electronic device according to the eye-health needs of the user.
No effective solution to these problems has been proposed so far.
Disclosure of Invention
The embodiments of the invention provide a content display method and device, electronic equipment and a readable storage medium, to solve the technical problem in the related art that the user experience is poor because the terminal screen display cannot be adjusted according to the eye-health needs of the user.
To solve the above technical problem, the invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a content display method, including: acquiring face information and half-body information corresponding to a target person, wherein the target person is located in a preset area in front of a target terminal; determining the viewing distance of the target person according to the face information, and determining the sitting posture of the target person according to the half-body information; and controlling the target terminal to display preset content according to the viewing distance and the sitting posture.
Further, the face information includes a face image of the target person, and the half-body information includes a half-body image of the target person, wherein acquiring the face information and half-body information corresponding to the target person includes: collecting a person image corresponding to the target person through the target terminal; and acquiring the face image and the half-body image of the target person from the person image.
Further, determining the viewing distance of the target person from the face information and determining the sitting posture of the target person from the half-body information includes: inputting the face image into a pre-trained face recognition model to obtain the viewing distance of the target person; and inputting the half-body image into a pre-trained sitting posture recognition model to obtain the sitting posture of the target person.
Further, controlling the target terminal to display preset content according to the viewing distance and the sitting posture includes the following steps: judging whether the viewing distance is greater than a preset distance threshold; and if the viewing distance is smaller than the preset distance threshold, controlling the target terminal to display the preset content in a blurred manner.
Further, controlling the target terminal to display preset content according to the viewing distance and the sitting posture includes the following steps: acquiring illumination information corresponding to the target terminal; determining a healthy distance threshold and a healthy sitting posture under the current illumination environment according to the illumination information; and controlling the target terminal to display the preset content in a blurred manner when the viewing distance is smaller than the healthy distance threshold or the sitting posture does not match the healthy sitting posture.
In a second aspect, an embodiment of the present invention further provides a content display apparatus, including: an acquisition unit configured to acquire face information and half-body information corresponding to a target person, wherein the target person is located in a preset area in front of a target terminal; a determining unit configured to determine the viewing distance of the target person based on the face information and to determine the sitting posture of the target person according to the half-body information; and a display unit configured to control the target terminal to display preset content according to the illumination information, the viewing distance and the sitting posture.
Further, the face information includes a face image of the target person, and the half-body information includes a half-body image of the target person, wherein the acquisition unit includes: an image acquisition module configured to acquire a person image corresponding to the target person through the target terminal; and a first acquisition module configured to acquire the face image and the half-body image of the target person from the person image.
Further, the determining unit includes: a first processing module configured to input the face image into a pre-trained face recognition model to obtain the viewing distance of the target person; and a second processing module configured to input the half-body image into a pre-trained sitting posture recognition model to obtain the sitting posture of the target person.
Further, the display unit includes: a first display module configured to control the target terminal to display the preset content in a blurred manner if the viewing distance is smaller than the preset distance threshold.
Further, the display unit includes: a second acquisition module configured to acquire illumination information corresponding to the target terminal; a determining module configured to determine a healthy distance threshold and a healthy sitting posture under the current illumination environment according to the illumination information; and a display module configured to control the target terminal to display the preset content in a blurred manner when the viewing distance is smaller than the healthy distance threshold or the sitting posture does not match the healthy sitting posture.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the content display method according to the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the content display method according to the first aspect.
In the embodiments of the invention, face information and half-body information corresponding to a target person are acquired, wherein the target person is located in a preset area in front of a target terminal; the viewing distance of the target person is determined according to the face information and the sitting posture of the target person is determined according to the half-body information; and the target terminal is controlled to display preset content according to the viewing distance and the sitting posture. By acquiring the face information and half-body information of the target person within the preset area of the target terminal, the viewing distance and sitting posture of the target person are determined, and the target terminal is controlled to display the preset content according to the viewing distance and sitting posture so as to meet the user's eye-health needs, thereby solving the technical problem in the related art that the user experience is poor because the screen display of the terminal cannot be adjusted according to the user's eye-health needs.
The foregoing description is only an overview of the technical solution of the present invention. It may be implemented according to the contents of the specification so that the technical means of the present invention can be understood more clearly, and the above and other objects, features and advantages of the present invention are made more readily apparent by the following detailed description.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present invention, and that a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of an application scenario of a content display method in an embodiment of the present invention;
Fig. 2 is a schematic diagram of a content display method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the structure of a content display apparatus in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of protection of the invention.
Example 1
Before describing the content display method of the present embodiment, an application scenario of the method is first described. Fig. 1 is a schematic diagram of a hardware application scenario of an optional content display method in this embodiment. The scenario includes a target terminal 10 and a target person 20, wherein the target person 20 is located in a preset area 30 in front of the target terminal 10, at a position from which the preset content displayed on the target terminal 10 can be browsed.
In a specific application scenario, face information and half-body information corresponding to the target person 20 are acquired, wherein the target person 20 is located in the preset area 30 in front of the target terminal 10; the viewing distance of the target person 20 is determined from the face information, and the sitting posture of the target person 20 is determined based on the half-body information; and the target terminal 10 is controlled to display preset content according to the viewing distance and the sitting posture.
By acquiring the face information and half-body information of the target person within the preset area of the target terminal, the viewing distance and sitting posture of the target person are determined, and the target terminal is controlled to display the preset content according to the viewing distance and sitting posture so as to meet the user's eye-health needs, thereby solving the technical problem in the related art that the user experience is poor because the screen display of the terminal cannot be adjusted according to the user's eye-health needs.
In an embodiment of the present invention, as shown in fig. 2, a content display method is provided, and the method specifically includes the following steps:
s202, acquiring face information and half body information corresponding to a target person, wherein the target person is located in a preset area in front of a target terminal;
In the present embodiment, the target terminal includes, but is not limited to, a terminal with an image capturing function such as a mobile terminal, a PC or a microcomputer. The operating system of the target terminal includes, but is not limited to, a Windows system, an iOS system, a Linux system, an Android system, and the like. In addition, the preset content displayed on the target terminal includes, but is not limited to, multimedia files such as pictures, videos and sounds, and graphical user interfaces of applications.
The target person is located in the preset area in front of the target terminal. It should be noted that the preset area refers to an area within which the target person can browse the preset content displayed on the display screen of the target terminal. A person located outside the preset area is not regarded as a target person, and in this embodiment the display mode is not adjusted for non-target persons outside the area. The size of the preset area is determined according to the screen size of the display screen of the target terminal; in this embodiment it may also be set according to practical experience, which is not limited in any way.
Next, the face information and half-body information corresponding to the target person are acquired by capturing a person image of the target person. The face information includes, but is not limited to, the face image, viewing angle and similar information of the target person; the half-body information includes, but is not limited to, the working distance between the target person and the target terminal, the sitting-posture angle, and similar information.
S204, determining the viewing distance of the target person according to the face information, and determining the sitting posture of the target person according to the half-body information;
In a specific application scenario, the viewing distance of the target person may be determined from the face information of the target person. In addition, whether the eyes of the target person are open is determined based on the face information, and the line-of-sight information of the target person, that is, the point the target person is looking at, is determined based on the viewing angle.
In addition, the half-body information includes, but is not limited to, an upper-body image or partial torso image of the target person, the working distance of the target person, the sitting-posture angle, and similar information. The sitting posture of the target person can be determined from the upper-body image or partial torso image, so that the degree to which the current sitting posture of the target person matches the standard sitting posture can then be determined.
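The following sketch illustrates how the degree of match between the current sitting posture and a standard sitting posture could be computed. The field names, tolerances and values are assumptions made purely for illustration; they are not taken from this disclosure.

```python
# Hedged sketch: compares a detected sitting-posture angle and working distance
# against a standard sitting posture. Field names and tolerances are illustrative
# assumptions, not values taken from the patent.
from dataclasses import dataclass

@dataclass
class SittingPosture:
    back_angle_deg: float       # torso inclination relative to vertical
    working_distance_cm: float  # distance between eyes and screen

def posture_match_degree(current: SittingPosture,
                         standard: SittingPosture,
                         angle_tolerance_deg: float = 15.0,
                         distance_tolerance_cm: float = 10.0) -> float:
    """Return a match degree in [0, 1]; 1.0 means the current posture
    fully matches the standard posture within the given tolerances."""
    angle_score = max(0.0, 1.0 - abs(current.back_angle_deg - standard.back_angle_deg)
                      / angle_tolerance_deg)
    dist_score = max(0.0, 1.0 - abs(current.working_distance_cm - standard.working_distance_cm)
                     / distance_tolerance_cm)
    return min(angle_score, dist_score)

# Example: a slouched posture scores low against an upright standard posture.
standard = SittingPosture(back_angle_deg=5.0, working_distance_cm=40.0)
current = SittingPosture(back_angle_deg=25.0, working_distance_cm=28.0)
print(posture_match_degree(current, standard))  # 0.0 -> posture does not match
```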
S206, controlling the target terminal to display preset content according to the viewing distance and the sitting posture.
In a specific application scenario, the healthy viewing distance corresponding to the target terminal and a standard sitting posture conforming to health requirements are obtained, and whether to control the target terminal to display the preset content is then determined by comparing the viewing distance and sitting posture of the target person with the healthy viewing distance and the standard sitting posture. For example, if the viewing distance of the target person is smaller than the healthy viewing distance, the target terminal is controlled to display the preset content in a blurred manner; and/or, when the sitting posture of the target person does not match the healthy sitting posture, the preset content may be displayed in a blurred manner on the display screen of the target terminal, and prompt information is displayed to prompt the user to adjust the viewing distance or sitting posture.
In this embodiment, the healthy viewing distance may be set according to practical experience, and may also be determined according to the illumination information of the current environment and the screen brightness of the target terminal. The standard sitting posture may be determined according to the environment in which the target person is located; for example, the standard sitting posture differs between an upright sitting state and a lying state of the target person.
Further, controlling the target terminal to display the preset content may mean controlling the target terminal to blur the content currently being displayed, including but not limited to applying mosaic processing to the currently displayed content or displaying other images or videos instead. Meanwhile, prompt information may be displayed on the screen of the target terminal to prompt the target person to adjust the viewing distance or sitting posture.
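The display-control decision just described can be summarised in a minimal sketch. Here blur_screen, restore_screen and show_prompt are hypothetical hooks into the terminal's display layer, not interfaces defined in this disclosure.

```python
# Hedged sketch of the display-control decision: blur the displayed content and
# show a prompt when either the viewing distance is below the healthy viewing
# distance or the sitting posture does not match the standard posture.
# `blur_screen`, `restore_screen` and `show_prompt` are hypothetical hooks.

def blur_screen() -> None:
    print("screen blurred")        # e.g. mosaic the current content

def restore_screen() -> None:
    print("screen restored")       # restore normal sharpness

def show_prompt(message: str) -> None:
    print("prompt:", message)

def control_display(viewing_distance_cm: float,
                    healthy_distance_cm: float,
                    posture_matches_standard: bool) -> None:
    if viewing_distance_cm < healthy_distance_cm or not posture_matches_standard:
        blur_screen()
        show_prompt("Please adjust your viewing distance or sitting posture.")
    else:
        restore_screen()

control_display(viewing_distance_cm=25.0,
                healthy_distance_cm=35.0,
                posture_matches_standard=True)   # -> screen blurred + prompt
```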
As an optional technical solution, in this embodiment, the illumination of the environment where the target terminal is located changes continuously, for example under the influence of sunlight or artificial light, and the light intensity and light color change accordingly. By detecting the current light intensity and light color, the screen brightness, font background color and font color of the mobile terminal can be adaptively adjusted to protect the eyes of the target person; for example, when the ambient light is brighter, the screen brightness of the mobile terminal is adaptively reduced. In addition, user information of the target person can be detected, and the content displayed on the target terminal can be adaptively adjusted according to the detected face information and sitting posture of the target person. For example, being too close to the mobile terminal, or light that is too bright, damages eyesight; the distance between the target person and the target terminal is therefore detected according to the face information and sitting posture, and if the distance is too small, the sharpness of the content displayed on the screen is adaptively adjusted to remind the target person to adjust the sitting posture, helping the user protect the eyes and preserve eyesight.
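A minimal sketch of the adaptive adjustment based on ambient light is given below. The brightness levels, thresholds and color schemes are illustrative assumptions only; the disclosure does not specify concrete values.

```python
# Hedged sketch: choose screen brightness and font/background colors from the
# detected ambient light intensity and color. Thresholds and color choices are
# illustrative assumptions, not values from the patent.

def adapt_display(ambient_gray: float, ambient_is_warm: bool) -> dict:
    """ambient_gray: mean gray value of the environment, 0 (dark) to 255 (bright)."""
    if ambient_gray > 180:          # bright environment: reduce screen brightness
        brightness = 0.4
    elif ambient_gray > 80:         # normal indoor lighting
        brightness = 0.6
    else:                           # dim environment: night-style display
        brightness = 0.3
    if ambient_is_warm or ambient_gray < 80:
        background, font = "#F5F1E8", "#3A3A3A"   # warm, low-contrast scheme
    else:
        background, font = "#FFFFFF", "#000000"   # neutral daytime scheme
    return {"brightness": brightness, "background": background, "font": font}

print(adapt_display(ambient_gray=200.0, ambient_is_warm=False))
```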
It should be noted that, through this embodiment, face information and half-body information corresponding to a target person are obtained, wherein the target person is located in a preset area in front of the target terminal; the viewing distance of the target person is determined according to the face information and the sitting posture of the target person is determined according to the half-body information; and the target terminal is controlled to display preset content according to the viewing distance and the sitting posture. By acquiring the face information and half-body information of the target person within the preset area of the target terminal, the viewing distance and sitting posture of the target person are determined, and the target terminal is controlled to display the preset content according to the viewing distance and sitting posture so as to meet the user's eye-health needs, thereby solving the technical problem in the related art that the user experience is poor because the screen display of the terminal cannot be adjusted according to the user's eye-health needs.
Optionally, in the present embodiment, the face information includes a face image of the target person and the half-body information includes a half-body image of the target person, where acquiring the face information and half-body information corresponding to the target person includes, but is not limited to: collecting a person image corresponding to the target person through the target terminal; and acquiring the face image and the half-body image of the target person from the person image.
Specifically, in the present embodiment, the person image corresponding to the target person is acquired by an image acquisition component of the target terminal, for example the front camera of the target terminal. After the person image corresponding to the target person is acquired, the person image is segmented by an image recognition model that has been trained in advance, yielding the face image and the half-body image of the target person. The half-body image includes, but is not limited to, the head and upper torso of the target person, and the image recognition model in this embodiment includes, but is not limited to, a fully convolutional CNN.
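As an illustrative sketch of this segmentation step, the snippet below crops a face image and a half-body image from a captured person image. The disclosure uses a trained image recognition model such as a fully convolutional CNN; an off-the-shelf OpenCV face detector and a simple cropping heuristic are substituted here purely to show the data flow, and the half-body cropping ratios are assumptions.

```python
# Hedged sketch: crop a face image and a half-body image from a person image
# captured by the front camera. A Haar face detector stands in for the trained
# image recognition model mentioned in the description.
import cv2

def split_person_image(person_img):
    gray = cv2.cvtColor(person_img, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, None                                  # no target person detected
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])     # largest face
    face_img = person_img[y:y + h, x:x + w]
    # Assumed heuristic: the half-body region spans from just above the head
    # down to roughly three face-heights below it, over the full image width.
    img_h, img_w = person_img.shape[:2]
    half_body_img = person_img[max(0, y - h // 2):min(img_h, y + 4 * h), 0:img_w]
    return face_img, half_body_img

frame = cv2.imread("person.jpg")        # person image from the front camera
face, half_body = split_person_image(frame)
```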
In the present embodiment, whether the eyes of the target person are open and the viewing distance of the target person can be determined from the face image, and the sitting posture of the target person can be determined from the half-body image.
Through the above embodiment, the target terminal captures the person image corresponding to the target person, obtains the face image and the half-body image of the target person from the person image, and determines the relevant information of the target person from the face image and the half-body image.
Optionally, in the present embodiment, determining the viewing distance of the target person from the face information and determining the sitting posture of the target person from the half-body information includes, but is not limited to: inputting the face image into a pre-trained face recognition model to obtain the viewing distance of the target person; and inputting the half-body image into a pre-trained sitting posture recognition model to obtain the sitting posture of the target person.
Specifically, in the present embodiment, the face recognition model and the sitting posture recognition model include, but are not limited to, models such as a support vector machine or a fully convolutional neural network, and may be designed according to practical experience, which is not limited in this embodiment.
In implementing this embodiment, on the one hand, the face recognition model needs to be trained first. In some embodiments of the present invention, a training sample set is constructed from face images corresponding to target persons, and each training sample in the training sample set includes a face image. First, user face image data stored in a preset database is acquired. Typically, a user generates corresponding image recognition records during face image recognition, and the image recognition records stored in the preset database include, but are not limited to, face images, viewing distances, and so on. Then, training samples are constructed from the user face image data. In some embodiments of the present invention, the training samples may be obtained by processing the face image data, and each training sample includes a face image and a viewing distance. In some embodiments of the invention, each training sample is represented as a two-tuple <face image, viewing distance>. Next, the face recognition model is trained on the constructed training sample set: the face image is taken as the model input, and the viewing distance is taken as the model target.
On the other hand, the sitting posture recognition model also needs to be trained first. In some embodiments of the present invention, a training sample set is constructed from half-body images corresponding to target persons, and each training sample in the training sample set includes a half-body image. First, half-body image data stored in a preset database is acquired. Typically, a user generates a corresponding sitting posture recognition record when performing sitting posture recognition or photographing, and the sitting posture recognition records stored in the preset database include, but are not limited to, half-body images, sitting postures, and so on. Then, training samples are constructed from the user's half-body image data. In some embodiments of the present invention, the training samples may be obtained by processing the half-body image data, and each training sample includes a half-body image and a sitting posture. In some embodiments of the invention, each training sample is represented as a two-tuple <half-body image, sitting posture>. Next, the sitting posture recognition model is trained on the constructed training sample set: the half-body image is taken as the model input, and the sitting posture is taken as the model target.
The face recognition model and the sitting posture recognition model are then trained in the above two ways until both models gradually converge.
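A minimal training sketch is shown below, assuming the <face image, viewing distance> tuples described above have already been collected. The network architecture, hyperparameters and synthetic data are illustrative assumptions; the sitting posture recognition model would be trained analogously from <half-body image, sitting posture> tuples.

```python
# Hedged sketch: training a face recognition model that regresses the viewing
# distance from a face image. Architecture and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class ViewingDistanceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)            # predicted viewing distance (cm)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Dummy training set standing in for the <face image, viewing distance> tuples.
face_images = torch.randn(64, 3, 64, 64)
viewing_distances = torch.rand(64, 1) * 60 + 20   # 20 to 80 cm, synthetic labels
loader = DataLoader(TensorDataset(face_images, viewing_distances), batch_size=16)

model = ViewingDistanceNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):                           # train until the loss converges
    for images, targets in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        optimizer.step()
```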
The face image is input into the trained face recognition model to obtain the face information of the target person, including but not limited to the viewing-distance data of the target person; the half-body image is input into the trained sitting posture recognition model to obtain the sitting-posture data of the target person.
In a specific application scenario, detecting the degree to which the eyes of the target person are open, for example whether the eyes are open at all, can be used as a condition for deciding whether to enter an energy-saving protection or privacy protection state; the viewing distance between the eyes of the target person and the target terminal can be used as the basis of the healthy viewing-distance protection logic; and the detected viewing angle of the target person can likewise be used as a basis of the healthy viewing-distance protection logic. The working distance obtained by the sitting posture model, that is, the working distance between the target person and the mobile terminal, can be used as the basis of the healthy viewing-distance and sitting-posture correction logic; the detected sitting-posture angle is used as a basis of the sitting-posture correction logic; and the detected degree of match with the standard sitting-posture model can also be used as a basis of the sitting-posture correction logic.
Through the above embodiment, the face image is input into a pre-trained face recognition model to obtain the face information of the target person, and the half-body image is input into a pre-trained sitting posture recognition model to obtain the sitting-posture information of the target person, which improves the accuracy of the obtained face information and sitting-posture information.
Optionally, in this embodiment, controlling the target terminal to display preset content according to the viewing distance and sitting posture includes, but is not limited to: judging whether the viewing distance is greater than a preset distance threshold; and if the viewing distance is smaller than the preset distance threshold, controlling the target terminal to display the preset content in a blurred manner.
Specifically, in this embodiment, face images of the target person are acquired at preset time intervals, and the viewing distance of the target person, that is, the distance between the eyes of the target person and the screen of the target terminal, is determined by the face recognition model. A preset distance threshold is set as the healthy viewing-distance range for protecting the user's eyesight. When the viewing distance is smaller than the preset distance threshold, the current eye-use habit of the target person is considered unhealthy, the display sharpness of the target terminal is adjusted so that the content becomes blurred, and prompt information prompting the user to adjust the viewing distance is displayed on the screen of the target terminal. Once recovery of a healthy viewing distance is detected, that is, when the viewing distance is again greater than the preset distance threshold, the target terminal is controlled to restore the sharpness.
It should be noted that, in this embodiment, controlling the target terminal to display the preset content in a blurred manner includes, but is not limited to, adjusting the display sharpness of the target terminal, mosaic processing, occluding the display with a preset picture, and displaying preset prompt information. Through this embodiment, good eye-use habits can be established for the user, thereby achieving the goal of healthy eye use.
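As an illustration of the mosaic processing mentioned above, the sketch below blurs a frame by downscaling and then upscaling it; the block size is an assumption chosen for illustration.

```python
# Hedged sketch of mosaic processing: the currently displayed frame is
# downscaled and then upscaled with nearest-neighbour interpolation, producing
# a blocky, unreadable version of the content.
import cv2

def mosaic(frame, block_size: int = 16):
    h, w = frame.shape[:2]
    small = cv2.resize(frame, (max(1, w // block_size), max(1, h // block_size)),
                       interpolation=cv2.INTER_LINEAR)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

screen = cv2.imread("current_content.png")   # content currently being displayed
blurred = mosaic(screen)                     # shown until a healthy viewing
                                             # distance or posture is restored
```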
Optionally, in this embodiment, controlling the target terminal to display preset content according to the viewing distance and sitting posture includes, but is not limited to: acquiring illumination information corresponding to the target terminal; determining a healthy distance threshold and a healthy sitting posture under the current illumination environment according to the illumination information; and controlling the target terminal to display the preset content in a blurred manner when the viewing distance is smaller than the healthy distance threshold or the sitting posture does not match the healthy sitting posture.
In a specific application scenario, the target terminal adjusts its screen brightness according to changes in the ambient light, and the healthy viewing distance for the target person varies with the screen brightness. Therefore, in this embodiment, the illumination information of the environment where the target terminal is located is obtained to determine the healthy distance threshold corresponding to the healthy viewing distance; the target terminal is controlled to display the preset content in a blurred manner when the viewing distance is smaller than the healthy distance threshold, and prompt information prompting the user to adjust the viewing distance is displayed on the screen of the target terminal.
In an alternative technical solution, an environment image and the ambient light of the environment where the target terminal is located are acquired by the target terminal, and the illumination information of that environment is determined based on the environment image and the ambient light, where the illumination information includes, but is not limited to, light color and light intensity. In one example, a photoelectric sensor of the target terminal is used to acquire an environment image that contains the face image; points are taken uniformly in the area outside the face image, and the gray values of the sampled points are calculated to detect the color of the ambient light; and the light intensity of the environment where the target terminal is located is detected using the photoelectric effect of the photoelectric sensor. Detecting illumination information with a photoelectric sensor has the advantages of high precision, fast response and contactless operation.
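The gray-value sampling described in this example can be sketched as follows. The sampling grid size is an assumption, and the face box is taken from whichever face detector is used upstream; the sensor-based intensity reading is not modelled here.

```python
# Hedged sketch of the ambient-light estimation described above: points are
# sampled uniformly outside the face region of the environment image, and their
# colors / gray values are averaged to estimate the ambient light color and a
# brightness proxy. The grid size is an illustrative assumption.
import numpy as np

def estimate_ambient_light(env_img: np.ndarray, face_box, grid: int = 20):
    """env_img: HxWx3 BGR image; face_box: (x, y, w, h) of the detected face."""
    h, w = env_img.shape[:2]
    fx, fy, fw, fh = face_box
    ys = np.linspace(0, h - 1, grid, dtype=int)
    xs = np.linspace(0, w - 1, grid, dtype=int)
    samples = []
    for y in ys:
        for x in xs:
            if fx <= x < fx + fw and fy <= y < fy + fh:
                continue                      # skip points inside the face region
            samples.append(env_img[y, x].astype(np.float32))
    if not samples:
        return None, None                     # face covers the whole frame
    samples = np.stack(samples)
    mean_bgr = samples.mean(axis=0)           # estimated ambient light color (BGR)
    gray = samples @ np.array([0.114, 0.587, 0.299], dtype=np.float32)
    mean_gray = float(gray.mean())            # brightness proxy: 0 dark .. 255 bright
    return mean_bgr, mean_gray
```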
In addition, regarding the case where the sitting posture of the target person does not match the healthy sitting posture: specifically, the corresponding sitting posture is determined from the half-body image of the target person and matched against the preset healthy sitting posture. When the sitting posture of the target person does not match the healthy sitting posture, the target terminal is controlled to display the preset content in a blurred manner, and prompt information prompting the user to adjust the sitting posture is displayed on the screen of the target terminal.
According to the content display method provided by this embodiment, face information and half-body information corresponding to a target person are obtained, wherein the target person is located in a preset area in front of a target terminal; the viewing distance of the target person is determined according to the face information and the sitting posture of the target person is determined according to the half-body information; and the target terminal is controlled to display preset content according to the viewing distance and the sitting posture. By acquiring the face information and half-body information of the target person within the preset area of the target terminal, the viewing distance and sitting posture of the target person are determined, and the target terminal is controlled to display the preset content according to the viewing distance and sitting posture so as to meet the user's eye-health needs, thereby solving the technical problem in the related art that the user experience is poor because the screen display of the terminal cannot be adjusted according to the user's eye-health needs.
Example two
A content display apparatus provided by an embodiment of the present invention will be described in detail.
Referring to fig. 3, a schematic diagram of a content display apparatus according to an embodiment of the present invention is shown.
The content display apparatus of the embodiment of the invention includes: an acquisition unit 30, a determination unit 32, a display unit 34.
The functions of the modules and the interaction relationship between the modules are described in detail below.
An acquiring unit 30, configured to acquire face information and half-body information corresponding to a target person, wherein the target person is located in a preset area in front of a target terminal;
A determining unit 32, configured to determine the viewing distance of the target person based on the face information and to determine the sitting posture of the target person according to the half-body information;
A display unit 34, configured to control the target terminal to display preset content according to the illumination information, the viewing distance and the sitting posture.
Optionally, in the present embodiment, the face information includes a face image of the target person and the half-body information includes a half-body image of the target person, wherein
the acquisition unit 30 includes:
an image acquisition module, configured to acquire a person image corresponding to the target person through the target terminal;
and a first acquisition module, configured to acquire the face image and the half-body image of the target person from the person image.
Optionally, in the present embodiment, the determining unit 32 includes:
a first processing module, configured to input the face image into a pre-trained face recognition model to obtain the viewing distance of the target person;
and a second processing module, configured to input the half-body image into a pre-trained sitting posture recognition model to obtain the sitting posture of the target person.
Optionally, in the present embodiment, the display unit 34 includes:
a first display module, configured to control the target terminal to display the preset content in a blurred manner if the viewing distance is smaller than the preset distance threshold.
Optionally, in the present embodiment, the display unit 34 includes:
a second acquisition module, configured to acquire illumination information corresponding to the target terminal;
a determining module, configured to determine a healthy distance threshold and a healthy sitting posture under the current illumination environment according to the illumination information;
and a display module, configured to control the target terminal to display the preset content in a blurred manner when the viewing distance is smaller than the healthy distance threshold or the sitting posture does not match the healthy sitting posture.
The content display apparatus provided by the embodiment of the invention acquires face information and half-body information corresponding to a target person, wherein the target person is located in a preset area in front of a target terminal; determines the viewing distance of the target person according to the face information and the sitting posture of the target person according to the half-body information; and controls the target terminal to display preset content according to the viewing distance and the sitting posture. By acquiring the face information and half-body information of the target person within the preset area of the target terminal, the viewing distance and sitting posture of the target person are determined, and the target terminal is controlled to display the preset content according to the viewing distance and sitting posture so as to meet the user's eye-health needs, thereby solving the technical problem in the related art that the user experience is poor because the screen display of the terminal cannot be adjusted according to the user's eye-health needs.
Example III
Fig. 4 is a schematic hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 400 includes, but is not limited to: radio frequency unit 401, network module 402, audio output unit 403, input unit 404, sensor 405, display unit 406, user input unit 407, interface unit 408, memory 409, processor 410, and power source 411. Those skilled in the art will appreciate that the electronic device structure shown in fig. 4 is not limiting of the electronic device and that the electronic device may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. In the embodiment of the invention, the electronic equipment comprises, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used to receive and transmit signals during information transmission and reception or during a call; specifically, downlink data from a base station is received and then handed to the processor 410 for processing, and uplink data is transmitted to the base station. Typically, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 401 may also communicate with networks and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user through the network module 402, such as helping the user to send and receive e-mail, browse web pages, and access streaming media, etc.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as sound. Also, the audio output unit 403 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the electronic device 400. The audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive an audio or video signal. The input unit 404 may include a graphics processor (Graphics Processing Unit, GPU) 4041 and a microphone 4042. The graphics processor 4041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 406. The image frames processed by the graphics processor 4041 may be stored in the memory 409 (or another storage medium) or transmitted via the radio frequency unit 401 or the network module 402. The microphone 4042 may receive sound and process it into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 401, and output.
The electronic device 400 also includes at least one sensor 405, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 4061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 4061 and/or the backlight when the electronic device 400 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for recognizing the gesture of the electronic equipment (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; the sensor 405 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 406 is used to display information input by a user or information provided to the user. The display unit 406 may include a display panel 4061, and the display panel 4061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 407 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 4071 or thereabout using any suitable object or accessory such as a finger, stylus, etc.). The touch panel 4071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, and sends the touch point coordinates to the processor 410, and receives and executes commands sent from the processor 410. In addition, the touch panel 4071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 407 may include other input devices 4072 in addition to the touch panel 4071. In particular, other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 4071 may be overlaid on the display panel 4061. When the touch panel 4071 detects a touch operation on or near it, the touch operation is transferred to the processor 410 to determine the type of touch event, and then the processor 410 provides a corresponding visual output on the display panel 4061 according to the type of touch event. Although in fig. 4 the touch panel 4071 and the display panel 4061 are two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 408 is an interface to which an external device is connected to the electronic apparatus 400. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 400 or may be used to transmit data between the electronic apparatus 400 and an external device.
The memory 409 may be used to store software programs as well as various data. The memory 409 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone. In addition, the memory 409 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 410 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 409 and invoking data stored in the memory 409, thereby performing overall monitoring of the electronic device. Processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 410.
The electronic device 400 may also include a power supply 411 (e.g., a battery) for powering the various components, and preferably the power supply 411 may be logically connected to the processor 410 via a power management system that performs functions such as managing charging, discharging, and power consumption.
In addition, the electronic device 400 includes some functional modules, which are not shown, and are not described herein.
Preferably, the embodiment of the present invention further provides an electronic device, including: the processor 410, the memory 409, and a computer program stored in the memory 409 and capable of running on the processor 410, where the computer program when executed by the processor 410 implements the respective processes of the foregoing embodiments of the content display method, and achieves the same technical effects, and for avoiding repetition, a detailed description is omitted herein.
The embodiment of the invention also provides a readable storage medium on which a computer program is stored. When the computer program is executed by a processor, it implements the processes of the above content display method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here. The readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely a specific embodiment of the present invention, and the present invention is not limited thereto. Any person skilled in the art can readily conceive of variations or substitutions within the technical scope disclosed herein, and such variations or substitutions shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention is subject to the protection scope of the claims.

Claims (10)

1. A content display method, characterized by comprising:
acquiring face information and half-body information corresponding to a target person, wherein the target person is located in a preset area in front of a target terminal, the face information comprises a face image of the target person, and the half-body information comprises a half-body image of the target person;
determining the sight distance of the target person according to the face information; inputting the half-body image into a pre-trained sitting posture recognition model to obtain the sitting posture of the target person; and determining, according to the face information, whether the eyes of the target person are open, as a basis for judging whether to enter an energy-saving or privacy-protection state;
controlling the target terminal to display preset contents according to the sight distance and the sitting posture;
the determining the sight distance of the target person according to the face information includes:
inputting the face image into a pre-trained face recognition model to obtain the sight distance of the target person;
and the controlling the target terminal to display preset contents according to the sight distance and the sitting posture includes:
obtaining a healthy sight distance corresponding to the target terminal and a standard sitting posture conforming to health requirements, and then judging, according to the healthy sight distance and the standard sitting posture together with the sight distance and the sitting posture of the target person, whether to control the target terminal to display the preset contents;
wherein the standard sitting postures are determined according to the environment in which the target person is located, and comprise a standard sitting posture in which the target person is in an upright sitting state and a standard sitting posture in which the target person is in a lying state.
2. The method of claim 1, wherein acquiring the face information and the half-body information corresponding to the target person comprises:
collecting a person image corresponding to the target person through the target terminal;
and acquiring a face image and a half-body image of the target person according to the person image.
3. The method of claim 1, wherein controlling the target terminal to display preset content according to the sight distance and the sitting posture comprises:
judging whether the sight distance is greater than a preset distance threshold;
and if the sight distance is smaller than the preset distance threshold, controlling the target terminal to display the preset content in a blurred manner.
4. The method of claim 1, wherein controlling the target terminal to display preset content according to the sight distance and the sitting posture comprises:
acquiring illumination information corresponding to a target terminal;
determining a healthy distance threshold value and a healthy sitting posture in the current illumination environment according to the illumination information;
and controlling the target terminal to display the preset content in a blurred manner in a case that the sight distance is smaller than the healthy distance threshold or the sitting posture does not match the healthy sitting posture.
5. A content display apparatus, characterized by comprising:
an obtaining unit, configured to obtain face information and half-body information corresponding to a target person, wherein the target person is located in a preset area in front of a target terminal, the face information comprises a face image of the target person, and the half-body information comprises a half-body image of the target person;
a determining unit, configured to determine the sight distance of the target person based on the face information, and to input the half-body image into a pre-trained sitting posture recognition model so as to obtain the sitting posture of the target person;
the display unit is used for controlling the target terminal to display preset contents according to the illumination information, the sight distance and the sitting posture;
the determination unit includes:
the first processing module is used for inputting the face image into a pre-trained face recognition model so as to obtain the sight distance of the target person;
the display unit is further used for:
obtaining a healthy sight distance corresponding to the target terminal and a standard sitting posture conforming to health requirements, and then judging, according to the healthy sight distance and the standard sitting posture together with the sight distance and the sitting posture of the target person, whether to control the target terminal to display the preset contents;
wherein the standard sitting postures are determined according to the environment in which the target person is located, and comprise a standard sitting posture in which the target person is in an upright sitting state and a standard sitting posture in which the target person is in a lying state;
and the apparatus is further used for: determining, according to the face information, whether the eyes of the target person are open, as a basis for judging whether to enter an energy-saving or privacy-protection state.
6. The apparatus of claim 5, wherein the obtaining unit comprises:
the image acquisition module is used for acquiring a person image corresponding to the target person through the target terminal;
and the first acquisition module is used for acquiring the face image and the half-body image of the target person according to the person image.
7. The apparatus of claim 5, wherein the display unit comprises:
and the first display module is used for controlling the target terminal to display the preset content in a blurred manner if the sight distance is smaller than a preset distance threshold.
8. The apparatus of claim 5, wherein the display unit comprises:
the second acquisition module is used for acquiring illumination information corresponding to the target terminal;
The determining module is used for determining a healthy distance threshold value and a healthy sitting posture in the current illumination environment according to the illumination information;
and the display module is used for controlling the target terminal to display the preset content in a blurred manner in a case that the sight distance is smaller than the healthy distance threshold or the sitting posture does not match the healthy sitting posture.
9. An electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the content display method according to any one of claims 1 to 4.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the content display method according to any one of claims 1 to 4.
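For readers who prefer to follow the control flow of claims 1 to 4 in code, the decision logic can be sketched roughly as below. This is a minimal illustrative Python sketch, not the patented implementation: every name (HealthProfile, estimate_sight_distance, classify_sitting_posture, eyes_open, adjust_for_illumination, control_display), the posture labels, and the illumination scaling factor are hypothetical stand-ins for the pre-trained models, thresholds, and terminal APIs the claims refer to.

```python
# Illustrative sketch of the claimed display-control logic.
# All model wrappers and thresholds below are hypothetical placeholders,
# not the actual recognition models or values used by the patent.

from dataclasses import dataclass


@dataclass
class HealthProfile:
    healthy_sight_distance_cm: float          # minimum healthy viewing distance
    standard_postures: tuple                  # e.g. ("sitting_upright", "lying")


def estimate_sight_distance(face_image) -> float:
    """Placeholder for the pre-trained face recognition model that maps a
    face image to an estimated viewing distance (in centimetres)."""
    raise NotImplementedError


def classify_sitting_posture(half_body_image) -> str:
    """Placeholder for the pre-trained sitting-posture recognition model."""
    raise NotImplementedError


def eyes_open(face_image) -> bool:
    """Placeholder eye-state check used to decide whether to enter an
    energy-saving or privacy-protection state."""
    raise NotImplementedError


def adjust_for_illumination(profile: HealthProfile, lux: float) -> HealthProfile:
    """Adjust the healthy distance threshold with ambient light (claim 4).
    The scaling factor is an arbitrary illustration."""
    factor = 1.2 if lux < 100 else 1.0        # dim light -> require a larger distance
    return HealthProfile(profile.healthy_sight_distance_cm * factor,
                         profile.standard_postures)


def control_display(face_image, half_body_image, lux: float,
                    profile: HealthProfile) -> str:
    """Return the display action for the target terminal."""
    if not eyes_open(face_image):
        return "enter_power_saving_or_privacy_state"

    profile = adjust_for_illumination(profile, lux)
    sight_distance = estimate_sight_distance(face_image)
    posture = classify_sitting_posture(half_body_image)

    too_close = sight_distance < profile.healthy_sight_distance_cm
    bad_posture = posture not in profile.standard_postures
    return "blur_preset_content" if (too_close or bad_posture) else "show_preset_content"
```

In such a sketch, the terminal would invoke control_display for each captured frame of the target person and blur or restore the preset content according to the returned action; the placeholder functions would wrap the pre-trained face recognition and sitting-posture recognition models recited in the claims.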
CN202011631713.XA 2020-12-30 2020-12-30 Content display method and device, electronic equipment and readable storage medium Active CN112733673B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011631713.XA CN112733673B (en) 2020-12-30 2020-12-30 Content display method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011631713.XA CN112733673B (en) 2020-12-30 2020-12-30 Content display method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN112733673A CN112733673A (en) 2021-04-30
CN112733673B true CN112733673B (en) 2024-01-30

Family

ID=75608347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011631713.XA Active CN112733673B (en) 2020-12-30 2020-12-30 Content display method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112733673B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113824832B (en) * 2021-09-22 2023-05-26 维沃移动通信有限公司 Prompting method, prompting device, electronic equipment and storage medium
CN115798401B (en) * 2023-02-09 2023-04-11 深圳市宏普欣电子科技有限公司 Intelligent mini-LED regulation and control method based on Internet of things

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103268023A (en) * 2013-06-03 2013-08-28 戴明华 Myopia prevention and treatment glasses for controlling head position and sitting posture
CN104052871A (en) * 2014-05-27 2014-09-17 上海电力学院 Eye protecting device and method for mobile terminal
CN108419128A (en) * 2018-03-12 2018-08-17 深圳市赛亿科技开发有限公司 The method and device of personage's pose adjustment
CN108921125A (en) * 2018-07-18 2018-11-30 广东小天才科技有限公司 A kind of sitting posture detecting method and wearable device
CN109977727A (en) * 2017-12-27 2019-07-05 广东欧珀移动通信有限公司 Sight protectio method, apparatus, storage medium and mobile terminal
CN210038369U (en) * 2019-04-23 2020-02-07 王流俊 Myopia prevention glasses
CN111265220A (en) * 2020-01-21 2020-06-12 王力安防科技股份有限公司 Myopia early warning method, device and equipment
CN111862555A (en) * 2019-04-30 2020-10-30 北京安云世纪科技有限公司 Sitting posture correction control method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN112733673A (en) 2021-04-30

Similar Documents

Publication Publication Date Title
CN108491775B (en) Image correction method and mobile terminal
CN110969981B (en) Screen display parameter adjusting method and electronic equipment
CN109381165B (en) Skin detection method and mobile terminal
CN108989672B (en) Shooting method and mobile terminal
CN110970003A (en) Screen brightness adjusting method and device, electronic equipment and storage medium
CN108683850B (en) Shooting prompting method and mobile terminal
CN111031253B (en) Shooting method and electronic equipment
CN107730460B (en) Image processing method and mobile terminal
CN108762877B (en) Control method of mobile terminal interface and mobile terminal
CN111031234B (en) Image processing method and electronic equipment
CN109819166B (en) Image processing method and electronic equipment
CN112733673B (en) Content display method and device, electronic equipment and readable storage medium
CN108174110B (en) Photographing method and flexible screen terminal
CN111008929B (en) Image correction method and electronic equipment
CN111240567B (en) Display screen angle adjusting method and electronic equipment
CN110636225B (en) Photographing method and electronic equipment
CN109639981B (en) Image shooting method and mobile terminal
CN109727212B (en) Image processing method and mobile terminal
CN110519443B (en) Screen lightening method and mobile terminal
CN108259756B (en) Image shooting method and mobile terminal
CN107798662B (en) Image processing method and mobile terminal
CN107729100B (en) Interface display control method and mobile terminal
CN110443752B (en) Image processing method and mobile terminal
CN109819331B (en) Video call method, device and mobile terminal
CN111402271A (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant