CN117132428A - Eye protection method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117132428A
Authority
CN
China
Prior art keywords
scene
user
distance
learning
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210557786.1A
Other languages
Chinese (zh)
Inventor
王嘉浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd, Guangzhou Shirui Electronics Co Ltd filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN202210557786.1A priority Critical patent/CN117132428A/en
Priority to PCT/CN2023/093780 priority patent/WO2023221884A1/en
Publication of CN117132428A publication Critical patent/CN117132428A/en
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44505Configuring for program initiating, e.g. using registry, configuration files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Human Computer Interaction (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an eye protection method, device, equipment and storage medium, wherein the method comprises the following steps: determining a learning scene, in which a user in a place learns facing an application displayed by an intelligent learning device; synchronously collecting scene data related to the user's eyes from the place, the intelligent learning device, and the user; correcting the scene data under the constraint of the learning scene; if the correction is completed, formulating eye protection measures suited to the learning scene according to the scene data; and executing the eye protection measures in the intelligent learning device to protect the user's eyes in the learning scene. By detecting comprehensive scene data for the three main elements of the learning scene and correcting those data, this embodiment can eliminate deviations among the scene data, unify them within the same learning scene, meet the eye protection requirements of the learning scene more specifically, improve the eye protection effect, and effectively protect the user's eye health.

Description

Eye protection method, device, equipment and storage medium
Technical Field
The present invention relates to the field of computer processing, and in particular, to an eye protection method, apparatus, device, and storage medium.
Background
With the popularization of electronic devices such as tablet computers and learning machines, users increasingly use electronic devices to assist their learning and improve learning efficiency.
However, using electronic devices for long periods can impair the user's eyesight, easily causing eye fatigue and even myopia.
Currently, some electronic devices provide eye protection functions, but these are mostly set according to a single condition, and the eye protection effect is poor.
Disclosure of Invention
The present invention provides an eye protection method, device, equipment, and storage medium, so as to improve the eye protection effect.
According to an aspect of the present invention, there is provided an eye protection method applied to an intelligent learning device, the method including:
determining a learning scene, wherein in the learning scene a user in a place learns facing an application displayed by the intelligent learning device;
synchronously collecting scene data related to eyes of the user from the place, the intelligent learning equipment and the user;
correcting the scene data under the constraint of the learning scene;
if the correction is completed, making eye protection measures applicable to the learning scene according to the scene data;
and executing the eye protection measures in the intelligent learning device to protect the user's eyes in the learning scene.
According to another aspect of the present invention, there is provided an eye protection device applied to an intelligent learning apparatus, the device comprising:
a learning scene determining module, configured to determine a learning scene, wherein in the learning scene a user in a place learns facing an application displayed by the intelligent learning device;
a scene data acquisition module, configured to synchronously collect scene data related to the user's eyes from the place, the intelligent learning device, and the user;
a scene data correction module, configured to correct the scene data under the constraint of the learning scene;
an eye protection measure generating module, configured to formulate eye protection measures suited to the learning scene according to the scene data if the correction is completed;
and an eye protection measure execution module, configured to execute the eye protection measures in the intelligent learning device to protect the user's eyes in the learning scene.
According to another aspect of the present invention, there is provided an intelligent learning apparatus including:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the eye protection method of any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing a computer program for causing a processor to implement the eye protection method according to any one of the embodiments of the present invention when executed.
According to another aspect of the present invention, there is provided a computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the eye protection method according to any of the embodiments of the present invention.
In this embodiment, a learning scene is determined, in which a user in a place learns facing an application displayed by the intelligent learning device; scene data related to the user's eyes are synchronously collected from the place, the intelligent learning device, and the user; the scene data are corrected under the constraint of the learning scene; if the correction is completed, eye protection measures suited to the learning scene are formulated according to the scene data; and the eye protection measures are executed in the intelligent learning device to protect the user's eyes in the learning scene. By detecting comprehensive scene data for the three main elements of the learning scene and correcting those data, this embodiment can eliminate deviations among the scene data, unify them within the same learning scene, meet the eye protection requirements of the learning scene more specifically, improve the eye protection effect, and effectively protect the user's eye health.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of an eye protection method according to a first embodiment of the present invention;
fig. 2 is a schematic structural diagram of an intelligent learning device according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of a learning scenario provided according to a first embodiment of the present invention;
fig. 4 is a schematic structural diagram of an eye protection device according to a second embodiment of the present invention;
fig. 5 is a schematic structural diagram of an intelligent learning device implementing an eye protection method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The acquisition, storage, use, and processing of data in the technical solution of the present invention all comply with the relevant provisions of national laws and regulations.
Example 1
Fig. 1 is a flowchart of an eye protection method according to an embodiment of the present invention. This embodiment is applicable to cases where eye protection measures are implemented based on an entire scene. The method may be performed by an eye protection device, which may be implemented in the form of hardware and/or software and configured in an intelligent learning device.
In general, the intelligent learning device may be a device customized for learning, also called a learning machine (e.g., intelligent learning machine, child learning machine, net class learning machine), home teaching machine, student tablet, child learning tablet, etc.
The customization includes hardware customization and/or software customization.
As shown in fig. 2, where the body of the intelligent learning device is a screen 201, the hardware customization may include, but is not limited to: a stand 202 arranged on the back of the intelligent learning device, which supports the device and makes it convenient to tilt it at different angles and place it on a desktop; a high-definition camera 203 (also called the main camera) arranged at the top of the intelligent learning device, which may be external to the body (i.e., not embedded inside it) and rotatable, conveniently supporting various services; a large speaker 204 arranged at the bottom of the intelligent learning device to improve audio; and multiple physical buttons (such as a volume key 205 and a power key 206) arranged at the bottom of the intelligent learning device for convenient operation; and the like.
Further, the software customization may include, but is not limited to: subject resources, picture-book accompanied reading, mental-arithmetic correction, exercise books, English word lookup, spoken-English evaluation, reading of famous works, parental controls, the eye protection service, online classes, and the like.
Of course, the intelligent learning device may also be a non-customized device, such as a mobile phone, tablet computer, personal computer, or notebook computer, on whose operating system software providing the eye protection service is installed, which this embodiment does not limit.
As shown in fig. 1, the method includes:
step 101, determining a learning scene.
In practical applications, the operating system of the intelligent learning device may be Android and its customized derivatives, iOS, Windows, HarmonyOS (Hongmeng), and the like. Various applications for assisting learning may be pre-installed in the operating system, and the user may install other learning applications according to his or her own learning needs.
When the applications are started, a User Interface (UI) of the applications may be displayed on a screen of the intelligent learning device, where the screen may be a non-touch screen, or may be a touch screen such as a capacitive screen, a resistive screen, or the like.
As shown in fig. 3, in a living room, bedroom, etc., a user 301 places an intelligent learning device 303 on the surface of an object such as a desk or dining table (i.e., a desktop 302) and learns facing the screen; in some cases, the user places paper learning materials (such as books and test papers) 304 in front of the intelligent learning device 303 and learns with the application and the materials together.
The user usually studies alone, i.e., self-study without supervision. Since users may have different identities, such as students or working professionals, they may be studying for school education at various stages, for vocational education, and so on, which this embodiment does not limit.
In this embodiment, the intelligent learning device provides the eye protection service, and the user may set its configuration information, for example whether it is enabled, the service period, and the color of the paper-imitating layer, on pages such as the configuration interface of the operating system.
The eye protection service may use various data, and both the data and the process of detecting them may involve the user's private information. Therefore, before the eye protection service is used for the first time, the intelligent learning device may inform the user, through a service agreement, risk prompts, and the like, that the eye protection service will use various data in a limited manner; the device uses the data only after the user, having understood the purposes and risks involved, permits and authorizes such use.
When the user starts an application for assisting learning, the intelligent learning device detects whether the eye protection service is enabled; if so, the current learning scene can be confirmed, wherein in the learning scene the user in a place learns facing the application displayed by the intelligent learning device.
Step 102, synchronously collecting scene data related to the user's eyes from the place, the intelligent learning device, and the user.
The three elements in the learning scene are the place, the intelligent learning device, and the user. The three are unified and influence one another rather than being isolated: the screen of the intelligent learning device displays the user interface of an application, the user sits or stands in front of the screen, and the user's eyes browse the screen through the environment of the place. The place, the intelligent learning device, and the user constrain one another, and all of them affect the user's eyes.
In the learning scene, the place, the intelligent learning device, and the user are synchronously and continuously perceived, so that data related to the user's eyes are collected and recorded as scene data, where the scene data are features related to the user's eyes in different dimensions of the learning scene.
At least one item of scene data related to the user's eyes may be detected for the place, at least one for the intelligent learning device, and at least one for the user.
Synchronization means that scene data related to the user's eyes are collected from the place, the intelligent learning device, and the user at the same point in time. The timestamps of corresponding frames of scene data for the place, the intelligent learning device, and the user are not necessarily identical, but the difference between them is small (below a threshold) and can be ignored.
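As an illustration, this synchronization rule can be reduced to a small timestamp check. The following Java sketch is not taken from the patent; the threshold value is an assumption, since the text only requires the timestamp difference to be smaller than a threshold:

    // Frames from the place, the intelligent learning device, and the user are
    // treated as one synchronized sample when their timestamps differ by less
    // than a threshold.
    public final class SyncChecker {
        // Hypothetical threshold; the text only says "smaller than a threshold".
        private static final long MAX_SKEW_NANOS = 20_000_000L; // 20 ms

        /** Returns true if all frame timestamps fall within the allowed skew. */
        public static boolean isSynchronized(long... timestampsNanos) {
            if (timestampsNanos.length < 2) return true;
            long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
            for (long t : timestampsNanos) {
                min = Math.min(min, t);
                max = Math.max(max, t);
            }
            return (max - min) < MAX_SKEW_NANOS;
        }
    }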
In one embodiment of the present invention, step 102 may include the steps of:
step 1021, create a buffer for the sensor in the intelligent learning device.
The intelligent learning device is provided with a plurality of sensors that can perceive both the state of the device and the environment around it. In this embodiment, the sensors may be invoked synchronously to perceive the place, the intelligent learning device, and the user, thereby collecting scene data related to the user's eyes.
In a specific implementation, as shown in FIG. 2, the sensors include, but are not limited to, the following:
1. attitude sensor 207
The attitude sensor 207 may detect the attitude of an object using angular velocity, acceleration, and the like, and may include an IMU (Inertial Measurement Unit), an acceleration sensor, a gravity sensor, etc.
In the present embodiment, the posture sensor 207 may be provided inside the smart learning device for detecting the posture of the smart learning device.
2. Distance sensor 208
The distance sensor 208 may detect a distance to an object using Time of Flight (TOF), and may include an ultrasonic ranging sensor, a laser ranging sensor, an infrared ranging sensor, and so on.
In the present embodiment, the orientation of the distance sensor 208 is the same as the orientation of the screen of the smart learning device, and the distance sensor 208 is illustratively disposed above the screen of the smart learning device for detecting the distance between the smart learning device and the user.
3. Light sensor 209
The light sensor 209 is mainly composed of a photosensitive element, and can sense the condition of surrounding light.
In this embodiment, the orientation of the light sensor 209 is the same as the orientation of the screen of the intelligent learning device, and the light sensor 209 is exemplarily disposed above the screen of the intelligent learning device for detecting the light in the place where the intelligent learning device is located.
4. Camera 210
The camera 210 may be the main camera 203 of the intelligent learning device, or may be a sub-camera independent of the main camera, where the pixels are lower than those of the main camera 203, and are not rotatable, such as a tele camera, a wide camera, and the like.
When the camera 210 is a sub-camera, the camera 210 is oriented in the same direction as the screen of the intelligent learning device, and the camera 210 is illustratively disposed above the screen of the intelligent learning device for capturing image data for the user.
Further, each of the above sensors may be a stand-alone sensor or a combined sensor; for example, a camera may be integrated with a light sensor, or a distance sensor may be integrated into a TOF camera, i.e., a depth-sensing camera that detects the distance to the photographed object and records depth information, which this embodiment does not limit.
Of course, the above-described sensor is merely an example, and other sensors may be provided according to actual situations when the present embodiment is implemented, which is not limited thereto. In addition, in addition to the above-mentioned sensors, those skilled in the art may use other sensors according to actual needs, which are not limited in this embodiment.
In this embodiment, a buffer area may be created for each sensor in the intelligent learning device, and the replacement of the buffer area may be dynamically configured by the attribute of the operating system, which defaults to an empirical value through experiments such as a stabilization test.
Illustratively, the sensors in the intelligent learning device include a gesture sensor, a distance sensor, a light sensor, and a camera.
Then, in this example, a buffer may be created for the gesture sensor as a first buffer, a buffer for the distance sensor as a second buffer, a buffer for the light sensor as a third buffer, and a buffer for the camera as a fourth buffer.
When the sensors are independent sensors, the buffers are independent buffers, and when the sensors are combined sensors, the buffers are combined buffers.
Step 1022, synchronously invoking the sensors to collect scene data related to the user's eyes from the place, the intelligent learning device, and the user.
In a specific implementation, the APIs (Application Programming Interfaces) provided by the respective sensors may be used to invoke them synchronously to collect scene data related to the user's eyes from the place, the intelligent learning device, and the user.
For example, the gesture sensor may be continuously invoked using an API provided by the gesture sensor to detect the gesture of the intelligent learning device as the scene data.
Typically, the attitude may include pitch angle, yaw angle, roll angle, etc.
As shown in fig. 3, in the learning scene the user generally places the intelligent learning device 303 on a flat desktop 302 such as a desk or dining table. Once placed, the intelligent learning device 303 is rarely moved and its attitude is relatively stable, so the included angle θ between the plane of its screen and the horizontal plane can be used as the attitude of the intelligent learning device 303.
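For illustration, the included angle θ can be derived from a gravity-sensor reading. The following sketch is one assumed way of computing it (the patent does not prescribe a formula); it uses the standard Android axis convention in which the z-axis points out of the screen:

    public final class TiltEstimator {
        /** @param g gravity vector {x, y, z} from Sensor.TYPE_GRAVITY, in m/s^2 */
        public static double screenTiltDegrees(float[] g) {
            double norm = Math.sqrt(g[0] * g[0] + g[1] * g[1] + g[2] * g[2]);
            // The angle between the screen normal (device z-axis) and the vertical
            // equals the angle θ between the screen plane and the horizontal plane.
            return Math.toDegrees(Math.acos(g[2] / norm));
        }
    }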
The distance sensor may be continuously invoked using an API provided by the distance sensor to detect a first distance between the user and the intelligent learning device as the scene data.
A distance sensor generally does not actively detect whether a user is present, track the user, and then measure the distance between the user and the intelligent learning device; it simply detects the distance between the intelligent learning device and whatever obstacle lies in front of it. In the learning scene, the user is by default located in front of the screen of the intelligent learning device, i.e., the default obstacle is the user, so the distance detected by default is the distance between the user and the intelligent learning device.
In general, mainstream screens such as CRT (Cathode Ray Tube), LCD (Liquid Crystal Display), LED (Light Emitting Diode), and OLED (Organic Light-Emitting Diode) screens all flicker to different degrees. When the user's eyes perceive the alternation of light and dark, the pupils adapt accordingly, so frequent flicker easily causes eye fatigue.
Screens such as LCD, LED, and OLED may provide dimming modes such as PWM (Pulse Width Modulation) dimming and non-PWM dimming. PWM dimming controls brightness over time: at one hundred percent brightness the screen is fully on, while at fifty percent brightness it is on half the time and off half the time, so many people who are sensitive to light may experience sore eyes and fatigue. Among non-PWM methods, DC dimming changes the screen brightness by increasing or decreasing the power of the screen's panel circuit, but may suffer from inaccurate color at low brightness.
Even when a screen provides such dimming modes, flicker can only be mitigated, not eliminated. Moreover, the light emitted by the screen contains high-energy short-wave blue light; according to related research, blue light can increase the amount of toxins in the macular area of the eye (the part of the retina where central visual cells are most concentrated) and cause visual damage. The first distance between the user and the intelligent learning device (especially its screen) is therefore important for preventing eye injury.
Using the API provided by the light sensor, the light sensor may be continuously invoked to detect the first brightness of the light and the first color temperature of the light in the place as scene data.
Furthermore, the place may be open, in which case its light includes sunlight, or enclosed, in which case its light comes from lamps. The first brightness and the first color temperature vary considerably with different sunlight and/or lamplight, so their influence on the user's eyes is relatively obvious.
Using the API provided by the camera, the camera may be continuously invoked to collect video data from the place as scene data.
For a camera, whether a user is present is generally not actively detected, nor is the user tracked while video data are collected; the camera simply captures the environment within its visual range.
Of course, when other sensors are selected according to the actual situation, other scene data may be acquired during implementation of the embodiment, which is not limited in this embodiment.
Step 1023, writing the scene data into the buffer areas respectively.
Each sensor continuously generates scene data, which are written into the buffer created for that sensor in chronological order of generation, so that serialized scene data are formed.
Exemplary sensors provided in the intelligent learning device include an attitude sensor, a distance sensor, a light sensor, and a camera.
The gesture sensor continuously detects the gesture of the intelligent learning device, and then the gesture is written into a first buffer created for the gesture sensor.
The distance sensor continues to detect a first distance between the user and the intelligent learning device, and then the first distance is written into a second buffer created for the distance sensor.
The light sensor continuously detects a first brightness of light and a first color temperature of light within the venue, and then the first brightness and the first color temperature are written into a third buffer created for the light sensor.
The camera continues to collect video data to the venue, and then the video data is written into a fourth buffer created for the camera.
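A minimal sketch of such a per-sensor buffer follows. The class name, capacity, and eviction policy are assumptions; the text only requires that frames be appended in chronological order of generation, with a configurable size defaulting to an empirical value:

    import java.util.ArrayDeque;

    public final class SceneDataBuffer<T> {
        /** One frame of scene data with its generation timestamp. */
        public static final class Frame<V> {
            public final long timestampNanos;
            public final V value;
            Frame(long timestampNanos, V value) {
                this.timestampNanos = timestampNanos;
                this.value = value;
            }
        }

        private final ArrayDeque<Frame<T>> frames = new ArrayDeque<>();
        private final int capacity;

        public SceneDataBuffer(int capacity) { this.capacity = capacity; }

        /** Appends a frame in order of generation, evicting the oldest when full. */
        public synchronized void write(long timestampNanos, T value) {
            if (frames.size() == capacity) frames.pollFirst();
            frames.addLast(new Frame<>(timestampNanos, value));
        }
    }

The first buffer for the gesture sensor could then be, for example, new SceneDataBuffer<float[]>(128), and similarly for the second, third, and fourth buffers.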
In another embodiment of the present invention, step 102 may include the steps of:
step 1024, initializing a plurality of sensors in the intelligent learning device, respectively.
In this embodiment, each sensor in the intelligent learning device may be initialized separately through the operating system.
In one example, the flow of initializing the sensor is as follows:
1. The sensor manager object SensorManager sensorMgr is acquired.
In a specific implementation, the sensor manager object may be obtained by calling (SensorManager) this.getSystemService(Context.SENSOR_SERVICE).
2. An object of the sensor is acquired.
Taking the gravity sensor as an example, the method sensorMgr.getDefaultSensor(Sensor.TYPE_GRAVITY) provided by the operating system may be called to acquire the gravity sensor object.
3. A listener is created and a listening method of the sensor is implemented.
The sensor's data are stored in the values array and can be accessed through the SensorEvent object. There are two listening methods: one, onSensorChanged, is triggered when the sensor values change; the other, onAccuracyChanged, is triggered after the accuracy changes.
Taking the gravity sensor as an example, a listener SensorEventListener lsn is created to listen for value-change events, where the x, y, and z components are values[SensorManager.DATA_X], values[SensorManager.DATA_Y], and values[SensorManager.DATA_Z] (i.e., values[0], values[1], and values[2]).
4. The sensor listener is registered.
The method sensorMgr.registerListener(lsn, sensor, SensorManager.SENSOR_DELAY_UI) provided by the operating system may be called to register the sensor listener.
It takes three parameters: the first is the listener object and the second is the sensor object, both of which have already been acquired; the last one sets the sampling rate:
SENSOR_DELAY_FASTEST: the fastest, very sensitive;
SENSOR_DELAY_GAME: suitable for games;
SENSOR_DELAY_UI: slower, suitable for user-interface updates;
SENSOR_DELAY_NORMAL: the slowest, and the default.
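Putting the four steps together, a consolidated sketch using the standard Android sensor API (with the gravity sensor, as in the example above) might look as follows; the class name and the processing inside the callbacks are assumptions:

    import android.app.Activity;
    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class GravitySensorSetup extends Activity {
        private SensorManager sensorMgr;
        private SensorEventListener lsn;

        void initGravitySensor() {
            // 1. Acquire the sensor manager object.
            sensorMgr = (SensorManager) this.getSystemService(Context.SENSOR_SERVICE);
            // 2. Acquire the sensor object.
            Sensor sensor = sensorMgr.getDefaultSensor(Sensor.TYPE_GRAVITY);
            // 3. Create a listener and implement the two listening methods.
            lsn = new SensorEventListener() {
                @Override public void onSensorChanged(SensorEvent e) {
                    float x = e.values[SensorManager.DATA_X]; // values[0]
                    float y = e.values[SensorManager.DATA_Y]; // values[1]
                    float z = e.values[SensorManager.DATA_Z]; // values[2]
                    // ... hand (x, y, z) to the attitude pipeline ...
                }
                @Override public void onAccuracyChanged(Sensor s, int accuracy) {
                    // Triggered after the accuracy changes.
                }
            };
            // 4. Register the sensor listener at UI rate.
            sensorMgr.registerListener(lsn, sensor, SensorManager.SENSOR_DELAY_UI);
        }
    }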
Step 1025, detecting the states of the sensors respectively.
Each sensor registers a callback function in its driver; the state of each sensor is obtained by calling its callback function, thereby detecting whether the sensor is abnormal.
Step 1026, if initializing at least one sensor fails or the status of at least one sensor is abnormal, executing a first alarm operation.
If at least one sensor fails to initialize, or the state of at least one sensor is detected to be abnormal (i.e., the sensor is damaged and cannot work), a first alarm operation may be executed for that sensor, for example through a notification bar on the screen of the intelligent learning device, prompting the user that a sensor has failed to initialize or is in an abnormal state and that the eye protection service cannot be provided normally, and the eye protection service is then ended.
Step 103, correcting scene data under the constraint of a learning scene; if the correction is completed, step 104 is performed.
In general, the sensors work independently, each collecting scene data related to the user's eyes for the place, the intelligent learning device, and the user on its own. Considering factors such as the sensors' mounting positions, precision, and noise, even when the sensors synchronously collect scene data for the same learning scene, the various scene data may deviate from one another in expressing the same thing. Moreover, sensors are usually configured with generic drivers that detect scene data in a generic way; the drivers are not necessarily tuned to the characteristics of the learning scene, so the focus of the scene data is not necessarily on the action of the user browsing the intelligent learning device.
Therefore, in this embodiment, the scene data may be read from the buffers and corrected with the learning scene as an overall constraint, so that the scene data uniformly express the same thing and their focus falls on the action of the user browsing the intelligent learning device.
Further, after the scene data are read from the buffers, filtering operations such as clipping filtering, median filtering, arithmetic-average filtering, recursive-average filtering, clipping-average filtering, and anti-shake filtering may be performed on them to filter out invalid raw scene data.
If the remaining raw scene data is not null, i.e., at least a portion of the raw scene data is valid, a corrective action may be further performed on the remaining raw scene data.
If the remaining raw scene data are empty, i.e., all the raw scene data are invalid, a second alarm operation is executed for the sensors, for example through a notification bar on the screen of the intelligent learning device, prompting the user that the sensors are abnormal and the eye protection service cannot be provided normally, and the eye protection service is then ended.
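As an illustration of one of the filtering operations named above, a median filter over a sliding window of raw readings can be sketched as follows (the window handling is an assumption; the patent names the filter but does not give an implementation):

    import java.util.Arrays;

    public final class MedianFilter {
        /** Returns the median of a window of raw scene-data readings. */
        public static float median(float[] window) {
            float[] sorted = window.clone();
            Arrays.sort(sorted);
            int n = sorted.length;
            return (n % 2 == 1)
                    ? sorted[n / 2]
                    : (sorted[n / 2 - 1] + sorted[n / 2]) / 2f;
        }
    }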
In one embodiment of the present invention, the scene data include the attitude of the intelligent learning device, and in this embodiment step 103 may include the following steps:
step 10311, for each reference data, searching a gradient factor curve configured for the reference data.
As shown in fig. 3, some sensors are fixedly arranged in the intelligent learning device and immovable, generally collect scene data δ within a certain range, mainly collect scene data α in front of the intelligent learning device, and a user in many cases adjusts a back support or places an object on the back so as to place the intelligent learning device on a desk, a dining table or the like in an inclined manner, and the user browses the intelligent learning device in a horizontal direction, in which case, the sensors also incline along with the inclination of the intelligent learning device, and the collected scene data is mainly concentrated in information of the inclined upper part α, not in the horizontal direction β.
Therefore, in this embodiment, a portion of the sensors having a constraint relationship with the posture of the intelligent learning device may be selected, the collected scene data may be recorded as reference data, that is, the reference data may be other scene data having a constraint relationship with the posture, experiments may be performed on the intelligent learning device in advance, the intelligent learning device may be put into various postures, so that under various postures, the scene data directly in front of the sensor and the scene data in the horizontal direction may be detected, the conversion relationship between the scene data directly in front of the sensor and the scene data in the horizontal direction may be calculated, and the conversion relationship may be recorded as an adjustment factor, and the adjustment factors under the respective postures may form a gradient factor curve, that is, the gradient factor curve may be used to represent the mapping relationship between the posture and the reference data, and the mapping relationship may be a linear relationship or a nonlinear relationship.
The gradient factor curves are stored in the positions of nonvolatile memory partitions and the like in the intelligent learning equipment, and when the intelligent learning equipment provides eye protection service, the gradient factor curves configured for each reference data are searched for in the positions of nonvolatile memory partitions and the like according to each reference data.
In one example, the sensors provided in the intelligent learning device include a distance sensor and a light sensor, both fixed in the device and immovable, whose readings have a constraint relationship with the attitude of the intelligent learning device; accordingly, the reference data include the first distance between the user and the intelligent learning device, and the first brightness and first color temperature of the light in the place.
In this example, a gradient factor curve may be set in advance for the first distance detected by the distance sensor, denoted the first gradient factor curve, which contains the adjustment factors for correcting the first distance under different attitudes, denoted distance factors.
At this time, for the first distance, the gradient factor curve configured for the first distance is looked up as the first gradient factor curve.
A gradient factor curve may likewise be set in advance for the first brightness of the light detected by the light sensor, denoted the second gradient factor curve, which contains the adjustment factors for correcting the first brightness under different attitudes, denoted brightness factors.
At this time, for the first brightness, the gradient factor curve configured for the first brightness is looked up as the second gradient factor curve.
A gradient factor curve may also be set in advance for the first color temperature of the light detected by the light sensor, denoted the third gradient factor curve, which contains the adjustment factors for correcting the first color temperature under different attitudes, denoted color temperature factors.
At this time, for the first color temperature, the gradient factor curve configured for the first color temperature is looked up as the third gradient factor curve.
Of course, the above reference data and the gradient factor curves thereof are merely examples, and other reference data and gradient factor curves thereof may be set according to actual situations when the present embodiment is implemented, which is not limited thereto. In addition, in addition to the above-mentioned reference data and the gradient factor curves thereof, those skilled in the art may also use other reference data and gradient factor curves thereof according to actual needs, which is not limited in this embodiment.
Step 10312, searching the gradient factor curve for the adjustment factor mapped by the attitude.
For a given sensor, the current attitude is substituted into the gradient factor curve corresponding to that sensor, thereby obtaining the adjustment factor mapped by the attitude.
In one example, the gradient factor curves include a first gradient factor curve set for the distance sensor, a second gradient factor curve set for the light sensor, and a third gradient factor curve.
In this example, for the first distance between the user and the intelligent learning device, the attitude is substituted into the first gradient factor curve, and the adjustment factor mapped by the attitude is found in the first gradient factor curve as the distance factor.
For the first brightness of the light in the place, the attitude is substituted into the second gradient factor curve, and the adjustment factor mapped by the attitude is found in the second gradient factor curve as the brightness factor.
For the first color temperature of the light in the place, the attitude is substituted into the third gradient factor curve, and the adjustment factor mapped by the attitude is found in the third gradient factor curve as the color temperature factor.
Step 10313, adjusting the reference data according to the adjustment factor.
After the adjustment factor under the current attitude of the intelligent learning device is determined, the reference data can be corrected by means such as weight adjustment or function mapping, thereby obtaining new reference data.
In one example, the adjustment factors include a distance factor for correcting the first distance between the user and the intelligent learning device, a brightness factor for correcting the first brightness of the light in the place, and a color temperature factor for correcting the first color temperature of the light in the place. In this example, the first distance may be multiplied by the distance factor to obtain a new first distance, the first brightness multiplied by the brightness factor to obtain a new first brightness, and the first color temperature multiplied by the color temperature factor to obtain a new first color temperature.
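A sketch of steps 10311 to 10313 follows. It assumes the gradient factor curve is stored as calibrated (attitude, factor) sample points with linear interpolation between them; the patent allows the mapping to be linear or nonlinear, so this is only one possible representation:

    import java.util.Map;
    import java.util.TreeMap;

    public final class GradientFactorCurve {
        // Calibrated samples: attitude angle (degrees) -> adjustment factor.
        private final TreeMap<Double, Double> samples = new TreeMap<>();

        public void addSample(double attitudeDegrees, double factor) {
            samples.put(attitudeDegrees, factor);
        }

        /** Looks up the adjustment factor mapped by the current attitude. */
        public double factorFor(double attitudeDegrees) {
            Map.Entry<Double, Double> lo = samples.floorEntry(attitudeDegrees);
            Map.Entry<Double, Double> hi = samples.ceilingEntry(attitudeDegrees);
            if (lo == null && hi == null) {
                throw new IllegalStateException("curve has no samples");
            }
            if (lo == null) return hi.getValue();
            if (hi == null || lo.getKey().equals(hi.getKey())) return lo.getValue();
            double t = (attitudeDegrees - lo.getKey()) / (hi.getKey() - lo.getKey());
            return lo.getValue() + t * (hi.getValue() - lo.getValue());
        }
    }

    // Step 10313 then applies the factors multiplicatively, as in the example above:
    //   newDistance   = firstDistance   * distanceCurve.factorFor(theta);
    //   newBrightness = firstBrightness * brightnessCurve.factorFor(theta);
    //   newColorTemp  = firstColorTemp  * colorTempCurve.factorFor(theta);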
In another embodiment of the present invention, the scene data include video data collected from the place, and in this embodiment step 103 may include the following steps:
step 10321, performing an object detection operation in the video data with the person as an object, to obtain a detection result.
In the learning scene, the user learns in the place facing the application displayed by the intelligent learning device, but the user is not necessarily the only person in the place; other people may be present, and the video data collected by the camera may capture them in addition to the user.
For example, a student (the user) places an intelligent learning device on a desk at home (the place) and learns facing the application it displays, while a parent (another person) may pass near the desk while doing housework; the video data collected by the camera may then capture the parent in addition to the student.
In order to filter out the interference of people other than the user, in this embodiment a target detection network based on deep learning may be constructed and trained in advance. Some or all image frames of the video data are input into the target detection network to perform target detection, obtaining detection results; each detection result may indicate whether a person is present and may include the region of the image occupied by the person.
Further, the structure of the target detection network is not limited to a manually designed neural network; it may also be optimized by model quantization methods, searched by NAS (Neural Architecture Search) methods, and the like, which this embodiment does not limit.
In a specific implementation, the target detection network may be one-stage or two-stage.
Two-stage detection completes the target detection operation in two steps. In the first step, a convolutional neural network serves as the backbone of the target detection network, extracting features from the raw image data and performing coarse classification (distinguishing foreground from background) and coarse localization (anchors) on them to obtain candidate regions. In the second step, the candidate regions are classified (i.e., whether a person is present) by the classification network of the target detection network.
Illustratively, two-stage target detection includes R-CNN (Region-based Convolutional Neural Network), Fast R-CNN, Faster R-CNN, R-FCN (Region-based Fully Convolutional Network), and so on.
One-stage detection is end-to-end, meaning that the target detection operation is completed in one step: no candidate regions are searched separately; the image data are input into a single network whose detection results contain both the positions and the categories of the persons.
Illustratively, one-stage target detection includes SSD (Single Shot MultiBox Detector), YOLO (You Only Look Once), and so on.
Generally, two-stage detection has higher accuracy but a slightly lower speed, while one-stage detection is faster but slightly less accurate; a person skilled in the art may choose one-stage or two-stage detection according to factors such as the resources of the intelligent learning device and the real-time requirements of detection, which this embodiment does not limit.
Step 10322, if the detection result includes at least two persons, calculating for each person a total score representing the sitting posture.
If the detection result contains no person, the learning user may be marked as absent.
If the detection result contains exactly one person, that person may be taken by default to be the user who is learning.
If the detection result contains at least two persons, other persons who are not learning are present in addition to the learning user. Considering that the user learning in front of the intelligent learning device usually sits with a relatively correct posture, while the other persons most likely do not, posture analysis may be performed on each person using statistical methods, template-based methods, grammar-based methods, and the like, and a total score representing the sitting posture calculated.
Among them, statistics-based methods mainly include the hidden Markov model (Hidden Markov Model, HMM) and the dynamic Bayesian network (Dynamic Bayesian Network, DBN); grammar-based methods are mainly applications based on image sequences; template-based methods include image recognition techniques based on the fast feature point extraction and description (Oriented FAST and Rotated BRIEF, ORB) algorithm.
In one way of detecting sitting posture, the sitting posture of a person is considered to be reflected mainly in the face and the body.
On the one hand, face key points may be detected for each person and the face modeled according to them; according to the face model, each person is assigned a first sub-score representing the horizontal levelness of the face and a second sub-score representing the vertical uprightness of the face.
On the other hand, skeletal joint points may be detected for each person and the body modeled according to them; according to the body model, each person is assigned a third sub-score representing the horizontal levelness of the body and a fourth sub-score representing the vertical uprightness of the body.
For each person, the sum of the first, second, third, and fourth sub-scores is calculated as the total score representing the sitting posture.
In this way, detecting the sitting posture based on a person's face and body ensures the accuracy of the sitting posture while reducing the amount and duration of computation, thereby preserving the real-time performance of the eye protection service.
Step 10323, marking the person with the highest total score as the user.
The total scores of the persons are compared. The total score is positively correlated with how correct a person's sitting posture is: the higher the total score, the more correct the sitting posture, and the lower the total score, the less correct. If a certain person has the highest total score, that person's sitting posture is the most correct among all the persons, and he or she can be marked as the user who is learning.
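Steps 10322 and 10323 can be sketched as follows; the Person fields are hypothetical stand-ins for the face and body models described above:

    import java.util.List;

    public final class UserSelector {
        public static final class Person {
            public double faceHorizontalScore;  // first sub-score
            public double faceVerticalScore;    // second sub-score
            public double bodyHorizontalScore;  // third sub-score
            public double bodyVerticalScore;    // fourth sub-score

            /** Total score representing the sitting posture. */
            public double totalSittingScore() {
                return faceHorizontalScore + faceVerticalScore
                     + bodyHorizontalScore + bodyVerticalScore;
            }
        }

        /** Returns the person with the highest total score, or null if none. */
        public static Person markUser(List<Person> detected) {
            Person user = null;
            for (Person p : detected) {
                if (user == null || p.totalSittingScore() > user.totalSittingScore()) {
                    user = p;
                }
            }
            return user;
        }
    }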
In one embodiment of the invention, the scene data include video data collected from the place and a first distance between the user and the intelligent learning device; in this embodiment, step 103 may include the following steps:
step 10331, calculating a second distance between the intelligent learning device and the face of the user based on the video data.
For the camera, the intrinsic and extrinsic parameters can be calibrated in advance, and parameters such as the focal length can be read while the video data are collected. Multiple frames of the video data contain images of the user, so the user's face can be located by performing face detection in the video data, and the distance to the user's face can then be measured by means such as stereo vision, motion ranging, or monocular ranging; the measured distance is recorded as the second distance between the intelligent learning device and the user's face.
Stereo vision is a method of stereo perception and analysis that imitates human vision: binocular or multi-view cameras observe the same scene (i.e., the user's face) from different viewpoints to acquire two-dimensional images taken from those viewpoints. The three-dimensional information of the scene is obtained by computing the positional deviation between image pixels, i.e., the parallax, according to the principle of triangulation. Stereo vision comprises image acquisition, camera calibration, feature extraction, stereo matching, depth determination, interpolation, and the like.
The motion ranging method uses a monocular camera to acquire successive two-dimensional images of the target (i.e., the user's face) at different times or different spatial positions, and calculates the distance and other parameters of the target from its temporal and spatial variations across the image sequence.
Motion ranging requires finding corresponding points between different images: the corresponding features of the target (i.e., the user's face) are found, and the distance and size of the target are calculated from the deviation between them.
The image-processing-based ranging methods in monocular ranging include focus ranging and defocus ranging.
Focus ranging (Depth from Focus, DFF) captures a series of images of the target (i.e., the user's face) by adjusting the optical parameters, finds the image in which the target is sharpest, and calculates the distance of the target from the parameters of that image using the imaging principles of geometrical optics.
Defocus ranging (Depth from Defocus, DFD) obtains the depth information (i.e., the distance) of the target (i.e., the user's face) from out-of-focus images. Based on the principle that the image becomes more blurred the further the target is out of focus, two or three images taken under different optical parameters are used to determine the spread parameter of the defocus point-spread function, and the depth is calculated from the relationship between the defocus spread parameter and the object distance.
Furthermore, since the video data contain multiple frames of image data, the second distance between the intelligent learning device and the user's face may be measured on key frames, on a subset of frames selected by fixed frame skipping, or on every frame.
If multiple second distances are measured for the video data, a statistically significant value may be computed from them, for example by averaging or weighted summation, and taken as the final second distance.
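As one concrete monocular example (an assumption; the patent surveys several ranging methods without fixing one), the pinhole camera model gives the second distance from the detected face width by similar triangles:

    public final class FaceDistanceEstimator {
        /**
         * @param focalLengthPx  camera focal length in pixels (from calibration)
         * @param realFaceWidthM assumed physical face width, e.g. 0.15 m
         * @param faceWidthPx    detected face bounding-box width in pixels
         * @return estimated second distance in meters
         */
        public static double estimate(double focalLengthPx,
                                      double realFaceWidthM,
                                      double faceWidthPx) {
            // Similar triangles: distance / realWidth = focalLength / pixelWidth.
            return focalLengthPx * realFaceWidthM / faceWidthPx;
        }
    }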
Step 10332, fusing the first distance and the second distance into a new first distance.
In practical applications, the user generally sits some distance in front of the intelligent learning device, while the effective detection range of the distance sensor is rather short, e.g., 50 cm, and the user may move during learning, so the distance sensor may measure the first distance to different parts of the user's body. The first distance detected between the intelligent learning device and the user may therefore fluctuate considerably, which affects its accuracy.
In this embodiment, the first distance and the second distance may be fused linearly or nonlinearly, and the value after fusion is the new first distance, so as to improve the accuracy of the first distance.
Taking linear fusion as an example, the actual distance between the intelligent learning device and the user can be marked in advance through an experimental mode, a distance sensor is called to detect a first distance between the intelligent learning device and the user, a camera is called to collect video data, a second distance between the intelligent learning device and the face of the user is calculated in the video data, a first correlation between the actual distance and the first distance is calculated, a second correlation between the actual distance and the second distance is calculated, a first coefficient is configured for the first distance according to the first correlation, a second coefficient is configured for the second distance according to the second correlation, and the first coefficient and the second coefficient can be stored in a nonvolatile memory and the like of the intelligent learning device.
Then, when linear fusion is performed, the preset first coefficient and second coefficient can be looked up in the non-volatile memory or other storage of the intelligent learning device. The product of the first distance and the first coefficient is calculated as the first weight-adjusted distance, and the product of the second distance and the second coefficient is calculated as the second weight-adjusted distance; the sum of the first weight-adjusted distance and the second weight-adjusted distance is then taken as the new first distance.
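A minimal sketch of this linear fusion follows; the coefficient values are hypothetical stand-ins for the experimentally calibrated coefficients read from non-volatile memory.

```python
# Sketch only: linear fusion of the sensor-measured first distance and the
# vision-measured second distance into a new first distance.
FIRST_COEFF = 0.4   # hypothetical first coefficient (for the first distance)
SECOND_COEFF = 0.6  # hypothetical second coefficient (for the second distance)

def fuse_distances(first_distance: float, second_distance: float) -> float:
    first_weighted = first_distance * FIRST_COEFF     # first weight-adjusted distance
    second_weighted = second_distance * SECOND_COEFF  # second weight-adjusted distance
    return first_weighted + second_weighted           # new first distance
```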
If no user is detected in the video data, the user's face cannot be detected either; in that case the first distance acquired at the same time point may be marked invalid and discarded.
Step 104, formulating eye protection measures suitable for the learning scene according to the scene data.
The corrected scene data express the same learning scene more accurately and consistently, and focus attention on the user's act of viewing the intelligent learning device; they can therefore be used to formulate eye protection measures suitable for the current learning scene.
The eye protection measures are measures aimed at protecting the user's eyes, and may be hardware-level eye protection measures and/or software-level eye protection measures.
By hardware-level eye protection, it is meant eye protection implemented on smart device hardware (e.g., screen, speaker, etc.).
By software-level eye protection, it is meant eye protection implemented on smart device software (e.g., operating system, applications, etc.).
Further, eye protection measures applicable to the learning scene may be formulated from a single item of scene data alone, or from at least two items of scene data in combination, which is not limited in this embodiment.
In one embodiment of the present invention, step 104 may include the steps of:
step 1041, determining the type of the application in the dimension of learning.
In this embodiment, different applications may be divided in advance into multiple types along the dimension of learning, according to the requirements of the eye protection service; that is, each type broadly represents the characteristics of a certain kind of learning.
In one example, the types include an interaction type, a class type, and a reading type.
An interaction-type application is one in which the application and the user interact with each other during learning. For example, one application mainly provides learning services for a certain language, and the user interacts with it about mouth shapes and the strokes of words; another mainly provides drawing services, and the user interacts with it about tools, drawings, and so forth.
A class-type application provides services to the user in the form of a class. For example, an application mainly provides vocational education, in which a teacher delivers classroom teaching on the knowledge points to be learned to multiple users via live or recorded broadcast.
A reading-type application provides the user with readable learning content, mainly text information, possibly supplemented by content such as image data and video data. For example, an application mainly provides reading services, offering users premium material adapted to their grade or age group.
Of course, the above types are merely examples; when implementing this embodiment, those skilled in the art may set other types according to actual needs, and this embodiment is not limited thereto.
For applications that have been divided into types, an identification of each application (such as its package name) can be recorded, a mapping relationship established between the identification and the type, and both recorded in an application classification list.
The application classification list is generally maintained by a server; the intelligent learning device can download it from the server when the eye protection service is used for the first time and continuously update it from the server thereafter.
When the intelligent learning device provides the eye protection service, it can detect the application currently being used by the user, which is generally the application at the topmost layer of the operating system. The identification of the current application is obtained, for example by calling the system package management service (PackageManagerService) and the activity task service (ActivityTaskManagerService) to read the application's package name as its identification. The identification of the current application is then compared with the identifications in the application classification list; if it matches the identification of some application in the list, the type mapped to that identification can be extracted and assigned as the type of the current application.
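The lookup itself reduces to matching the current package name against the classification list, as in the following sketch; the package names are hypothetical, and obtaining the foreground package from the system services named above is left out.

```python
# Sketch only: map the identification (package name) of the current
# application to its learning type via the application classification list.
from typing import Optional

APP_CLASSIFICATION = {  # hypothetical entries downloaded from the server
    "com.example.language": "interaction",
    "com.example.liveclass": "class",
    "com.example.reader": "reading",
}

def current_app_type(current_package: str) -> Optional[str]:
    """Return the mapped type, or None if the package is not in the list."""
    return APP_CLASSIFICATION.get(current_package)
```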
Step 1042, formulating eye protection measures which are suitable for learning the scene and are adapted to the type according to the scene data.
On top of the intelligent learning device, the place, and the user, this embodiment adds the type of the application as one of the factors for generating eye protection measures. Different types of applications reflect, to some extent, the characteristics of the user's learning; on the basis of formulating eye protection measures suitable for the learning scene according to the scene data, the measures are fine-tuned according to these characteristics so that they match the type of the application.
In one example, the scene data includes a first distance between the user and the intelligent learning device, a first brightness of the light within the venue, and a first color temperature of the light.
In this example, the eye protection measure includes a brightness adjustment measure for adjusting the screen to a second brightness; that is, the second brightness is the preset value of the screen brightness.
In the learning scene, the second brightness of the screen is strongly correlated with the first distance, the first brightness, and the first color temperature, which act on it jointly. In general, the second brightness of the screen is positively correlated with all three: the longer the first distance between the user and the intelligent learning device, the higher the first brightness of the light in the place, and the higher the first color temperature of the light, the higher the second brightness of the screen. Conversely, the shorter the first distance, the lower the first brightness, and the lower the first color temperature, the lower the second brightness of the screen.
This strong correlation may be expressed as a linear function. In this example, different types of applications may be placed in advance, through experiments, under different first distances, first brightnesses, and first color temperatures for environment differentiation and stability testing, while the second brightness of the screen is adjusted. A first weight is thereby configured for the first brightness, a second weight for the first color temperature, and a third weight for the first distance; an adaptation relationship between the type and the first, second, and third weights is established and stored in the non-volatile memory or other storage of the intelligent learning device.
Further, the correlation between the first brightness of the light in the place and the second brightness of the screen is generally significantly higher than that between the first color temperature of the light and the second brightness of the screen; therefore, the first weight is usually greater than the second weight and the third weight.
Of course, the first, second, and third weights are adjustable parameters, and the user can adjust them on the configuration page of the operating system according to their own requirements, which is not limited in this embodiment.
When the brightness adjustment measure is generated, the first weight, the second weight, and the third weight adapted to the type can be queried in the non-volatile memory or other storage of the intelligent learning device.
Calculating the product between the first brightness and the first weight to be used as the first weight adjusting brightness; calculating the product between the first color temperature and the second weight as a first weight-adjusting color temperature; and calculating the product between the first distance and the third weight as a third weight adjustment distance.
And mapping the sum value among the first weight-adjusting brightness, the first weight-adjusting color temperature and the third weight-adjusting distance into the second brightness of the screen in the intelligent learning equipment.
The sum of the first weight-adjusted brightness, the first weight-adjusted color temperature, and the third weight-adjusted distance is recorded as the brightness input key source value. A screen brightness curve is generated for the screen through experiments; the brightness input key source value is substituted into this curve, and the second brightness of the screen (i.e., the preset value of the screen brightness) is produced through key-value matching or a similar mechanism, expressed as follows:
<key, value> = <brightness input key source value, preset value of screen brightness>
brightness input key source value = first brightness × first weight + first color temperature × second weight + first distance × third weight
At this time, a brightness adjustment measure may be generated based on the second brightness as an eye protection measure applicable to the learning scene.
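A minimal sketch of this computation follows, with hypothetical per-type weights and an injected screen brightness curve standing in for the experimentally generated one:

```python
# Sketch only: weighted combination of scene data into the brightness input
# key source value, then mapping through the screen brightness curve.
TYPE_WEIGHTS = {  # hypothetical (first, second, third) weights per type
    "interaction": (0.6, 0.2, 0.2),
    "class":       (0.5, 0.2, 0.3),
    "reading":     (0.4, 0.2, 0.4),
}

def second_brightness(app_type, first_brightness, first_color_temp,
                      first_distance, brightness_curve):
    w1, w2, w3 = TYPE_WEIGHTS[app_type]
    key = first_brightness * w1 + first_color_temp * w2 + first_distance * w3
    # brightness_curve maps the key source value to the preset screen brightness
    return brightness_curve(key)
```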
Furthermore, where the types of applications include the interaction type, the class type, and the reading type, then given the same first distance between the user and the intelligent learning device, the same first brightness of the light in the place, and the same first color temperature of the light, the second brightness corresponding to the interaction type is higher than that corresponding to the class type, which in turn is higher than that corresponding to the reading type.
This is because, when an interaction-type application is running, the user may have more limb interaction with the intelligent learning device, so the screen is kept brighter (the second brightness is highest) to make it easier to operate. When a reading-type application is running, to come closer to the effect of a book, the screen brightness is kept softer (the second brightness is lowest). When a class-type application is running, the user has a moderate amount of limb interaction with the intelligent learning device (such as raising a hand to speak) and reading operations (such as doing classroom exercises), so the brightness of the screen lies between the two (the second brightness is in between).
In another example, the scene data includes a first brightness of light within the venue and a first color temperature of the light.
In this example, the eye protection measure includes a color temperature adjustment measure for adjusting the screen to a second color temperature; that is, the second color temperature is the preset value of the screen color temperature.
In the learning scene, the second color temperature of the screen is strongly correlated with the first brightness of the light in the place and the first color temperature of that light; the two act jointly in calculating the second color temperature of the screen.
This strong correlation may likewise be expressed as a linear function. In this example, different types of applications may be subjected, through experiments, to environment differentiation and stability testing under different first brightnesses and first color temperatures while the second color temperature of the screen is adjusted. A fourth weight is thereby configured for the first brightness and a fifth weight for the first color temperature; an adaptation relationship between the type and the fourth and fifth weights is established and stored in the non-volatile memory of the intelligent learning device.
Of course, the fourth and fifth weights are adjustable parameters, and the user can adjust them on the configuration page of the operating system according to their own requirements, which is not limited in this embodiment.
When the color temperature adjustment measure is generated, the fourth weight and the fifth weight adapted to the type can be queried in the non-volatile memory or other storage of the intelligent learning device.
Calculating the product between the first brightness and the fourth weight as the second weight adjustment brightness; the product between the first color temperature and the fifth weight is calculated as the second tuning color temperature.
And mapping the sum value between the second weight-adjusting brightness and the second weight-adjusting color temperature to a second color temperature of a screen in the intelligent learning device.
The sum of the second weight-adjusted brightness and the second weight-adjusted color temperature is recorded as the color temperature input key source value. A screen color temperature curve is generated for the screen through experiments; the color temperature input key source value is substituted into this curve, and the second color temperature of the screen (i.e., the preset value of the screen color temperature) is produced through key-value matching or a similar mechanism, expressed as follows:
<key, value> = <color temperature input key source value, preset value of screen color temperature>
color temperature input key source value = first brightness × fourth weight + first color temperature × fifth weight
At this time, a color temperature adjustment measure may be generated based on the second color temperature as an eye protection measure suitable for the learning scene.
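The color temperature computation follows the same pattern with two weights; a minimal sketch under the same assumptions:

```python
# Sketch only: combine the first brightness and first color temperature into
# the color temperature input key source value, then map through the curve.
def second_color_temperature(first_brightness, first_color_temp,
                             fourth_weight, fifth_weight, color_temp_curve):
    key = first_brightness * fourth_weight + first_color_temp * fifth_weight
    # color_temp_curve maps the key source value to the preset color temperature
    return color_temp_curve(key)
```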
Further, the types of the applications include interaction type, class type and reading type; the second color temperature corresponding to the interaction type has smoothness, the second color temperature corresponding to the class type has smoothness, and the second color temperature corresponding to the reading type tends to be warm.
In the learning scene, the content displayed by an interaction-type application varies greatly, so the second color temperature of the screen is kept smooth so as not to affect the color gamut of the content originally displayed, and the screen's color temperature curve is smoother. The content displayed by a class-type application likewise varies greatly, so its second color temperature is also kept smooth for the same reason. The content displayed by a reading-type application varies little, and to fit the effect of book paper, the second color temperature of the screen tends toward warm, raising the screen's color temperature curve toward the warm color of paper.
Further, the screen of the intelligent learning device displays the user interface of the application, and the second brightness and the second color temperature act jointly on the same user interface; that is, the display effects of the second brightness and the second color temperature should be unified.
The second luminance and the second color temperature may be calculated by the method in the above example, or may be calculated by other methods, which is not limited in this example.
In this example, the screen of the intelligent learning device may be tested in advance through experiments and adjusted to a suitable brightness and color temperature; a correlation coefficient between that brightness and color temperature may then be calculated and recorded as a reference value.
When the second brightness and the second color temperature are formulated for the screen in the intelligent learning device in different modes, a correlation coefficient between the second brightness of the screen in the intelligent learning device and the second color temperature of the screen in the intelligent learning device can be calculated.
A difference between the correlation coefficient and the reference value is then calculated and compared with a preset first threshold.
If the difference between the correlation coefficient and the preset reference value is smaller than the preset first threshold, meaning that the formulated second brightness and second color temperature are close to the experimentally reasonable brightness and color temperature, the second brightness and the second color temperature can be determined to be effective.
If the difference between the correlation coefficient and the preset reference value is greater than or equal to the preset first threshold, meaning that the formulated second brightness and second color temperature deviate considerably from the experimentally reasonable brightness and color temperature, the second brightness and/or the second color temperature can be corrected, for example by increasing or decreasing the second brightness by a preset first step and/or increasing or decreasing the second color temperature by a preset second step. The correlation coefficient between the second brightness and the second color temperature of the screen is then recalculated, and the process repeats until the difference between the correlation coefficient and the preset reference value is smaller than the preset first threshold.
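The iteration can be sketched as below. The correlation function, reference value, threshold, step sizes, and in particular the direction in which each quantity is nudged are all assumptions for illustration; the embodiment only requires that the values be stepped and the coefficient recomputed until the difference falls below the first threshold.

```python
# Sketch only: step the second brightness / second color temperature until
# their correlation coefficient is close enough to the reference value.
def reconcile(brightness, color_temp, correlation, reference,
              first_threshold, step_b=1.0, step_t=50.0, max_iters=100):
    for _ in range(max_iters):
        coeff = correlation(brightness, color_temp)
        if abs(coeff - reference) < first_threshold:
            return brightness, color_temp  # judged effective
        # Arbitrary illustrative correction direction:
        if coeff > reference:
            brightness -= step_b  # decrease by the preset first step
        else:
            color_temp += step_t  # increase by the preset second step
    return brightness, color_temp  # best effort after max_iters
```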
In one embodiment of the present invention, the scene data comprises video data captured facing the place, and step 104 may comprise the following steps:
step 10421, querying a second threshold adapted to the type.
In the present embodiment, the eye protection measures include a sitting posture protection measure for correcting the sitting posture of the user.
In this embodiment, different types of applications can be loaded on the intelligent learning device in advance through experiments, and the distance between the user and the intelligent learning device tested under each type of application. A suitable distance is selected for each type of application and recorded as its second threshold, which can be stored in a non-volatile storage partition or similar location in the intelligent learning device. When the intelligent learning device provides the eye protection service, the second threshold configured for each type of application is looked up in the non-volatile storage partition or similar location.
The types of applications include, for example, interactive type, class type, reading type.
The second threshold value corresponding to the interaction type is larger than the second threshold value corresponding to the class type and the second threshold value corresponding to the reading type.
When an interaction-type application is running, the user generally has more limb interaction with the intelligent learning device and changes sitting posture frequently; therefore its second threshold is larger than the second thresholds corresponding to the class type and the reading type. That is, the second threshold for the interaction type is relatively loose, reducing the probability of misjudgment.
Step 10422, obtaining a total score of the user in the video data.
Considering that the user's sitting posture should be correct when sitting in front of the intelligent learning device and learning, posture analysis can be performed on each person using statistical methods, template-based methods, grammar-based methods, and the like, and a degree of correctness of the sitting posture calculated as a total score; that is, the total score represents the user's sitting posture.
In one way of detecting sitting posture, the features of a person's sitting posture are considered to be concentrated mainly in the face and the body.
On the one hand, face keypoints may be detected for each person, the face modeled from those keypoints, and, according to the face model, a first sub-score representing the horizontal level of the face and a second sub-score representing the vertical level of the face configured for each person.
On the other hand, skeletal joints may be detected for each person, the body modeled from those joints, and, according to the body model, a third sub-score representing the horizontal level of the body and a fourth sub-score representing the vertical level of the body configured for each person.
The sum between the first sub-score, the second sub-score, the third sub-score, and the fourth sub-score is calculated for each person as a total score representing the sitting posture.
If a total score representing the correctness of the user's sitting posture has been calculated previously and written into the buffer, it may be read from the buffer at this point.
And 10423, if the total score is less than or equal to the second threshold, generating a sitting posture protection measure as an eye protection measure suitable for the learning scene.
The total score is compared with the second threshold of the corresponding type. If the total score is greater than the second threshold, the user's sitting posture is correct relative to this type of application; if the total score is less than or equal to the second threshold, the user's sitting posture is incorrect relative to this type of application, and a sitting posture protection measure can be generated as an eye protection measure applicable to the current learning scene.
Further, the first sub-score representing the horizontal level of the face, the second sub-score representing the vertical level of the face, the third sub-score representing the horizontal level of the body, and the fourth sub-score representing the vertical level of the body were calculated in the process of computing the total score. One or more of these four sub-scores with the lowest values can be selected, and the position state each represents (horizontal level of the face, vertical level of the face, horizontal level of the body, or vertical level of the body) set as the incorrectly positioned state in the sitting posture protection measure.
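A minimal sketch of the decision in steps 10421-10423 follows, with hypothetical per-type second thresholds and the four sub-scores supplied as plain numbers (in practice they come from the face-keypoint and skeletal-joint models):

```python
# Sketch only: compare the posture total score with the type's second
# threshold and, on failure, report the worst-scoring position state.
SECOND_THRESHOLDS = {"interaction": 2.0, "class": 2.6, "reading": 2.8}  # hypothetical

POSITION_STATES = ("horizontal level of the face", "vertical level of the face",
                   "horizontal level of the body", "vertical level of the body")

def check_posture(sub_scores, app_type):
    total = sum(sub_scores)  # total score representing the sitting posture
    if total > SECOND_THRESHOLDS[app_type]:
        return None  # posture is correct relative to this type of application
    worst = min(range(len(sub_scores)), key=lambda i: sub_scores[i])
    return POSITION_STATES[worst]  # state flagged in the protection measure
```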
In one embodiment of the present invention, step 104 may include the steps of:
step 10431, querying a reference range set for the scene data.
In this embodiment, the intelligent learning device may be tested in advance through experiments, and a reference range set for each type of scene data to represent its suitable range; a mapping relationship between the scene data (type) and the reference range is recorded.
In generating the eye-protection measure, a reference range set for the scene data may be queried.
Step 10432, comparing the scene data with a reference range.
Step 10433, if the scene data is outside the reference range, generating an eye protection measure suitable for learning the scene.
The current scene data is compared with its corresponding reference range.
If the scene data is within the reference range, the scene data is indicated to be suitable, and eye protection measures suitable for learning the scene are not generated.
If the scene data is outside the reference range, the scene data is unsuitable, being either too large or too small; in that case eye protection measures suitable for the learning scene are generated.
In one example, if the scene data is a first distance between the user and the intelligent learning device, and the first distance is outside a reference range, indicating that the distance between the user and the intelligent learning device is too short or too long, generating a distance protection measure as an eye protection measure applicable to the learning scene, wherein the distance protection measure is used for prompting that the first distance between the user and the intelligent learning device is unsuitable.
In another example, if the scene data is a first luminance of the light in the location, and the first luminance is outside the reference range, indicating that the light in the location is too bright or too dark, a luminance protection measure is generated as an eye protection measure applicable to the learning scene, wherein the luminance protection measure is used to indicate that the first luminance of the light in the location is not suitable.
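A minimal sketch of this range check; the reference ranges below are hypothetical experimental values (distance in centimetres, brightness in lux):

```python
# Sketch only: flag any scene data item that falls outside its reference range.
REFERENCE_RANGES = {  # hypothetical mapping from scene data type to range
    "first_distance": (35.0, 80.0),
    "first_brightness": (150.0, 750.0),
}

def out_of_range_items(scene_data):
    flagged = []
    for name, value in scene_data.items():
        low, high = REFERENCE_RANGES[name]
        if not (low <= value <= high):
            flagged.append((name, "too small" if value < low else "too large"))
    return flagged  # each entry triggers a corresponding protection measure
```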
In one embodiment of the present invention, step 104 may include the steps of:
step 10441, determining the type of the application in the dimension of learning.
In this embodiment, different applications may be divided in advance into multiple types along the dimension of learning, according to the requirements of the eye protection service; that is, each type broadly represents the characteristics of a certain kind of learning.
Illustratively, the types include an interaction type, a class type, and a reading type.
Step 10442, if the type is a reading type, generating a reading protection measure as an eye protection measure suitable for the learning scene.
In this embodiment, the eye protection measures include a reading protection measure for adding a layer imitating paper over the application.
For a reading-type application, where the content displayed on the user interface is text information, the intelligent learning device can ask the user in a popup window whether to enable the reading protection measure. If the user agrees to enable it, the reading protection measure is generated and the user's choice is recorded.
Further, the user can adjust the intensity of the paper texture of the layer by setting the transparency ratio.
Step 105, performing eye protection measures in the intelligent learning device to protect eyes of the user in the learning scene.
In this embodiment, each eye protection measure is issued to the corresponding hardware and/or software in the intelligent learning device and executed by that hardware and/or software, so that the user's eyes are protected in the learning scene.
If the eye protection measure is a brightness adjustment measure, the brightness adjustment measure can be issued to a screen of the intelligent learning device as a command, and the screen is adjusted to the second brightness.
The brightness adjustment measure essentially calls the screen driver to set the brightness. To accommodate the adaptation of human eyes, the screen driver can adjust the brightness in a transitional manner, gradually increasing or decreasing from the current brightness until the second brightness is reached, avoiding damage to the user's eyes from an instantaneous change in brightness.
If the eye protection measure is a color temperature adjustment measure, the color temperature adjustment measure can be issued to a screen of the intelligent learning device as a command, and the screen is adjusted to a second color temperature.
The color temperature adjustment measure essentially calls the screen driver to set the color temperature. To accommodate the adaptation of human eyes, the screen driver can adjust the color temperature in a transitional manner, gradually increasing or decreasing from the current color temperature until the second color temperature is reached, avoiding damage to the user's eyes from an instantaneous change in color temperature.
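The transitional adjustment described for both measures amounts to stepping from the current value toward the target, as in this sketch; the step size, pacing, and apply() callback (standing in for the screen driver call) are assumptions:

```python
# Sketch only: transitional adjustment from the current brightness or color
# temperature to the target value, instead of an instantaneous jump.
import time

def ramp(current: float, target: float, apply, step: float = 5.0,
         delay: float = 0.02):
    while abs(target - current) > step:
        current += step if target > current else -step
        apply(current)      # hand the intermediate value to the screen driver
        time.sleep(delay)   # pacing keeps each change imperceptible
    apply(target)           # land exactly on the second brightness/color temperature
```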
If the eye protection measure is a sitting posture protection measure, it can be issued as a command to the speaker and/or operating system of the intelligent learning device, which warns the user about the sitting posture via voice and/or text, prompting that the sitting posture is incorrect.
Illustratively, the content of the alert may be "please do not tilt your head", "please raise your head a little", "sit up straight", and so on.
If the eye protection measure is a distance protection measure, it can be issued as a command to the speaker and/or operating system of the intelligent learning device, which warns about the first distance between the user and the intelligent learning device via voice and/or text, prompting that the first distance is unsuitable (too close or too far).
Illustratively, the content of the alert may be "please get away from the screen", and so on.
If the eye protection measure is a brightness protection measure, it can be issued as a command to the speaker and/or operating system of the intelligent learning device, which warns about the first brightness of the light in the place via voice and/or text, prompting that the first brightness is unsuitable (too dark or too bright).
If the eye protection measure is a reading protection measure, the window management service in the operating system can be configured to superimpose a paper-imitating layer (i.e., one whose texture effect is the same as or similar to paper) on top of the display layers of the intelligent learning device, achieving a paper-like display effect.
Typically, the size of the layer is the same as the size of the application's user interface, so that the layer covers it; since the application's user interface is full screen by default, the layer is full screen by default as well.
Of course, in cases such as split screen, the application's user interface changes; the layer then changes with the user interface and continues to cover it.
Furthermore, if multiple eye protection measures need to raise alerts at the same time, the alert contents can be written into a preset alert queue, and the intelligent learning device reads them from the queue in turn to issue the alerts, avoiding resource contention among alerts.
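A minimal single-threaded sketch of such an alert queue, with announce() standing in for the actual voice or text warning call:

```python
# Sketch only: serialize simultaneous alerts through a queue so they do not
# contend for the speaker or screen.
from queue import Queue

alert_queue = Queue()

def enqueue_alerts(contents):
    for text in contents:
        alert_queue.put(text)

def drain_alerts(announce):
    while not alert_queue.empty():  # single-consumer simplification
        announce(alert_queue.get())
```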
In this embodiment, a learning scene is determined, in which a user learns in a place with respect to an application displayed by the intelligent learning device; scene data related to the user's eyes are synchronously collected from the place, the intelligent learning device, and the user; the scene data are corrected under the constraint of the learning scene; once the correction is completed, eye protection measures suitable for the learning scene are formulated according to the scene data; and the eye protection measures are executed in the intelligent learning device to protect the user's eyes in the learning scene. By detecting comprehensive scene data for the three main elements of the learning scene and correcting them, deviations among the scene data can be eliminated and the data unified within the same learning scene, meeting the eye protection needs of the learning scene more specifically, improving the eye protection effect, and effectively protecting the user's eye health.
Example two
Fig. 4 is a schematic structural diagram of an eye protection device according to a second embodiment of the present invention. The device is applied to intelligent learning equipment, as shown in fig. 4, and comprises:
a learning scenario determining module 401, configured to determine a learning scenario, where the learning scenario is that a user learns in a place for an application displayed by the intelligent learning device;
a scene data collection module 402, configured to collect scene data related to eyes of the user from the venue, the intelligent learning device, and the user in synchronization;
a scene data correction module 403, configured to correct the scene data under the constraint of the learning scene;
an eye-protection measure generating module 404, configured to formulate an eye-protection measure applicable to the learning scene according to the scene data if the correction is completed;
an eye protection measure execution module 405, configured to execute the eye protection measure in the intelligent learning device, so as to protect eyes of the user in the learning scene.
In one embodiment of the present invention, the scene data collection module 402 includes:
the buffer zone creating module is used for creating a buffer zone for the sensor in the intelligent learning equipment;
The sensor calling module is used for synchronously calling the sensor to acquire scene data related to eyes of the user from the place, the intelligent learning equipment and the user;
and the scene data writing module is used for writing the scene data into the buffer area respectively.
In one embodiment of the invention, the sensor comprises an attitude sensor, a distance sensor, a light sensor and a camera;
the buffer creation module includes:
the first buffer zone creating module is used for creating a buffer zone for the attitude sensor and used as a first buffer zone;
the second buffer area creating module is used for creating a buffer area for the distance sensor and used as a second buffer area;
a third buffer creating module, configured to create a buffer for the light sensor, as a third buffer;
the fourth buffer area creating module is used for creating a buffer area for the camera and used as a fourth buffer area;
the sensor calling module comprises:
the gesture sensor calling module is used for calling the gesture sensor to detect the gesture of the intelligent learning device and used as scene data;
the distance sensor calling module is used for calling the distance sensor to detect a first distance between the user and the intelligent learning equipment as scene data;
The light sensor calling module is used for calling the light sensor to detect first brightness of light and first color temperature of the light in the place as scene data;
the camera calling module is used for calling the camera to acquire video data from the place and used as scene data;
the scene data writing module comprises:
the gesture writing module is used for writing the gesture into the first buffer area;
a distance writing module, configured to write the first distance into the second buffer area;
a brightness writing module, configured to write the first brightness and the first color temperature into the third buffer area;
and the video data writing module is used for writing the video data into the fourth buffer area.
In one embodiment of the present invention, the scene data collection module 402 further includes:
an initialization module for initializing a plurality of sensors in the intelligent learning device respectively;
the state detection module is used for respectively detecting the states of the sensors;
and the first alarm operation execution module is used for executing the first alarm operation if at least one sensor fails to be initialized or the state of at least one sensor is abnormal.
In one embodiment of the present invention, the scene data correction module 403 includes:
the gradient factor curve searching module is used for searching a gradient factor curve configured for the reference data aiming at each piece of reference data, wherein the reference data are other scene data with constraint relation with the gesture, and the gradient factor curve is used for representing the mapping relation between the gesture and the reference data;
the adjusting factor searching module is used for searching the adjusting factors of the gesture mapping in the gradient factor curve;
and the reference data adjusting module is used for adjusting the reference data according to the adjusting factors.
In one embodiment of the invention, the reference data includes a first distance between the user and the intelligent learning device, a first brightness of light within the venue, and a first color temperature of light;
the gradient factor curve searching module comprises:
the first gradient factor curve searching module is used for searching a gradient factor curve configured for the first distance according to the first distance and taking the gradient factor curve as a first gradient factor curve;
the second gradient factor curve searching module is used for searching a gradient factor curve configured for the first brightness aiming at the first brightness and taking the gradient factor curve as a second gradient factor curve;
The third gradient factor curve searching module is used for searching a gradient factor curve configured for the first color temperature aiming at the first color temperature and taking the gradient factor curve as a third gradient factor curve;
the adjustment factor lookup module includes:
the distance factor searching module is used for searching the adjusting factor of the gesture mapping in the first gradient factor curve to serve as a distance factor;
the brightness factor searching module is used for searching the adjusting factor of the gesture mapping in the second gradient factor curve to serve as a brightness factor;
the color temperature factor searching module is used for searching the adjusting factor of the attitude mapping in the third gradient factor curve to be used as a color temperature factor;
the reference data adjustment module includes:
a distance adjustment module for multiplying the first distance by the distance factor as a new first distance;
a brightness adjustment module for multiplying the first brightness by the brightness factor as a new first brightness;
and the color temperature adjusting module is used for multiplying the first color temperature by the color temperature factor as a new first color temperature.
In one embodiment of the invention, the scene data includes video data collected to the venue; the scene data correction module 403 includes:
The person detection module is used for taking a person as a target, executing target detection operation in the video data and obtaining a detection result;
the sitting posture score calculating module is used for calculating the total score representing the sitting posture for each person if the detection result comprises at least two persons;
and the user marking module is used for marking the character with the highest total score as the user.
In one embodiment of the invention, the sitting posture score calculation module comprises:
a first sub-score configuration module for configuring a first sub-score representing a face level degree for each of the persons;
a second sub-score configuration module for configuring a second sub-score representing a face verticality for each of the persons;
a third sub-score configuration module for configuring a third sub-score representing a degree of physical level for each of the persons;
a fourth sub-score configuration module for configuring a fourth sub-score representing a degree of body verticality for each of the characters;
and the score summation module is used for calculating the sum value among the first sub-score, the second sub-score, the third sub-score and the fourth sub-score for each person as a total score representing sitting postures.
In one embodiment of the invention, the scene data includes video data collected from the venue, a first distance between the user and the intelligent learning device;
the scene data correction module 403 includes:
a face distance calculation module for calculating a second distance between the intelligent learning device and the face of the user based on the video data;
and the distance fusion module is used for fusing the first distance and the second distance into a new first distance.
In one embodiment of the present invention, the scene data correction module 403 further includes:
and the invalid distance marking module is used for marking the first distance as invalid and discarding the first distance if the user is not detected in the video data.
In one embodiment of the present invention, the distance fusion module includes:
the coefficient searching module is used for searching a preset first coefficient and a preset second coefficient;
the first weight adjustment distance calculation module is used for calculating the product between the first distance and the first coefficient to be used as a first weight adjustment distance;
the second weight adjustment distance calculation module is used for calculating the product between the second distance and the second coefficient to be used as a second weight adjustment distance;
And the weight adjustment distance summation module is used for calculating the sum value between the first weight adjustment distance and the second weight adjustment distance to be used as a new first distance.
In one embodiment of the present invention, the scene data correction module 403 further includes:
an invalidation filtering module for performing a filtering operation on the scene data to filter the original scene data that is invalidated;
and the second alarm operation execution module is used for executing the second alarm operation if all the original scene data are invalid.
In one embodiment of the present invention, the eye protection measure generating module 404 includes:
a type determining module for determining a type of the application in the learned dimension;
and the type making module is used for making eye protection measures which are suitable for the learning scene and are matched with the type according to the scene data.
In one embodiment of the invention, the scene data includes a first distance between the user and the intelligent learning device, a first brightness of light and a first color temperature of light within the venue;
the type making module comprises:
the weight query module is used for querying a first weight, a second weight and a third weight which are adapted to the type;
The first weight adjustment brightness calculation module is used for calculating the product between the first brightness and the first weight to be used as first weight adjustment brightness;
the first weight-adjusting color temperature calculation module is used for calculating the product between the first color temperature and the second weight to be used as a first weight-adjusting color temperature;
a third weight adjustment distance calculation module, configured to calculate a product between the first distance and the third weight as a third weight adjustment distance;
the screen brightness mapping module is used for mapping the sum value among the first weight-adjusting brightness, the first weight-adjusting color temperature and the third weight-adjusting distance into second brightness of a screen in the intelligent learning equipment;
and a brightness adjustment measure generating module for generating a brightness adjustment measure based on the second brightness as an eye protection measure applicable to the learning scene, the brightness adjustment measure being used for adjusting the screen to the second brightness.
In one embodiment of the invention, the types include an interaction type, a class type, a reading type;
the second brightness corresponding to the interaction type is higher than the second brightness corresponding to the class type, and the second brightness corresponding to the class type is higher than the second brightness corresponding to the reading type.
In one embodiment of the invention, the scene data includes a first luminance of light and a first color temperature of light within the venue;
the eye protection measure generating module 404 includes:
the weight adaptation module is used for inquiring the fourth weight and the fifth weight adapted to the type;
the second weight adjustment brightness calculation module is used for calculating the product between the first brightness and the fourth weight to be used as second weight adjustment brightness;
the second weight-adjusting color temperature calculation module is used for calculating the product between the first color temperature and the fifth weight to be used as a second weight-adjusting color temperature;
the screen color temperature mapping module is used for mapping the sum value between the second weight-adjusting brightness and the second weight-adjusting color temperature to a second color temperature of a screen in the intelligent learning equipment;
and the color temperature adjusting measure generating module is used for generating a color temperature adjusting measure based on the second color temperature, wherein the color temperature adjusting measure is used for adjusting the screen to the second color temperature as an eye protection measure suitable for the learning scene.
In one embodiment of the invention, the types include an interaction type, a class type, a reading type;
the second color temperature corresponding to the interaction type has smoothness, the second color temperature corresponding to the class type has smoothness, and the second color temperature corresponding to the reading type tends to be warm.
In one embodiment of the present invention, the eye protection measure generating module 404 further includes:
a correlation coefficient calculation module, configured to calculate a correlation coefficient between a second brightness of a screen in the intelligent learning device and a second color temperature of the screen in the intelligent learning device;
an effective determining module, configured to determine that the second luminance and the second color temperature are effective if a difference between the correlation coefficient and a preset reference value is smaller than a preset first threshold;
and the correction module is used for correcting the second brightness and/or the second color temperature if the difference value between the correlation coefficient and the preset reference value is larger than or equal to a preset first threshold value, and calling the correlation coefficient calculation module back.
In one embodiment of the invention, the scene data comprises video data collected for the venue;
the eye protection measure generating module 404 includes:
a threshold value query module for querying a second threshold value adapted to the type;
a total score acquisition module, configured to acquire a total score of the user in the video data, where the total score is used to represent a sitting posture of the user;
the sitting posture protection measure generating module is used for generating a sitting posture protection measure which is used as an eye protection measure applicable to the learning scene if the total score is smaller than or equal to the second threshold value, and the sitting posture protection measure is used for prompting the sitting posture error of the user.
In one embodiment of the invention, the types include an interaction type, a class type, a reading type;
the second threshold value corresponding to the interaction type is larger than the second threshold value corresponding to the class type and the second threshold value corresponding to the reading type.
In one embodiment of the present invention, the eye protection measure generating module 404 includes:
the reference range query module is used for querying a reference range set for the scene data;
a reference range comparison module for comparing the scene data with the reference range;
and the reference range generation module is used for generating eye protection measures applicable to the learning scene if the scene data is out of the reference range.
In one embodiment of the present invention, the reference range generation module includes:
a distance protection measure generating module, configured to generate a distance protection measure as an eye protection measure applicable to the learning scene if the scene data is a first distance between the user and the intelligent learning device and the first distance is outside the reference range, where the distance protection measure is used to prompt that the first distance between the user and the intelligent learning device is not suitable;
And the brightness protection measure generating module is used for generating a brightness protection measure which is used as an eye protection measure applicable to the learning scene if the scene data is the first brightness of the light in the place and the first brightness is outside the reference range, and the brightness protection measure is used for prompting that the first brightness of the light in the place is unsuitable.
In one embodiment of the present invention, the eye protection measure generating module 404 further includes:
a type determining unit for determining the type of the application in the dimension of learning;
and the reading protection measure generating module is used for generating reading protection measures as eye protection measures applicable to the learning scene if the type is a reading type, and the reading protection measures are used for adding a layer imitating paper on the application.
The eye protection device provided by the embodiment of the invention can execute the eye protection method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the eye protection method.
Example III
Fig. 5 shows a schematic diagram of the structure of a smart learning device 10 that may be used to implement an embodiment of the present invention. Intelligent learning devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Smart learning devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 5, the intelligent learning device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 can perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the intelligent learning apparatus 10 can also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
The various components in the intelligent learning device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the intelligent learning device 10 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as an eye-protection method.
In some embodiments, the eye protection method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the intelligent learning device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the eye-protection method described above may be performed. Alternatively, in other embodiments, processor 11 may be configured to perform the eye protection method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer-readable storage medium may be a machine-readable signal medium. More specific examples of the computer-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an intelligent learning device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which the user can provide input to the intelligent learning device. Other kinds of devices may also be used to provide for interaction with the user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which the user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
Example IV
Embodiments of the present invention also provide a computer program product comprising a computer program which, when executed by a processor, implements an eye protection method as provided by any of the embodiments of the present invention.
In implementations of the computer program product, the computer program code for carrying out operations of the present invention may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (27)

1. An eye protection method, characterized by being applied to an intelligent learning device, the method comprising:
determining a learning scene, wherein the learning scene is a scene in which a user in a venue faces the intelligent learning device and learns from an application displayed by the intelligent learning device;
synchronously collecting scene data related to the eyes of the user from the venue, the intelligent learning device, and the user;
correcting the scene data under the constraint of the learning scene;
if the correction is completed, formulating eye protection measures applicable to the learning scene according to the scene data; and
executing the eye protection measures in the intelligent learning device to protect the eyes of the user in the learning scene.
2. The method of claim 1, wherein the synchronously collecting scene data related to the eyes of the user from the venue, the intelligent learning device, and the user comprises:
creating a buffer for each sensor in the intelligent learning device;
synchronously invoking the sensors to collect scene data related to the eyes of the user from the venue, the intelligent learning device, and the user; and
writing the scene data into the respective buffers.
3. The method of claim 2, wherein the sensors comprise an attitude sensor, a distance sensor, a light sensor, and a camera;
the creating a buffer for each sensor in the intelligent learning device comprises:
creating a buffer for the attitude sensor as a first buffer;
creating a buffer for the distance sensor as a second buffer;
creating a buffer for the light sensor as a third buffer; and
creating a buffer for the camera as a fourth buffer;
the synchronously invoking the sensors to collect scene data related to the eyes of the user from the venue, the intelligent learning device, and the user comprises:
invoking the attitude sensor to detect a pose of the intelligent learning device as scene data;
invoking the distance sensor to detect a first distance between the user and the intelligent learning device as scene data;
invoking the light sensor to detect a first brightness of light and a first color temperature of light within the venue as scene data; and
invoking the camera to collect video data from the venue as scene data;
the writing the scene data into the respective buffers comprises:
writing the pose into the first buffer;
writing the first distance into the second buffer;
writing the first brightness and the first color temperature into the third buffer; and
writing the video data into the fourth buffer.
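For illustration only (not part of the claims): a minimal Python sketch of the buffering scheme of claims 2 and 3, assuming hypothetical sensor objects with a read() method, since the claims do not specify a sensor API, buffer sizes, or data types.

    from collections import deque
    from dataclasses import dataclass, field
    from typing import Any, Dict

    @dataclass
    class SensorBuffers:
        # One bounded buffer per sensor, mirroring the first to fourth buffers of claim 3.
        pose: deque = field(default_factory=lambda: deque(maxlen=64))      # attitude sensor
        distance: deque = field(default_factory=lambda: deque(maxlen=64))  # distance sensor
        light: deque = field(default_factory=lambda: deque(maxlen=64))     # (brightness, color temperature)
        video: deque = field(default_factory=lambda: deque(maxlen=8))      # camera frames

    def collect_once(buffers: SensorBuffers, sensors: Dict[str, Any]) -> None:
        """Synchronously invoke each sensor and write its reading into its own buffer."""
        buffers.pose.append(sensors["attitude"].read())      # pose of the device
        buffers.distance.append(sensors["distance"].read())  # first distance (user to device)
        buffers.light.append(sensors["light"].read())        # first brightness and first color temperature
        buffers.video.append(sensors["camera"].read())       # one video frame from the venue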
4. The method of claim 2, wherein the synchronously collecting scene data related to the eyes of the user from the venue, the intelligent learning device, and the user further comprises:
initializing each of a plurality of sensors in the intelligent learning device;
detecting the state of each sensor; and
if the initialization of at least one sensor fails or the state of at least one sensor is abnormal, executing a first alarm operation.
5. The method of claim 1, wherein the scene data comprises a pose of the intelligent learning device; and the correcting the scene data under the constraint of the learning scene comprises:
searching, for each piece of reference data, for a gradient factor curve configured for the reference data, wherein the reference data is other scene data having a constraint relationship with the pose, and the gradient factor curve is used to represent a mapping relationship between the pose and the reference data;
searching the gradient factor curve for an adjustment factor mapped from the pose; and
adjusting the reference data according to the adjustment factor.
6. The method of claim 5, wherein the reference data comprises a first distance between the user and the intelligent learning device, a first brightness of light within the venue, and a first color temperature of the light;
the searching, for each piece of reference data, for a gradient factor curve configured for the reference data comprises:
searching for a gradient factor curve configured for the first distance as a first gradient factor curve;
searching for a gradient factor curve configured for the first brightness as a second gradient factor curve; and
searching for a gradient factor curve configured for the first color temperature as a third gradient factor curve;
the searching the gradient factor curve for an adjustment factor mapped from the pose comprises:
searching the first gradient factor curve for an adjustment factor mapped from the pose as a distance factor;
searching the second gradient factor curve for an adjustment factor mapped from the pose as a brightness factor; and
searching the third gradient factor curve for an adjustment factor mapped from the pose as a color temperature factor;
the adjusting the reference data according to the adjustment factor comprises:
multiplying the first distance by the distance factor to obtain a new first distance;
multiplying the first brightness by the brightness factor to obtain a new first brightness; and
multiplying the first color temperature by the color temperature factor to obtain a new first color temperature.
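For illustration only (not part of the claims): a sketch of the pose-based correction of claims 5 and 6. The form of the gradient factor curves is not defined in the claims, so each curve is modeled here, as an assumption, by a lookup table interpolated over a device pose angle; all numeric values are placeholders.

    import numpy as np

    # Hypothetical gradient factor curves: device pose angle (degrees) -> adjustment factor.
    ANGLES = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
    DISTANCE_CURVE = np.array([1.00, 1.02, 1.05, 1.10, 1.18])    # first gradient factor curve
    BRIGHTNESS_CURVE = np.array([1.00, 0.98, 0.95, 0.90, 0.85])  # second gradient factor curve
    COLOR_TEMP_CURVE = np.array([1.00, 1.01, 1.02, 1.04, 1.06])  # third gradient factor curve

    def correct_by_pose(pose_angle, first_distance, first_brightness, first_color_temp):
        """Look up the adjustment factor mapped from the pose on each curve, then multiply it in."""
        distance_factor = np.interp(pose_angle, ANGLES, DISTANCE_CURVE)
        brightness_factor = np.interp(pose_angle, ANGLES, BRIGHTNESS_CURVE)
        color_temp_factor = np.interp(pose_angle, ANGLES, COLOR_TEMP_CURVE)
        return (first_distance * distance_factor,
                first_brightness * brightness_factor,
                first_color_temp * color_temp_factor)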
7. The method of claim 1, wherein the scene data comprises video data collected from the venue; and the correcting the scene data under the constraint of the learning scene comprises:
performing a target detection operation on the video data with persons as targets to obtain a detection result;
if the detection result comprises at least two persons, calculating, for each person, a total score representing a sitting posture; and
marking the person with the highest total score as the user.
8. The method of claim 7, wherein the calculating, for each person, a total score representing a sitting posture comprises:
configuring, for each person, a first sub-score representing a degree of face horizontality;
configuring, for each person, a second sub-score representing a degree of face verticality;
configuring, for each person, a third sub-score representing a degree of body horizontality;
configuring, for each person, a fourth sub-score representing a degree of body verticality; and
calculating, for each person, a sum of the first sub-score, the second sub-score, the third sub-score, and the fourth sub-score as the total score representing the sitting posture.
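For illustration only (not part of the claims): a sketch of the scoring of claims 7 and 8. How each sub-score is computed from the video data is left open by the claims; here each sub-score is assumed to already be a number, e.g. in [0, 1].

    def sitting_posture_total(face_h, face_v, body_h, body_v):
        # Claim 8: the total score is the sum of the four sitting-posture sub-scores.
        return face_h + face_v + body_h + body_v

    def pick_user(persons):
        # Claim 7: among the detected persons, mark the one with the highest total score as the user.
        # Each person is assumed, for this sketch, to carry its four sub-scores as a tuple.
        return max(persons, key=lambda person: sitting_posture_total(*person["sub_scores"]))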
9. The method of claim 1, wherein the scene data comprises video data collected from the venue and a first distance between the user and the intelligent learning device;
the correcting the scene data under the constraint of the learning scene comprises:
calculating a second distance between the intelligent learning device and the face of the user based on the video data; and
fusing the first distance and the second distance into a new first distance.
10. The method of claim 9, wherein the correcting the scene data under the constraint of the learning scene further comprises:
if the user is not detected in the video data, marking the first distance as invalid and discarding the first distance.
11. The method of claim 9, wherein the fusing the first distance and the second distance into a new first distance comprises:
searching for a preset first coefficient and a preset second coefficient;
calculating a product of the first distance and the first coefficient as a first weight-adjusted distance;
calculating a product of the second distance and the second coefficient as a second weight-adjusted distance; and
calculating a sum of the first weight-adjusted distance and the second weight-adjusted distance as a new first distance.
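For illustration only (not part of the claims): the fusion of claim 11 is a weighted sum. The coefficient values below are placeholders; that they sum to 1 is a natural but unstated assumption.

    FIRST_COEFFICIENT = 0.6   # preset weight for the distance-sensor reading
    SECOND_COEFFICIENT = 0.4  # preset weight for the vision-based face distance

    def fuse_distances(first_distance, second_distance):
        # Claim 11: weight each distance by its preset coefficient and sum into a new first distance.
        return first_distance * FIRST_COEFFICIENT + second_distance * SECOND_COEFFICIENT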
12. The method of claim 5, 7, or 9, wherein the correcting the scene data under the constraint of the learning scene further comprises:
performing a filtering operation on the scene data to filter out invalid original scene data; and
if all the original scene data are invalid, executing a second alarm operation.
13. The method of claim 1, wherein the formulating eye protection measures applicable to the learning scene according to the scene data comprises:
determining a type of the application in a learning dimension; and
formulating, according to the scene data, eye protection measures that are applicable to the learning scene and adapted to the type.
14. The method of claim 13, wherein the scene data comprises a first distance between the user and the intelligent learning device, and a first brightness of light and a first color temperature of light within the venue;
the formulating, according to the scene data, eye protection measures that are applicable to the learning scene and adapted to the type comprises:
querying a first weight, a second weight, and a third weight adapted to the type;
calculating a product of the first brightness and the first weight as a first weight-adjusted brightness;
calculating a product of the first color temperature and the second weight as a first weight-adjusted color temperature;
calculating a product of the first distance and the third weight as a third weight-adjusted distance;
mapping a sum of the first weight-adjusted brightness, the first weight-adjusted color temperature, and the third weight-adjusted distance to a second brightness of a screen in the intelligent learning device; and
generating a brightness adjustment measure based on the second brightness as an eye protection measure applicable to the learning scene, wherein the brightness adjustment measure is used to adjust the screen to the second brightness.
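For illustration only (not part of the claims): a sketch of the type-weighted brightness mapping of claim 14. The per-type weights and the final mapping into the screen's brightness range are configuration choices not fixed by the claim; both are assumptions here.

    # Hypothetical (first, second, third) weights per application type.
    TYPE_WEIGHTS = {
        "interaction": (0.5, 0.3, 0.2),
        "class":       (0.4, 0.3, 0.3),
        "reading":     (0.3, 0.3, 0.4),
    }

    def second_brightness(app_type, first_brightness, first_color_temp, first_distance, to_screen):
        # Weight the three inputs by type, then map the sum to a screen brightness.
        # `to_screen` is an assumed function that scales and clamps into the panel's range.
        w1, w2, w3 = TYPE_WEIGHTS[app_type]
        weighted_sum = (first_brightness * w1 +
                        first_color_temp * w2 +
                        first_distance * w3)
        return to_screen(weighted_sum)

The ordering required by claim 15 (interaction brighter than class, class brighter than reading) could then be realized through the choice of weights and/or the mapping function.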
15. The method of claim 14, wherein the types include an interaction type, a class type, a reading type;
the second brightness corresponding to the interaction type is higher than the second brightness corresponding to the class type, and the second brightness corresponding to the class type is higher than the second brightness corresponding to the reading type.
16. The method of claim 13, wherein the scene data comprises a first brightness of light and a first color temperature of light within the venue;
the formulating, according to the scene data, eye protection measures that are applicable to the learning scene and adapted to the type comprises:
querying a fourth weight and a fifth weight adapted to the type;
calculating a product of the first brightness and the fourth weight as a second weight-adjusted brightness;
calculating a product of the first color temperature and the fifth weight as a second weight-adjusted color temperature;
mapping a sum of the second weight-adjusted brightness and the second weight-adjusted color temperature to a second color temperature of a screen in the intelligent learning device; and
generating a color temperature adjustment measure based on the second color temperature as an eye protection measure applicable to the learning scene, wherein the color temperature adjustment measure is used to adjust the screen to the second color temperature.
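For illustration only (not part of the claims): the color temperature path of claim 16 follows the same pattern with two weights. The mapping into the panel's color temperature range (e.g. a 2700-6500 K scale) is an assumption, not stated in the claim.

    def second_color_temp(first_brightness, first_color_temp, fourth_weight, fifth_weight, to_screen_ct):
        # Claim 16: map the weighted sum of ambient brightness and color temperature
        # to the screen's second color temperature via an assumed range mapping.
        return to_screen_ct(first_brightness * fourth_weight + first_color_temp * fifth_weight)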
17. The method of claim 16, wherein the types comprise an interaction type, a class type, and a reading type; and
the second color temperature corresponding to the interaction type and the second color temperature corresponding to the class type tend to be smooth, while the second color temperature corresponding to the reading type tends to be warm.
18. The method of claim 14 or 16, wherein the formulating, according to the scene data, eye protection measures that are applicable to the learning scene and adapted to the type further comprises:
calculating a correlation coefficient between a second brightness of a screen in the intelligent learning device and a second color temperature of the screen;
if a difference between the correlation coefficient and a preset reference value is smaller than a preset first threshold, determining that the second brightness and the second color temperature are valid; and
if the difference between the correlation coefficient and the preset reference value is greater than or equal to the preset first threshold, correcting the second brightness and/or the second color temperature, and returning to the calculating of the correlation coefficient between the second brightness and the second color temperature of the screen.
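For illustration only (not part of the claims): a sketch of the validation loop of claim 18. The correlation computation and the correction step are both left open by the claim, so they are passed in as assumed callables; the iteration cap is likewise an added safeguard, not part of the claim.

    def validate_pair(brightness, color_temp, corr, reference, threshold, correct, max_iters=10):
        # Accept the pair when the correlation coefficient is close enough to the reference;
        # otherwise correct the pair and recompute, as in claim 18.
        for _ in range(max_iters):
            if abs(corr(brightness, color_temp) - reference) < threshold:
                return brightness, color_temp  # the second brightness and color temperature are valid
            brightness, color_temp = correct(brightness, color_temp)
        raise RuntimeError("second brightness/color temperature did not converge")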
19. The method of claim 13, wherein the scene data comprises video data collected from the venue;
the formulating, according to the scene data, eye protection measures that are applicable to the learning scene and adapted to the type comprises:
querying a second threshold adapted to the type;
acquiring a total score of the user in the video data, wherein the total score is used to represent a sitting posture of the user; and
if the total score is smaller than or equal to the second threshold, generating a sitting posture protection measure as an eye protection measure applicable to the learning scene, wherein the sitting posture protection measure is used to prompt the user of a sitting posture error.
20. The method of claim 19, wherein the types include an interaction type, a class type, a reading type;
the second threshold value corresponding to the interaction type is larger than the second threshold value corresponding to the class type and the second threshold value corresponding to the reading type.
21. The method of claim 1, wherein the formulating eye protection measures applicable to the learning scene according to the scene data comprises:
querying a reference range set for the scene data;
comparing the scene data with the reference range; and
if the scene data is outside the reference range, generating an eye protection measure applicable to the learning scene.
22. The method of claim 21, wherein the generating an eye protection measure applicable to the learning scene if the scene data is outside the reference range comprises:
if the scene data is a first distance between the user and the intelligent learning device and the first distance is outside the reference range, generating a distance protection measure as an eye protection measure applicable to the learning scene, wherein the distance protection measure is used to prompt that the first distance between the user and the intelligent learning device is unsuitable; and
if the scene data is a first brightness of light within the venue and the first brightness is outside the reference range, generating a brightness protection measure as an eye protection measure applicable to the learning scene, wherein the brightness protection measure is used to prompt that the first brightness of light within the venue is unsuitable.
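For illustration only (not part of the claims): a sketch of the reference-range check of claims 21 and 22; the ranges and prompt strings are placeholders, not values from the patent.

    # Hypothetical reference ranges; actual values are device configuration.
    REFERENCE_RANGES = {
        "first_distance": (0.33, 1.50),      # meters
        "first_brightness": (150.0, 750.0),  # lux
    }

    PROMPTS = {
        "first_distance": "distance protection: the viewing distance is unsuitable",
        "first_brightness": "brightness protection: the ambient brightness is unsuitable",
    }

    def check_scene_data(name, value):
        # Claims 21-22: compare one scene-data item with its reference range and,
        # if it falls outside, return the matching protection prompt.
        low, high = REFERENCE_RANGES[name]
        return PROMPTS[name] if not (low <= value <= high) else None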
23. The method of claim 13 or 21, wherein the formulating eye protection measures applicable to the learning scene according to the scene data further comprises:
determining a type of the application in a learning dimension; and
if the type is a reading type, generating a reading protection measure as an eye protection measure applicable to the learning scene, wherein the reading protection measure is used to add a paper-like layer over the application.
24. An eye protection apparatus, applied to an intelligent learning device, the apparatus comprising:
a learning scene determination module, configured to determine a learning scene, wherein the learning scene is a scene in which a user in a venue faces the intelligent learning device and learns from an application displayed by the intelligent learning device;
a scene data collection module, configured to synchronously collect scene data related to the eyes of the user from the venue, the intelligent learning device, and the user;
a scene data correction module, configured to correct the scene data under the constraint of the learning scene;
an eye protection measure generation module, configured to formulate, if the correction is completed, eye protection measures applicable to the learning scene according to the scene data; and
an eye protection measure execution module, configured to execute the eye protection measures in the intelligent learning device to protect the eyes of the user in the learning scene.
25. An intelligent learning device, characterized in that the intelligent learning device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the eye protection method of any one of claims 1-23.
26. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed, causes a processor to implement the eye protection method of any one of claims 1-23.
27. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the eye protection method according to any of claims 1-23.

