CN113469058A - Method and mobile device for preventing myopia - Google Patents

Method and mobile device for preventing myopia

Info

Publication number
CN113469058A
Authority
CN
China
Prior art keywords
user
posture
viewing
correct
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110751416.7A
Other languages
Chinese (zh)
Inventor
贺曙
高炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Future Technology Co ltd
Original Assignee
Guangdong Future Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Future Technology Co ltd filed Critical Guangdong Future Technology Co ltd
Priority to CN202110751416.7A priority Critical patent/CN113469058A/en
Publication of CN113469058A publication Critical patent/CN113469058A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method and a mobile device for preventing myopia, which remind the user to use the eyes correctly so as to prevent myopia. The method comprises the following steps: when the screen is lit, tracking the user's face image through a front-facing camera and calibrating the coordinates of each feature point on the face image; determining, from the coordinates of the feature points, whether the user conforms to a correct viewing posture; if not, determining how long the user maintains the incorrect viewing posture; and when the duration is greater than a first threshold, sending the user a first prompt message that prompts the user to keep a correct eye-use posture.

Description

Method and mobile device for preventing myopia
[ technical field ]
The invention relates to the field of eye detection, and in particular to a method and a mobile device for preventing myopia.
[ background of the invention ]
During their development, children read, write, and use electronic devices frequently, and myopia, strabismus, hunchback, and the like are common ailments among school students. The main cause is poor sitting posture maintained over long periods of study; such bad habits form over time and are difficult to correct through teachers' reminders alone. In particular, the incidence of myopia has risen in recent years and is appearing at ever younger ages. Keeping a correct sitting posture, a proper reading and viewing angle, suitable reading light intensity, and sensible eye use during study is therefore very important for preventing myopia. Various devices or apparatuses have been proposed in the art for correcting students' bad eye habits and preventing myopia, such as sitting-posture correctors and various desk lamps.
It should be noted that more and more cases of myopia are now caused by the use of mobile devices such as mobile phones and iPads; how to correct a user's bad eye habits while the user is using a mobile device is therefore a problem that needs to be solved.
[ summary of the invention ]
In order to solve the problem of myopia caused by users' poor eye-use habits when using mobile devices, the invention provides a method for preventing myopia and a mobile device.
In order to solve the above technical problem, the invention provides the following technical solution: a method of preventing myopia, the method being applied to a mobile device and comprising the following steps: when the screen is lit, tracking the user's face image through a front-facing camera and calibrating the coordinates of each feature point on the face image; determining, from the coordinates of the feature points, whether the user conforms to a correct viewing posture; if not, determining how long the user maintains the incorrect viewing posture; and when the duration is greater than a first threshold, sending the user a first prompt message that prompts the user to keep a correct eye-use posture.
Preferably, the first prompt message is used for prompting that the user's viewing distance is unhealthy; or the first prompt message is used for prompting that the user's viewing angle is unhealthy; or the first prompt message is used for prompting that the user's viewing posture is unhealthy.
Preferably, when the first prompt message is used to prompt that the viewing distance of the user is unhealthy, the determining whether the user conforms to a correct viewing posture according to the coordinates of the feature points on the facial image includes: determining the comprehensive imaging size of the characteristic part of the user according to the coordinates of the characteristic points; calculating the viewing distance according to the comprehensive imaging size of the characteristic part and preset standard data; when the viewing distance is greater than the eye-safe distance, determining that the user is not in compliance with a correct viewing posture.
Preferably, the characteristic portion is a pupil, the comprehensive imaging size of the characteristic portion is a pupil distance, and the determining the comprehensive imaging size of the characteristic portion of the user includes: calculating a pupillary distance L of both eyes of the user by the following formula:
L = √((x2 − x1)² + (y2 − y1)²)
wherein the coordinates of the left pupil are (x1, y1) and the coordinates of the right pupil are (x2, y2).
The calculating of the viewing distance comprises: calculating a transverse viewing angle and a longitudinal viewing angle according to the deviation of the left and right pupils from the picture origin, based on the following formulas: α = |(x2 + x1)|; β = |(y2 + y1)|; wherein α is related to the transverse viewing angle, β is related to the longitudinal viewing angle, and the picture center point is a coordinate-system origin calibrated in advance; when α is smaller than a second threshold and β is smaller than a third threshold, calculating an observation angle from the transverse and longitudinal viewing angles; and calculating the viewing distance from the observation angle and the pupillary distance of the user's two eyes by the following formula: M = L/cos δ; wherein δ represents the observation angle and L represents the pupillary distance of the user's two eyes.
Preferably, when α is greater than the second threshold or β is greater than the third threshold, a first prompt message prompting the user that the viewing angle is unhealthy is sent to the user.
Preferably, when the first prompt message is used to prompt the user that the viewing posture is unhealthy, the determining whether the user conforms to a correct viewing posture comprises: connecting the pupil center point to the mid-face point in the face image and extending the line in both directions to divide the face image; counting the sums of skin-color pixels in the divided face image to determine the left-face area and the right-face area respectively; calculating the degree of strabismus according to the formula d = (L − R)/(L + R), wherein d denotes the degree of strabismus, L denotes the left-face area, and R denotes the right-face area; and determining that the user does not conform to a correct viewing posture when the degree of strabismus is greater than a fourth threshold.
Preferably, when the first prompt message is used to prompt the user that the viewing posture is unhealthy, the determining whether the user conforms to a correct viewing posture comprises: determining from the user's face image that the user is watching the mobile device; deducing the spatial posture of the face from the relative spatial positions of the phone and the face together with the horizontal posture data of a nine-axis gyroscope in the mobile device; and if the horizontal posture data falls within a preset interval and the facial spatial posture falls within a preset interval, judging that the user is watching in a side-lying or supine posture and determining that the user does not conform to a correct viewing posture.
Preferably, sending the first prompt message comprises: displaying it over the screen of the mobile device; or covering the screen of the mobile device; or sending information to a preset telephone number.
The present invention also provides a mobile device for preventing myopia, comprising: a camera for tracking the user's face image and calibrating the coordinates of each feature point on the face image when the screen is lit; a processor for determining, from the coordinates of the feature points, whether the user conforms to a correct viewing posture and, if not, determining how long the user maintains the incorrect viewing posture; and a display module for sending the user a first prompt message when the duration is greater than a first threshold, the first prompt message prompting the user to keep a correct eye-use posture.
The embodiment of the application provides a method for preventing myopia, which specifically comprises the following steps: when the screen is lit, tracking the user's face image through a front-facing camera and calibrating the coordinates of each feature point on the face image; determining, from the coordinates of the feature points, whether the user conforms to a correct viewing posture; if not, determining how long the user maintains the incorrect viewing posture; and when the duration is greater than a first threshold, sending the user a first prompt message that prompts the user to keep a correct eye-use posture. When it is determined that the user is not using the eyes correctly while using the mobile device, the user is reminded to maintain a correct eye-use posture so as to prevent myopia.
[ description of the drawings ]
FIG. 1 is a schematic flow chart of a method for preventing myopia according to an embodiment of the present application;
fig. 2 is a schematic flowchart illustrating a process of determining whether a user conforms to a correct viewing posture according to an embodiment of the present application.
[ detailed description of the embodiments ]
The technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to solve the problem of myopia caused by poor eye use habits of a user when the user uses a mobile device, the invention provides a method for preventing myopia and the mobile device, which are used for prompting the user to correct the poor eye use habits to prevent myopia when the user uses the mobile device.
Referring to fig. 1, a flowchart of a method for preventing myopia provided by the present invention specifically includes the following steps:
101. When the screen is lit, tracking the user's face image through a front-facing camera and calibrating the coordinates of each feature point on the face image;
When the user is using the mobile device and its screen is lit, the user's face image is tracked through the front-facing camera of the mobile device. After the front-facing camera captures the face image, the coordinates of each feature point on the face image are calibrated. The feature points in the face image may be pupils, eye-corner feature points, eye-center feature points, and the like, which are not limited here.
In addition, there are various ways to calibrate the coordinates of the feature points on the face image. For example, the face image may be preprocessed, corner points extracted from the preprocessed image, and the corner points filtered and combined to obtain connected corner regions; the centroid of each connected corner region is then extracted. A specific extraction method is as follows: the brightness difference between the current pixel and its surrounding pixels is computed on a predefined 3×3 template, and pixels whose brightness difference is greater than or equal to a first threshold are extracted as corner points; the 3×3 template consists of the current pixel at the center together with its left, right, upper, lower, upper-left, upper-right, lower-left, and lower-right neighbors. The extracted centroids are then matched against a face template, the matching probability of each centroid with the face template is calculated, and a region formed by centroids whose matching probability is greater than or equal to a predetermined value is located as a candidate face region. The face template may be a rectangular template containing at least three points, where each point is represented as (P, w, h): P is the two-dimensional coordinate of the point, w is the maximum horizontal range within which the point is allowed to appear, and h is the maximum vertical range within which the point is allowed to appear.
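As a concrete illustration of the 3×3 brightness-difference test described above, the following minimal NumPy sketch marks corner candidates; the grayscale input format and the threshold being passed as a parameter are assumptions, not details fixed by the text.

```python
import numpy as np

def extract_corner_candidates(gray: np.ndarray, first_threshold: float) -> np.ndarray:
    """Mark pixels whose brightness differs enough from their 8 neighbors.

    A minimal sketch of the 3x3 brightness-difference test: for every interior
    pixel, the absolute brightness differences to its 8 neighbors are summed,
    and the pixel is kept as a corner candidate when the sum reaches
    `first_threshold`. Border pixels are never selected.
    """
    gray = gray.astype(np.float32)
    h, w = gray.shape
    diff = np.zeros_like(gray)
    # Offsets of the 8 neighbors in the 3x3 template around the center pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               (0, -1),           (0, 1),
               (1, -1),  (1, 0),  (1, 1)]
    center = gray[1:h - 1, 1:w - 1]
    for dy, dx in offsets:
        neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        diff[1:h - 1, 1:w - 1] += np.abs(center - neighbor)
    return diff >= first_threshold  # boolean corner-candidate mask
```

The connected-region grouping, centroid extraction, and face-template matching that follow would be layered on top of this mask.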
Alternatively, the coordinates of the feature points on the face image can be calibrated through AI face recognition, for example with 4-point, 5-point, or 6-point labeling schemes. The most critical facial points number 5: the left mouth corner, the right mouth corner, the centers of the two eyes, and the nose. These 5 key points lie inside the face, and the pose of the face can be computed from them. Of course, there were also earlier schemes that labeled 4 points or 6 points. The FRGC-V2 (Face Recognition Grand Challenge version 2.0) dataset published in 2005 labeled 5 key points in total for the eyes, nose, mouth, and chin. The Caltech 10000 Web Faces dataset published in 2007 labeled 4 key points for the two eyes, nose, and mouth. The AFW dataset of 2013 labeled 6 key points for the eyes, nose, and lips, 3 of which are on the lips. The MTFL/MAFL dataset published in 2014 labeled 5 key points: the eyes, the nose, and the 2 mouth corners. The 68-point labeling is the most common scheme today; it was proposed early on in the XM2VTSDB dataset in 1999, and both the 300-W dataset and the XM2VTS dataset adopt the 68-key-point scheme, which is used by the Dlib algorithm commonly paired with OpenCV.
There are also different versions of the 68-key-point labeling; here we describe the most common version, used in Dlib, which divides the facial key points into internal key points and contour key points: the internal key points total 51 and the contour key points total 17. In the 68-point annotation adopted by Dlib, each eyebrow has 5 key points sampled uniformly from its left boundary to its right boundary, 5 × 2 = 10 in total. Each eye has 6 key points, namely the left and right boundaries plus uniform samples on the upper and lower eyelids, 6 × 2 = 12 in total. The lips have 20 key points; apart from the 2 mouth corners they are divided into the upper lip and the lower lip, with 5 points sampled uniformly on the outer boundary of each and 3 points on the inner boundary of each, 20 in total. The nose annotation adds 4 key points along the nose bridge and 5 uniformly sampled key points on the nose tip, 9 in total. The face contour is sampled uniformly with 17 key points. If the forehead is also included, more key points, say 81, can be obtained. Calibrating the feature points on a face image is therefore prior art and is not described further here.
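For the AI-landmark route, the sketch below uses Dlib's publicly available 68-point shape predictor to obtain pupil-equivalent feature points; the model file name and the use of eye-landmark centroids as pupil approximations are assumptions rather than requirements of the method.

```python
import dlib
import numpy as np

# The standard 68-point model file must be downloaded separately; the path is an assumption.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def pupil_equivalent_points(gray: np.ndarray):
    """Approximate the two pupil positions as the centroids of the eye landmarks.

    In Dlib's 68-point scheme (0-indexed) points 36-41 outline one eye and
    42-47 outline the other; their centroids serve as pupil-equivalent feature
    points (x1, y1) and (x2, y2) for the distance and angle calculations below.
    """
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)], dtype=float)
    pupil_a = pts[36:42].mean(axis=0)
    pupil_b = pts[42:48].mean(axis=0)
    return pupil_a, pupil_b
```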
Therefore, there are various ways to obtain the coordinates of each feature point on the face image, and the method is not limited in particular.
102. Determining whether the user accords with a correct watching posture or not according to the coordinates of each feature point on the face image;
103. if not, determining the duration of time that the user maintains the incorrect viewing posture;
After the coordinates of the feature points on the face image are determined, whether the user conforms to a correct viewing posture is determined according to those coordinates, where a correct viewing posture may include a correct viewing distance, a correct viewing angle, and a correct body posture while viewing. Referring to fig. 2, the flow of determining whether the user conforms to a correct viewing posture comprises:
1021. determining the comprehensive imaging size of the characteristic part of the user according to the coordinates of the characteristic points
In practical applications, one or more groups of feature points in the face image may be the pupils, or feature points equivalent to the pupils, such as eye-corner feature points and eye-center feature points. In this application the pupil is taken as the example feature portion, so the comprehensive imaging size of the feature portion is the pupillary distance.
specifically, the pupil distance L of the two eyes of the user can be calculated by the following formula:
L = √((x2 − x1)² + (y2 − y1)²)
wherein the coordinates of the left pupil are (x1, y1) and the coordinates of the right pupil are (x2, y2).
1022. Calculating the viewing distance according to the comprehensive imaging size of the characteristic part and preset standard data;
After the comprehensive imaging size of the user's feature portion, that is, the pupillary distance of the two eyes, is determined, a transverse viewing angle and a longitudinal viewing angle are calculated according to the deviation of the left and right pupils from the picture origin, specifically by the following formulas:
α = |(x2 + x1)|;
β = |(y2 + y1)|;
wherein α is related to the transverse viewing angle, β is related to the longitudinal viewing angle, and the picture center point is the coordinate-system origin calibrated in advance.
When α is smaller than a second threshold and β is smaller than a third threshold, an observation angle is calculated from the transverse and longitudinal viewing angles, and the viewing distance is then calculated from the observation angle and the pupillary distance of the user's two eyes by the following formula:
M = L/cos δ;
wherein δ represents the observation angle and L represents the pupillary distance of the user's two eyes.
1023. When the viewing distance is greater than the eye-safe distance, determining that the user is not in compliance with a correct viewing posture.
And after the viewing distance is obtained, comparing the viewing distance with the eye safety distance, and when the viewing distance is greater than the eye safety distance, determining that the user does not conform to a correct viewing posture.
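Putting steps 1021–1023 together, a minimal sketch is shown below. The text does not specify how the observation angle δ is obtained from α and β, nor the numeric eye-safe distance, so `observation_angle` and `EYE_SAFE_DISTANCE` are placeholders standing in for calibrated values.

```python
import math

EYE_SAFE_DISTANCE = 300.0  # assumed threshold, in the same units as M

def pupil_distance(x1, y1, x2, y2):
    """Pupillary distance L between the left pupil (x1, y1) and right pupil (x2, y2)."""
    return math.hypot(x2 - x1, y2 - y1)

def viewing_offsets(x1, y1, x2, y2):
    """alpha and beta: deviation of the pupil pair from the pre-calibrated picture origin."""
    return abs(x2 + x1), abs(y2 + y1)

def observation_angle(alpha, beta):
    # Placeholder: the text only says delta is derived from the transverse and
    # longitudinal viewing angles; a calibrated mapping would go here.
    return math.atan2(math.hypot(alpha, beta), 1000.0)  # 1000.0 is an assumed scale

def viewing_distance_incorrect(x1, y1, x2, y2, second_threshold, third_threshold):
    L = pupil_distance(x1, y1, x2, y2)
    alpha, beta = viewing_offsets(x1, y1, x2, y2)
    if alpha >= second_threshold or beta >= third_threshold:
        return None  # handled by the viewing-angle prompt instead (see below)
    delta = observation_angle(alpha, beta)
    M = L / math.cos(delta)          # viewing distance, M = L / cos(delta)
    return M > EYE_SAFE_DISTANCE     # "greater than the eye-safe distance", as written
```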
Optionally, in this embodiment of the present application, the determining whether the user conforms to the correct viewing posture may further include the following several ways:
First, when α is greater than the second threshold or β is greater than the third threshold, a first prompt message prompting the user that the viewing angle is unhealthy is sent to the user.
Secondly, the pupil center point is connected to the mid-face point in the face image and the line is extended in both directions to divide the face image; the sums of skin-color pixels are counted from the divided face image to determine the left-face area and the right-face area respectively; the degree of strabismus is calculated according to the formula d = (L − R)/(L + R), where d denotes the degree of strabismus, L denotes the left-face area, and R denotes the right-face area; and when the degree of strabismus is greater than a fourth threshold, it is determined that the user does not conform to a correct viewing posture.
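A minimal sketch of this left/right skin-area comparison is given below; the YCrCb skin-color thresholds and the simplification of splitting the image along the vertical line through the midpoint of the two pupils are assumptions.

```python
import cv2
import numpy as np

def squint_degree(bgr_image: np.ndarray, left_pupil, right_pupil) -> float:
    """d = (L - R) / (L + R) computed from the skin-pixel areas of the two face halves.

    The image is split along a vertical line through the midpoint of the pupils
    (a simplification of the pupil-center-to-mid-face line in the text), and the
    skin mask uses common YCrCb thresholds that would need tuning in practice.
    """
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127)) > 0
    mid_x = int((left_pupil[0] + right_pupil[0]) / 2)
    L = int(skin[:, :mid_x].sum())   # left-face skin-pixel count
    R = int(skin[:, mid_x:].sum())   # right-face skin-pixel count
    return (L - R) / (L + R) if (L + R) else 0.0

# Usage: treat the posture as incorrect when the degree exceeds the fourth threshold.
# if squint_degree(frame, (x1, y1), (x2, y2)) > fourth_threshold: ...
```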
In addition, in a smartphone the gyroscope is a sensor mainly used to detect the attitude of the phone; without it, users could not play motion-sensing games, and it is also needed for image stabilization when taking photos and, at times, for better positioning during phone navigation. The earliest gyroscopes were all mechanical: they were bulky, contained an actual rotor spinning at high speed, demanded high machining precision, and were sensitive to vibration, so the accuracy of navigation systems based on mechanical gyroscopes was never high. In today's smartphones the gyro sensor has evolved into a small chip and can be regarded as an upgraded counterpart of the acceleration sensor. The acceleration sensor monitors and senses linear motion along a given axis, while the gyroscope senses rotation in 3D space, so it can recognize orientation, determine attitude, and compute angular velocity.
In view of this, the embodiment of the present application may further determine whether the user conforms to a correct viewing posture through the gyroscope. Specifically, it is determined from the user's face image that the user is watching the mobile device; the spatial posture of the face is deduced from the relative spatial positions of the phone and the face together with the horizontal posture data of a nine-axis gyroscope in the mobile device; and if the horizontal posture data falls within a preset interval and the facial spatial posture falls within a preset interval, it is judged that the user is watching in a side-lying or supine posture, and it is determined that the user does not conform to a correct viewing posture.
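A minimal sketch of this attitude check follows; the pitch/roll representation of the horizontal posture data, the interval bounds, and the external face-pose test are all assumptions, since the text only requires that both quantities fall within preset intervals.

```python
# Assumed interval bounds (degrees); a real implementation would calibrate these.
SUPINE_PITCH_RANGE = (-20.0, 20.0)     # device held roughly face-up, as when lying on the back
SIDE_LYING_ROLL_RANGE = (60.0, 120.0)  # device rolled sideways, as when lying on one side

def in_range(value: float, bounds: tuple) -> bool:
    low, high = bounds
    return low <= value <= high

def lying_posture_detected(pitch: float, roll: float, face_pose_in_preset_interval: bool) -> bool:
    """True when the gyroscope attitude and the deduced face pose both fall in the
    preset intervals associated with supine or side-lying viewing."""
    if not face_pose_in_preset_interval:
        return False
    return in_range(pitch, SUPINE_PITCH_RANGE) or in_range(roll, SIDE_LYING_ROLL_RANGE)
```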
In summary, the embodiments of the present application provide several ways of determining whether the user conforms to a correct viewing posture; these ways can be combined in the mobile device so that an incorrect viewing posture is detected more accurately and the user is prompted to use the eyes healthily and prevent myopia.
104. When the duration is greater than a first threshold, sending a first prompt message to the user, wherein the first prompt message is used for prompting the user to keep a correct eye using posture.
When the duration is greater than a first threshold, a first prompt message is sent to the user to prompt the user to maintain a correct eye-use posture. Specifically, the first prompt message may be displayed over the screen of the mobile device; or the screen of the mobile device may be covered to prompt the user that the distance to the screen is less than the safe eye distance; or the screen may be turned off and the first prompt message broadcast by voice; or the first prompt message may be sent to a preset telephone number as a notification. For example, when the viewing distance is too close, the first prompt message is an early-warning signal that the viewing distance is unhealthy; when the viewing angle is larger than the second threshold, the first prompt message is an early-warning signal that the viewing angle is unhealthy; and when the degree of strabismus is larger than the fourth threshold, it is determined that the user does not conform to a correct viewing posture and the first prompt message is an early-warning signal that the viewing posture is unhealthy.
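A minimal sketch of the duration check in step 104 follows; the 20-second threshold and the prompt callback are assumptions.

```python
import time

FIRST_THRESHOLD_SECONDS = 20.0  # assumed value for the first threshold

class PostureMonitor:
    """Tracks how long the user has held an incorrect viewing posture."""

    def __init__(self, prompt_callback):
        self._incorrect_since = None
        self._prompt = prompt_callback  # e.g. show an overlay, cover the screen, or notify a preset number

    def update(self, posture_correct: bool) -> None:
        now = time.monotonic()
        if posture_correct:
            self._incorrect_since = None  # posture recovered, reset the timer
            return
        if self._incorrect_since is None:
            self._incorrect_since = now   # incorrect posture first observed
        elif now - self._incorrect_since > FIRST_THRESHOLD_SECONDS:
            self._prompt()                # send the first prompt message
            self._incorrect_since = now   # avoid re-prompting on every frame
```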
In addition, an embodiment of the present application further provides a mobile device for preventing myopia, comprising: a camera for tracking the user's face image and calibrating the coordinates of each feature point on the face image when the screen is lit; a processor for determining, from the coordinates of the feature points, whether the user conforms to a correct viewing posture and, if not, determining how long the user maintains the incorrect viewing posture; and a display module for sending the user a first prompt message when the duration is greater than a first threshold, the first prompt message prompting the user to keep a correct eye-use posture.
The above description is only one or several embodiments of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method of preventing myopia, the method being applied to a mobile device, comprising the steps of:
when a screen is lit, tracking a face image of a user through a front-facing camera, and calibrating coordinates of each characteristic point on the face image;
determining whether the user accords with a correct watching posture or not according to the coordinates of each feature point on the face image;
if not, determining the duration of time that the user maintains the incorrect viewing posture;
when the duration is greater than a first threshold, sending a first prompt message to the user, wherein the first prompt message is used for prompting the user to keep a correct eye-use posture.
2. The method of claim 1,
the first prompt message is used for prompting that the viewing distance of the user is unhealthy;
or the first prompt message is used for prompting that the user is unhealthy in viewing angle;
or the first prompt message is used for prompting that the user is unhealthy in watching posture.
3. The method according to claim 2, wherein when the first prompt message is used to prompt the user that the viewing distance is unhealthy, the determining whether the user conforms to a correct viewing posture according to coordinates of feature points on the facial image comprises:
determining the comprehensive imaging size of the characteristic part of the user according to the coordinates of the characteristic points;
calculating the viewing distance according to the comprehensive imaging size of the characteristic part and preset standard data;
when the viewing distance is greater than the eye-safe distance, determining that the user is not in compliance with a correct viewing posture.
4. The method of claim 3, wherein the feature is a pupil, wherein the composite imaged size of the feature is a pupil distance, and wherein determining the composite imaged size of the feature of the user comprises:
calculating a pupillary distance L of both eyes of the user by the following formula:
L = √((x2 − x1)² + (y2 − y1)²)
wherein the coordinates of the left pupil are (x1, y1) and the coordinates of the right pupil are (x2, y2).
5. The method of claim 4, wherein the calculating the viewing distance comprises:
calculating a transverse viewing angle and a longitudinal viewing angle according to the deviation of the left pupil and the right pupil from the picture origin based on the following formula:
α=|(x2+x1)|;
β=|(y2+y1)|;
wherein, the alpha is related to the transverse visual angle, the beta is related to the longitudinal visual angle, and the picture center point is a coordinate system origin calibrated in advance;
when the alpha is smaller than a second threshold and the beta is smaller than a third threshold, calculating an observation angle according to the transverse visual angle and the longitudinal visual angle;
calculating the viewing distance according to the observation angle and the pupil distance of the two eyes of the user by the following formula:
M=L/cosδ;
wherein δ is used to represent the viewing angle, and L is used to represent the pupillary distance between the two eyes of the user.
6. The method of claim 5, wherein when the α is greater than the second threshold or the β is greater than the third threshold, a first prompt message for prompting the user that the viewing angle is not healthy is sent to the user.
7. The method of claim 2, wherein the first prompting message is used to prompt the user that the viewing posture is unhealthy, and wherein the determining whether the user is in compliance with a correct viewing posture comprises:
connecting the pupil center point to the mid-face point in the face image and extending the line in both directions to divide the face image;
counting the sum of skin color pixels according to the segmented face image, and respectively determining a left face area and a right face area;
calculating the degree of strabismus according to the formula d = (L − R)/(L + R), wherein d denotes the degree of strabismus, L denotes the left-face area, and R denotes the right-face area;
determining that the user is not in compliance with a correct viewing posture when the degree of squinting is greater than a fourth threshold.
8. The method of claim 2, wherein the first prompting message is used to prompt the user that the viewing posture is unhealthy, and wherein the determining whether the user is in compliance with a correct viewing posture comprises:
determining that the user is watching a mobile device according to the facial image of the user;
deducing the spatial posture of the face according to the relative spatial positions of the mobile phone and the face and the horizontal posture data of a nine-axis gyroscope in the mobile equipment;
and if the horizontal posture data falls within a preset interval and the facial spatial posture falls within a preset interval, judging that the user is watching in a side-lying or supine posture and determining that the user does not conform to a correct viewing posture.
9. The method of claim 1, wherein the first prompting message comprises:
displaying the first prompt message on the screen of the mobile device;
or,
covering the screen of the mobile device;
or,
sending information to a preset telephone number.
10. A mobile device for preventing myopia, comprising:
the camera is used for tracking a face image of a user and calibrating coordinates of each characteristic point on the face image when the screen is lit;
the processor is used for determining whether the user accords with a correct watching posture or not according to the coordinates of each feature point on the face image; if not, determining the duration of time that the user maintains the incorrect viewing posture;
and the display module is used for sending a first prompt message to the user when the duration is greater than a first threshold, wherein the first prompt message is used for prompting the user to keep a correct eye-use posture.
CN202110751416.7A 2021-07-02 2021-07-02 Method and mobile device for preventing myopia Pending CN113469058A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110751416.7A CN113469058A (en) 2021-07-02 2021-07-02 Method and mobile device for preventing myopia


Publications (1)

Publication Number Publication Date
CN113469058A 2021-10-01

Family

ID=77877622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110751416.7A Pending CN113469058A (en) 2021-07-02 2021-07-02 Method and mobile device for preventing myopia

Country Status (1)

Country Link
CN (1) CN113469058A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1845612A (en) * 2005-04-08 2006-10-11 三星电子株式会社 Three-dimensional display device and method using hybrid position-tracking system
US20110310097A1 (en) * 2009-01-21 2011-12-22 Nikon Corporation Image processing apparatus, image processing method, recording method, and recording medium
CN102662476A (en) * 2012-04-20 2012-09-12 天津大学 Gaze estimation method
CN104715234A (en) * 2014-12-31 2015-06-17 湘潭大学 Side view detecting method and system
CN106919250A (en) * 2015-12-28 2017-07-04 ***通信集团公司 A kind of based reminding method and device
CN107066089A (en) * 2017-03-08 2017-08-18 北京互讯科技有限公司 A kind of mobile phone eye posture guard method based on computer vision technique
CN107426423A (en) * 2017-07-17 2017-12-01 深圳天珑无线科技有限公司 Reminding method, terminal and the computer-readable storage medium of posture are used based on terminal
CN110381305A (en) * 2019-07-31 2019-10-25 南方医科大学南方医院 Naked eye 3D's removes crosstalk method, system, storage medium and electronic equipment


Similar Documents

Publication Publication Date Title
US11900729B2 (en) Eyewear including sign language to speech translation
CN106598221B (en) 3D direction of visual lines estimation method based on eye critical point detection
JP5728009B2 (en) Instruction input device, instruction input method, program, recording medium, and integrated circuit
CN110363867B (en) Virtual decorating system, method, device and medium
EP3339943A1 (en) Method and system for obtaining optometric parameters for fitting eyeglasses
JP4692526B2 (en) Gaze direction estimation apparatus, gaze direction estimation method, and program for causing computer to execute gaze direction estimation method
CN109343700B (en) Eye movement control calibration data acquisition method and device
CN110708533B (en) Visual assistance method based on augmented reality and intelligent wearable device
WO2008007781A1 (en) Visual axis direction detection device and visual line direction detection method
KR20040107890A (en) Image slope control method of mobile phone
CN110998666B (en) Information processing device, information processing method, and program
CN109766007A (en) A kind of the blinkpunkt compensation method and compensation device, display equipment of display equipment
WO2023124693A1 (en) Augmented reality scene display
TWI669703B (en) Information display method and information display apparatus suitable for multi-person viewing
CN114078278A (en) Method and device for positioning fixation point, electronic equipment and storage medium
JP2000097676A (en) Method and apparatus for detecting direction of face
US11335090B2 (en) Electronic device and method for providing function by using corneal image in electronic device
CN112232128A (en) Eye tracking based method for identifying care needs of old disabled people
CN113570645A (en) Image registration method, image registration device, computer equipment and medium
JP2009104426A (en) Interactive sign system
CN109194952B (en) Head-mounted eye movement tracking device and eye movement tracking method thereof
CN113989831A (en) Myopia prevention and control method, device, terminal equipment and storage medium
CN116324891A (en) Eye wear distortion correction
JP2006285531A (en) Detection device for eye direction, detecting method for eye direction, program for executing the same detecting method for eye direction by computer
CN113469058A (en) Method and mobile device for preventing myopia

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination