CN105786430B - Information processing method and electronic equipment - Google Patents

Information processing method and electronic equipment

Info

Publication number
CN105786430B
CN105786430B (application CN201610105775.4A)
Authority
CN
China
Prior art keywords
user
visual
display unit
acquiring
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610105775.4A
Other languages
Chinese (zh)
Other versions
CN105786430A (en)
Inventor
杨大业
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201610105775.4A priority Critical patent/CN105786430B/en
Publication of CN105786430A publication Critical patent/CN105786430A/en
Application granted granted Critical
Publication of CN105786430B publication Critical patent/CN105786430B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1407General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Eye Examination Apparatus (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses an information processing method and electronic equipment, wherein the information processing method comprises the following steps: acquiring visual condition parameters of a user; acquiring the visual distance and/or visual angle of a user relative to a display unit; determining a pre-filtering parameter according to the visual condition parameter and the visual distance and/or the visual angle; and adjusting the light field of the display unit according to the pre-filtering parameter.

Description

Information processing method and electronic equipment
Technical Field
The present invention relates to the field of information technologies, and in particular, to an information processing method and an electronic device.
Background
With the development of information technology, the variety of electronic devices keeps growing and their range of application keeps widening. People view text and image information on all kinds of electronic devices such as mobile phones, tablet computers, notebook computers, and televisions. Many people have visual defects, whether congenital or acquired: some are near-sighted, some far-sighted, some astigmatic, and some suffer from glaucoma. As a result, even when an electronic device uses its normal display mode, a picture that looks perfectly clear to a user with normal vision may still pose a visual problem for a user with a visual defect.
Of course, even a user with normal vision who watches the display of the electronic device may develop, or aggravate, vision problems through an incorrect viewing posture or similar issues.
Disclosure of Invention
In view of the above, it is desirable to provide an information processing method and an electronic device that at least partially solve the problem of poor viewing effect, such as insufficient viewing clarity for visually impaired users.
To this end, the technical solution of the invention is realized as follows:
a first aspect of an embodiment of the present invention provides an information processing method, where the method includes:
acquiring visual condition parameters of a user;
acquiring the visual distance and/or visual angle of a user relative to a display unit;
determining a pre-filtering parameter according to the visual condition parameter and the visual distance and/or the visual angle;
and adjusting the light field of the display unit according to the pre-filtering parameter.
Based on the above scheme, the acquiring of the visual condition parameter of the user includes:
outputting vision test information;
collecting user feedback information based on the vision test information;
and determining and storing the visual condition parameters of the user according to the user feedback information and the visual distance and/or visual angle of the user relative to the display unit during the test.
Based on the above scheme, the acquiring of the visual condition parameter of the user includes:
collecting user identification characteristics;
and inquiring the pre-stored visual condition parameters according to the user identification characteristics.
Based on the above scheme, the collecting the user identification features includes:
the facial features of the user are collected.
Based on the above scheme, the acquiring the visual distance and/or the visual angle of the user relative to the display unit includes:
acquiring a user image by using an infrared camera;
and analyzing the user image, and determining the visual distance and/or visual angle of the user relative to the display unit.
Based on the above scheme, the acquiring the user image with an infrared camera includes:
acquiring the user image at regular time;
the method further comprises the following steps:
determining whether a user is watching the display unit according to the user image;
the analyzing the user image and determining the visual distance and/or the visual angle of the user relative to the display unit comprises:
and if the user watches the display unit, analyzing the user image to determine the sight distance and/or the visual angle.
A second aspect of an embodiment of the present invention provides an electronic device, including:
the first acquisition unit is used for acquiring visual condition parameters of a user;
a second acquisition unit for acquiring a viewing distance and/or a viewing angle of a user with respect to the display unit;
a determining unit for determining a pre-filtering parameter based on the vision condition parameter and the viewing distance and/or the viewing angle;
and the adjusting unit is used for adjusting the light field of the display unit according to the pre-filtering parameter.
Based on the scheme, the first obtaining unit is specifically used for outputting vision test information; collecting user feedback information based on the vision test information; and determining and storing the visual condition parameters of the user according to the user feedback information and the visual distance and/or visual angle of the user relative to the display unit during the test.
Based on the scheme, the first obtaining unit is specifically used for collecting user identification features; and inquiring the pre-stored visual condition parameters according to the user identification characteristics.
Based on the above scheme, the first obtaining unit is specifically configured to collect facial features of the user.
Based on the scheme, the second acquisition unit is specifically used for acquiring the user image by using the infrared camera; and analyzing the user image, and determining the visual distance and/or visual angle of the user relative to the display unit.
Based on the above scheme, the second obtaining unit is specifically configured to collect the user image at regular time;
the determining unit is further used for determining whether the user watches the display unit according to the user image;
the second obtaining unit is further configured to analyze the user image to determine the viewing distance and/or the viewing angle if the user is watching the display unit.
The information processing method and the electronic device provided by the embodiments of the invention acquire the visual condition parameters of the user, determine the pre-filtering parameters in combination with at least one of the viewing angle and the viewing distance, and use the light field adjusted by the pre-filtering parameters to output a display picture that is reflected more clearly in the eyes of users with visual defects.
Drawings
Fig. 1 is a schematic flowchart of a first information processing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating a display of vision testing information according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a comparison of user viewing effects of displayed images before and after adjusting a light field according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a second information processing method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a third information processing method according to an embodiment of the present invention;
fig. 7 is a schematic flowchart of a fourth information processing method according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a user viewing display unit according to an embodiment of the present invention;
FIG. 9 is a schematic view, in the spatial domain, of the display as viewed by the user according to the embodiment of the present invention;
fig. 10 is a schematic diagram of the frequency domain corresponding to fig. 9.
Detailed Description
The technical solution of the present invention is further described in detail with reference to the drawings and the specific embodiments of the specification.
The first embodiment is as follows:
as shown in fig. 1, the present embodiment provides an information processing method, including:
step S110: acquiring visual condition parameters of a user;
step S120: acquiring the visual distance and/or visual angle of a user relative to a display unit;
step S130: determining a pre-filtering parameter according to the visual condition parameter and the visual distance and/or the visual angle;
step S140: and adjusting the light field of the display unit according to the pre-filtering parameter.
The information processing method described in this embodiment can be applied to various electronic devices with light-field display units, and is particularly suitable for personal electronic devices viewed by a small number of users, such as mobile phones, tablet computers, wearable devices, electronic books, or personal computers.
The visual condition parameters in the present embodiment may include various parameters characterizing the visual condition of the user, for example, a visual defect parameter capable of characterizing the visual defect of the user.
In this embodiment, acquiring the visual condition parameters of the user in step S110 may include: receiving visual condition parameters input through a human-machine interaction interface. The visual condition parameter can be any information characterizing the visual condition of the user received from the human-machine interaction interface, such as the prescription of the glasses the user wears or the user's vision test values. By receiving such information, step S110 can determine whether the user has a visual defect, what kind of defect it is, and how severe the defect is.
In step S120, the distance between the user and the display unit while the user is currently viewing it is obtained, referred to in this embodiment as the viewing distance; the angle at which the user views the display unit is also obtained, referred to simply as the viewing angle. Both the viewing distance and the viewing angle affect the viewing experience, for example the clarity perceived by the user. In step S120, the distance of the user from the display unit and the viewing angle relative to the display unit may be determined by image acquisition, for example with a three-dimensional image acquisition unit.
In step S130, the pre-filtering parameter is determined by combining the visual condition parameter with at least one of the viewing distance and the viewing angle at which the user currently views the display unit. The pre-filtering parameter may include parameters such as the phase of the light emitted by the display unit, so that the display picture best suited to the current user can be obtained. On the one hand, this improves the viewing experience, such as viewing clarity, for users with visual defects; on the other hand, for users without visual defects, it reduces the risk of visual defects caused by long-term viewing at an unsuitable viewing distance or angle, thereby protecting the user's vision.
For example, if step S110 finds that the user is visually impaired, for instance near-sighted or far-sighted, step S130 determines the pre-filtering parameter applicable to that visually impaired user at the current viewing position, and the light field of the display unit is then adjusted accordingly. In this embodiment, adjusting the light field may include adjusting parameters such as the phase, light intensity, and color tone of the light output by the display unit, so that the picture displayed by the display unit appears more clearly to the visually impaired user, giving a better viewing experience.
For another example, step S110 may find that the user's eyesight is normal, while the viewing distance and viewing angle obtained in the other steps indicate that the user is too close to the display unit. To reduce the risk of visual defects caused by prolonged viewing in such an improper position, this embodiment determines a corresponding pre-filtering parameter and adjusts the light field accordingly.
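The embodiments do not specify how the pre-filtering parameter is computed or how the light field is adjusted. As a non-authoritative sketch of one common interpretation, the following Python code applies a Wiener-style inverse filter to a frame before display, so that the assumed defocus blur of the viewer's eye approximately cancels it. The Gaussian blur model, the function names, and the `sigma_px`/`nsr` parameters are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def defocus_otf(shape, sigma_px):
    """Frequency response of an assumed Gaussian defocus blur.

    sigma_px models how far the eye's focus error spreads a point,
    in display pixels; in this sketch it would be derived from the
    visual condition parameter and the viewing distance/angle.
    """
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2 * (np.pi * sigma_px) ** 2 * (fx ** 2 + fy ** 2))

def prefilter(image, sigma_px, nsr=1e-2):
    """Wiener-style inverse filter: boost the spatial frequencies the
    eye's blur will attenuate, so that (prefiltered image) convolved
    with the eye's blur approximates the original image."""
    otf = defocus_otf(image.shape, sigma_px)
    wiener = otf / (otf ** 2 + nsr)          # regularised inverse of the blur
    out = np.fft.ifft2(np.fft.fft2(image) * wiener).real
    return np.clip(out, 0.0, 1.0)            # keep values displayable
```

Under this model, a frame pre-filtered with the viewer's estimated blur should, after passing through that blur, land closer to the intended picture than an unfiltered frame would.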
Example two:
as shown in fig. 1, the present embodiment provides an information processing method, including:
step S110: acquiring visual condition parameters of a user;
step S120: acquiring the visual distance and/or visual angle of a user relative to a display unit;
step S130: determining a pre-filtering parameter according to the visual condition parameter and the visual distance and/or the visual angle;
step S140: and adjusting the light field of the display unit according to the pre-filtering parameter.
The step S110 includes:
outputting vision test information;
collecting user feedback information based on the vision test information;
and determining and storing the visual condition parameters of the user according to the user feedback information and the visual distance and/or visual angle of the user relative to the display unit during the test.
The vision test information may be a vision test icon; a common standard logarithmic visual acuity chart is shown in fig. 2. After the vision test information is output, the user's test answers can be collected, for example by capturing user gestures with a camera, or by collecting the user's voice with an audio collection device. The test answers are one kind of user feedback information; the user feedback information may further include reply information indicating whether the user can clearly see part or all of the displayed vision test information. In summary, the user feedback information represents the result of the user viewing the vision test information from the current position.
Further, whether the user's vision is defective, and what kind or degree of defect exists, can be determined from the user feedback information together with the viewing distance and/or viewing angle during the test; in this way the visual condition parameters are acquired simply and accurately.
In a specific implementation, the user identification information and the vision condition information of the user can be stored, so that when the same user views the electronic device again, the vision test does not have to be repeated every time; this reduces the time and operations spent on vision testing, simplifies the processing of the information processing method of this embodiment, and improves processing efficiency. Of course, to account for changes in the user's vision over time, the user's visual condition can also be re-determined at regular times rather than always relying on the stored visual condition information. For example, a validity period can be specified for the visual condition parameters determined by one vision test; once that period has elapsed, the user's vision needs to be retested.
In summary, the present embodiment provides a method for easily and accurately acquiring the vision parameters of the user. The left diagram in fig. 3 is a schematic view of the effect of viewing the vision test information before the light field is adjusted; the right diagram of fig. 3 shows the viewing effect after the light field is adjusted. As the comparison in fig. 3 shows, after the light field is adjusted the viewing clarity is markedly improved for a user with a visual defect.
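The patent does not state how the feedback and the measured viewing distance are combined into a visual condition parameter. As an illustrative sketch only, one conventional possibility is to convert the smallest optotype the user could resolve at the measured distance into a decimal acuity value, using the standard chart geometry in which a stroke subtending one arcminute corresponds to acuity 1.0 (the function and its inputs are assumptions):

```python
import math

ARCMIN = math.pi / (180 * 60)  # one arcminute in radians

def decimal_acuity(stroke_mm, distance_mm):
    """Decimal visual acuity estimated from the smallest optotype
    stroke width (mm) the user could resolve at the measured viewing
    distance (mm). A stroke subtending 1 arcminute gives acuity 1.0;
    a larger required stroke means a lower acuity."""
    angle_arcmin = math.atan2(stroke_mm, distance_mm) / ARCMIN
    return 1.0 / angle_arcmin
```

This is one way the "user feedback information" and the test-time viewing distance could jointly yield a stored vision parameter; the patent itself leaves the conversion unspecified.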
Example three:
as shown in fig. 1, the present embodiment provides an information processing method, including:
step S110: acquiring visual condition parameters of a user;
step S120: acquiring the visual distance and/or visual angle of a user relative to a display unit;
step S130: determining a pre-filtering parameter according to the visual condition parameter and the visual distance and/or the visual angle;
step S140: and adjusting the light field of the display unit according to the pre-filtering parameter.
The step S110 may include:
collecting user identification characteristics;
and inquiring the pre-stored visual condition parameters according to the user identification characteristics.
The electronic device may store visual condition information of the user in advance, for example information received earlier through the human-computer interaction interface, or visual condition parameters obtained with the method described in the second embodiment. In this embodiment, a user identification feature is first collected. Collecting the user identification feature may include receiving user identification information such as a user name input by the user, or collecting biometric information such as the user's voiceprint or fingerprint; in any case, it is a feature that can identify the user. After the user identification feature is collected, the visual condition information of the current user can be obtained by querying the pre-stored visual condition information.
As a further improvement of this embodiment, collecting the user identification feature may include collecting the facial features of the user. In this embodiment an image collector can capture the user's face image, and the facial features are obtained through face recognition technology. Since step S120 may also use an image collector to capture images of the user and determine the viewing angle and/or the viewing distance from them, step S110 and step S120 can reuse the same frame of image to obtain both the user identification feature and the viewing angle and/or viewing distance. This simplifies the operation of the electronic device and reduces the number of images that need to be captured.
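The stored-parameter lookup of this embodiment, together with the validity period suggested in the second embodiment, can be sketched as a small keyed store. This is a hypothetical illustration only; the class name, the dictionary backing, and the expiry mechanism are assumptions, not the patent's implementation:

```python
import time

class VisionParamStore:
    """Hypothetical local store mapping a user identification feature
    (e.g. a face-recognition ID or user name) to stored visual
    condition parameters. Entries expire after `valid_s` seconds,
    modelling the description's suggestion that a user whose vision
    may have changed should be retested."""

    def __init__(self, valid_s=180 * 24 * 3600):
        self.valid_s = valid_s
        self._db = {}

    def save(self, user_id, params, now=None):
        self._db[user_id] = (now if now is not None else time.time(), params)

    def query(self, user_id, now=None):
        entry = self._db.get(user_id)
        if entry is None:
            return None              # unknown user: run the vision test
        stored_at, params = entry
        t = now if now is not None else time.time()
        if t - stored_at > self.valid_s:
            return None              # validity period elapsed: retest
        return params
```

A `None` result in either branch would trigger the vision test flow of the second embodiment.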
Example four:
as shown in fig. 1, the present embodiment provides an information processing method, including:
step S110: acquiring visual condition parameters of a user;
step S120: acquiring the visual distance and/or visual angle of a user relative to a display unit;
step S130: determining a pre-filtering parameter according to the visual condition parameter and the visual distance and/or the visual angle;
step S140: and adjusting the light field of the display unit according to the pre-filtering parameter.
As shown in fig. 4, step S120 may include:
step S121: acquiring a user image by using an infrared camera;
step S122: and analyzing the user image, and determining the visual distance and/or visual angle of the user relative to the display unit.
In this embodiment an infrared camera is used to capture user images. The positional relationship between the infrared camera and the display unit is generally known; to simplify the analysis of the viewing distance and the viewing angle, the infrared camera is usually disposed at the same position as the display unit, for example integrated with it. After the user image is acquired it is analyzed: combined with the acquisition parameters, such as the focal length, the distance of the user relative to the display unit (i.e., the viewing distance) can be calculated, and the viewing angle of the user relative to the display unit can be determined from the captured image. Typically the captured user image includes at least the user's eyes. During image acquisition, the electronic device may output prompt information asking the user to look at the display screen in front; once the infrared camera detects the user's eyes in a preview image, the corresponding preview image is stored and the acquisition of the user image for step S120 is considered complete.
Of course, although this embodiment collects the user image with an infrared camera, in a specific implementation the viewing distance and/or viewing angle of the user's eyes relative to the display unit may also be determined with an eye tracker, pupil-tracking technology, and the like; the method is not limited to the above. Using an infrared camera only to capture the user image has the advantages of simple implementation and a simple hardware structure. Infrared cameras are already integrated in many electronic devices, so the viewing distance and/or viewing angle can be acquired with existing hardware, improving its effective utilization and reducing hardware cost.
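The description says the viewing distance can be calculated from acquisition parameters such as the focal length, without giving the formula. A standard pinhole-camera sketch is shown below, using the apparent spacing of the user's eyes in the image; the average interpupillary distance of about 63 mm, and both function names, are illustrative assumptions, not values from the patent:

```python
import math

def viewing_distance_mm(focal_px, ipd_px, ipd_mm=63.0):
    """Pinhole-camera estimate of the viewing distance: a real
    interpupillary distance of ipd_mm (assumed ~63 mm for adults)
    appears as ipd_px pixels in an image captured with a focal
    length of focal_px pixels."""
    return focal_px * ipd_mm / ipd_px

def viewing_angle_deg(eye_center_px, image_center_px, focal_px):
    """Angle of the user's eyes off the camera's optical axis,
    assuming the camera sits at the display unit as the
    description suggests, so the axis is normal to the screen."""
    return math.degrees(math.atan2(eye_center_px - image_center_px, focal_px))
```

For instance, eyes 126 px apart seen through a 1000 px focal length would put the user roughly half a metre from the screen under these assumptions.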
It is worth noting that the information processing method described in this embodiment is a further improvement on the technical solution provided in any of the foregoing embodiments. For example, in the information processing method of this embodiment, the visual condition information of the user may also be determined by outputting vision test information and collecting user feedback information, or acquired by collecting the user identification feature and querying the stored information. In short, the information processing method of this embodiment can be combined with any of the technical solutions described in the foregoing embodiments, insofar as they do not contradict each other, to obtain further technical solutions.
Example five:
as shown in fig. 1, the present embodiment provides an information processing method, including:
step S110: acquiring visual condition parameters of a user;
step S120: acquiring the visual distance and/or visual angle of a user relative to a display unit;
step S130: determining a pre-filtering parameter according to the visual condition parameter and the visual distance and/or the visual angle;
step S140: and adjusting the light field of the display unit according to the pre-filtering parameter.
As shown in fig. 4, step S120 may include:
step S121: acquiring a user image by using an infrared camera;
step S122: and analyzing the user image, and determining the visual distance and/or visual angle of the user relative to the display unit.
The step S121 may include: acquiring the user image at regular time; the method further comprises the following steps: determining whether a user is watching the display unit according to the user image.
The step S122 may include: and if the user watches the display unit, analyzing the user image to determine the sight distance and/or the visual angle.
When the user moves during viewing, the viewing distance and/or viewing angle change, and the previously determined pre-filtering parameter may no longer suit the current viewing scene. To reduce this problem, in this embodiment the user image is collected at regular times, periodically or non-periodically, and the viewing distance and/or viewing angle is re-determined, so that the light field of the display unit is controlled more accurately. On the one hand this ensures that a visually impaired user keeps seeing clearly and enjoys a better viewing experience; on the other hand it reduces the risk of vision damage caused by too small a viewing distance or an unsuitable viewing angle.
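The timed re-acquisition loop of this embodiment can be sketched as follows. The four callables stand in for the patent's units (gaze check, pose estimation, pre-filter application); their names and the "only re-filter when the pose changed" optimisation are assumptions added for illustration:

```python
def monitor(frames, is_watching, estimate_pose, apply_prefilter):
    """One pass over periodically captured frames: re-estimate the
    (viewing distance, viewing angle) pose only when the user is
    actually watching the display unit, and re-apply the pre-filter
    only when the pose has changed since the last update."""
    last_pose = None
    for frame in frames:
        if not is_watching(frame):   # user looked away: skip analysis
            continue
        pose = estimate_pose(frame)  # (distance, angle) from the image
        if pose != last_pose:
            apply_prefilter(pose)    # recompute and apply pre-filter
            last_pose = pose
    return last_pose
```

In a real device the frame source would be the infrared camera's timer-driven capture rather than a Python iterable.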
Example six:
as shown in fig. 5, the present embodiment provides an electronic device, including:
a first obtaining unit 110, configured to obtain a visual condition parameter of a user;
a second acquiring unit 120 for acquiring a viewing distance and/or a viewing angle of a user with respect to the display unit;
a determining unit 130 for determining a pre-filtering parameter based on the visual condition parameter and the viewing distance and/or the viewing angle;
and an adjusting unit 140, configured to adjust the light field of the display unit according to the pre-filtering parameter.
The electronic apparatus of the present embodiment may be various electronic apparatuses including a display unit, for example, a personal electronic apparatus. The personal electronic device may include various electronic devices such as a personal computer, a mobile phone, an electronic reader, a digital assistant, and the like.
The first obtaining unit 110 may include a collecting sensor, and obtains the visual condition information of the user by collecting information. The first acquiring unit 110 may also correspond to various types of communication interfaces such as a receiving interface, and may be capable of receiving visual condition information input by a user or receiving the visual condition information from other electronic devices.
The second obtaining unit 120, the determining unit 130 and the adjusting unit 140 may correspond to a processor or a processing circuit; the processor may be a central processing unit, a digital signal processor, a microprocessor, or a programmable processing array. The processing circuit may comprise an application-specific integrated circuit. The processor may also include processing structures such as the image processor of the display unit.
In summary, the processor or processing circuit implements the above acquisition of the viewing distance or viewing angle by executing predetermined instructions, and determines the pre-filtering parameter to adjust the light field of the display unit. From the visual condition parameter and at least one of the viewing angle and the viewing distance, the electronic device provided by this embodiment obtains a display suited to the user, so that visually impaired users see a clearer picture, while the risk of vision problems for users with normal vision is also reduced.
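The cooperation of the four units in fig. 5 can be sketched as a simple composition. The class name, the callable-based wiring, and the `refresh` entry point are assumptions for illustration; only the four unit roles come from the description:

```python
class ElectronicDevice:
    """Sketch of the unit composition of fig. 5: each unit is supplied
    as a callable. The data flow follows steps S110-S140: vision
    parameters and pose in, pre-filtering parameter out, light field
    adjusted last."""

    def __init__(self, get_vision_params, get_pose,
                 decide_prefilter, adjust_light_field):
        self.get_vision_params = get_vision_params    # first acquiring unit
        self.get_pose = get_pose                      # second acquiring unit
        self.decide_prefilter = decide_prefilter      # determining unit
        self.adjust_light_field = adjust_light_field  # adjusting unit

    def refresh(self, user):
        params = self.get_vision_params(user)
        distance, angle = self.get_pose()
        pf = self.decide_prefilter(params, distance, angle)
        return self.adjust_light_field(pf)
```

Any of the concrete behaviours from the earlier embodiments (vision test, stored-parameter lookup, infrared-camera pose estimation) could be plugged in as these callables.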
Example seven:
as shown in fig. 5, the present embodiment provides an electronic device, including:
a first obtaining unit 110, configured to obtain a visual condition parameter of a user;
a second acquiring unit 120 for acquiring a viewing distance and/or a viewing angle of a user with respect to the display unit;
a determining unit 130 for determining a pre-filtering parameter based on the visual condition parameter and the viewing distance and/or the viewing angle;
and an adjusting unit 140, configured to adjust the light field of the display unit according to the pre-filtering parameter.
The first obtaining unit 110 is specifically configured to output vision test information; collecting user feedback information based on the vision test information; and determining and storing the visual condition parameters of the user according to the user feedback information and the visual distance and/or visual angle of the user relative to the display unit during the test.
In this embodiment, the first obtaining unit 110 may correspond to an output module configured to control the display unit to output the vision test information; the vision test information may be a standard logarithmic visual acuity chart as shown in fig. 2. The first obtaining unit may also correspond to a collector or an input device for collecting the user feedback information. The first obtaining unit 110 may further correspond to a processor or a processing circuit that determines the current visual condition parameters of the user by combining the user feedback information from the collector with the viewing angle or viewing distance parameters.
The present embodiment provides a first obtaining unit 110 for simply and accurately obtaining the visual condition information of the user on the basis of the foregoing embodiments.
In a specific implementation process, the electronic device may further include a storage unit, where the storage unit may correspond to various storage media, and may be configured to store the visual condition parameters and the user identification features acquired by the first acquiring unit, so that the subsequent first acquiring unit may directly read the visual condition parameters of the user from the storage unit.
Example eight:
as shown in fig. 5, the present embodiment provides an electronic device, including:
a first obtaining unit 110, configured to obtain a visual condition parameter of a user;
a second acquiring unit 120 for acquiring a viewing distance and/or a viewing angle of a user with respect to the display unit;
a determining unit 130 for determining a pre-filtering parameter based on the visual condition parameter and the viewing distance and/or the viewing angle;
and an adjusting unit 140, configured to adjust the light field of the display unit according to the pre-filtering parameter.
The first obtaining unit 110 is specifically configured to collect a user identification feature; and inquiring the pre-stored visual condition parameters according to the user identification characteristics.
In this embodiment, the first obtaining unit 110 is specifically configured to collect a user identification feature, for example, a facial feature, an iris feature, a fingerprint feature, or another biometric feature of the user, or an account and password input by the user. In short, the user identification feature may be any information capable of distinguishing different users.
The first obtaining unit 110 then queries the pre-stored visual condition parameters according to the user identification feature, for example, by querying another electronic device or by querying a local storage unit. If the first obtaining unit 110 queries another electronic device, it may correspond to an external communication interface and perform information interaction with the other electronic device through that interface, so as to obtain the visual condition information. Of course, the first obtaining unit 110 may also correspond to an internal communication interface, and the internal communication interface may be used to read the visual condition information from the local storage unit.
In short, the first obtaining unit 110 in this embodiment collects the user identification feature and uses it as a query basis to look up the pre-stored visual condition information. This makes acquiring the visual condition information simple and convenient, and keeps the structure simple.
As a further improvement of this embodiment, the first obtaining unit 110 is specifically configured to collect facial features of a user.
In this embodiment, the user identification feature acquired by the first acquiring unit 110 is a facial feature of the user. The first acquiring unit 110 and the second acquiring unit 120 may each obtain the facial feature, or the viewing distance or viewing angle, from captured images. They may even extract the facial feature and the viewing distance or viewing angle from the same user image, so that fewer user images need to be captured and the computational load of the electronic device is reduced.
Example nine:
as shown in fig. 5, the present embodiment provides an electronic device, including:
a first obtaining unit 110, configured to obtain a visual condition parameter of a user;
a second acquiring unit 120 for acquiring a viewing distance and/or a viewing angle of a user with respect to the display unit;
a determining unit 130 for determining a pre-filtering parameter based on the visual condition parameter and the viewing distance and/or the viewing angle;
and an adjusting unit 140, configured to adjust the light field of the display unit according to the pre-filtering parameter.
The second obtaining unit 120 is specifically configured to collect a user image by using an infrared camera; and analyzing the user image, and determining the visual distance and/or visual angle of the user relative to the display unit.
In this embodiment, the second obtaining unit 120 may correspond to an infrared camera or a control device connected to the infrared camera, and may be configured to collect an image of a user by using the infrared camera, and obtain a viewing distance and/or a viewing angle by analyzing the image of the user by using a processor or a processing circuit. Referring to the corresponding method embodiment, the second obtaining unit 120 has the characteristics of simple hardware structure, high hardware utilization rate and simple implementation.
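One plausible way for the second obtaining unit to derive the viewing distance and viewing angle from an infrared user image is a pinhole-camera estimate based on the pixel distance between the detected pupils. The patent does not specify the method; the constants and function names below are assumptions for illustration only:

```python
import math

# Assumed constants (hypothetical values, not from the patent):
FOCAL_LENGTH_PX = 1000.0   # camera focal length expressed in pixels
MEAN_IPD_MM = 63.0         # average adult interpupillary distance

def viewing_distance_mm(ipd_px: float) -> float:
    """Pinhole-camera estimate: real size / image size = distance / focal length."""
    return MEAN_IPD_MM * FOCAL_LENGTH_PX / ipd_px

def viewing_angle_deg(eye_center_px, image_center_px) -> float:
    """Angle between the camera's optical axis and the line of sight,
    from the pixel offset of the eye midpoint relative to the image centre."""
    dx = eye_center_px[0] - image_center_px[0]
    dy = eye_center_px[1] - image_center_px[1]
    offset = math.hypot(dx, dy)
    return math.degrees(math.atan2(offset, FOCAL_LENGTH_PX))

# A user whose pupils appear 126 px apart sits at about 500 mm.
assert abs(viewing_distance_mm(126.0) - 500.0) < 1.0
# An eye midpoint at the image centre gives a 0-degree viewing angle.
assert viewing_angle_deg((640, 360), (640, 360)) == 0.0
```

A production implementation would calibrate the focal length per device and could replace the mean interpupillary distance with the per-user value measured during calibration.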
Example ten:
as shown in fig. 5, the present embodiment provides an electronic device, including:
a first obtaining unit 110, configured to obtain a visual condition parameter of a user;
a second acquiring unit 120 for acquiring a viewing distance and/or a viewing angle of a user with respect to the display unit;
a determining unit 130 for determining a pre-filtering parameter based on the visual condition parameter and the viewing distance and/or the viewing angle;
and an adjusting unit 140, configured to adjust the light field of the display unit according to the pre-filtering parameter.
The second obtaining unit 120 is specifically configured to collect the user image at regular time;
the determining unit 130 is further configured to determine whether a user is watching the display unit according to the user image;
the second obtaining unit 120 is further configured to, if the user watches the display unit, analyze the user image to determine the viewing distance and/or the viewing angle.
In this embodiment, the determining unit 130 is further configured to determine, from the user image, whether the user is watching the display unit. For example, an image capturing device may be integrated with the display unit such that the capturing surface of the image capturing device faces the same direction as the display surface of the display unit; if the user image captured at this time includes an image of the user's eyes, the user is considered to be watching the display unit, and if it does not include an image of the user's eyes, the user is considered not to be watching the display unit.
In this embodiment, the second obtaining unit 120 may further correspond to a timer or timing structure, and is capable of obtaining at least one of the viewing distance and the viewing angle of the user at regular intervals, so as to periodically trigger the adjustment of the light field of the display unit and thereby output a display image suitable for the user to view.
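The periodic capture-and-check behaviour described above can be sketched as follows. The frame dictionaries, `gaze_detected`, and the callback are hypothetical stand-ins for the units in this embodiment, not APIs from the patent:

```python
def gaze_detected(frame) -> bool:
    """Stand-in for the determining unit 130: a frame counts as
    'watching' when it contains an image of the user's eyes."""
    return frame.get("eyes_visible", False)

def timed_adjustment_loop(frames, adjust) -> int:
    """Process frames captured at regular intervals (second obtaining
    unit 120) and trigger the light-field adjustment (adjusting unit
    140) only for frames in which the user is actually watching."""
    adjustments = 0
    for frame in frames:
        if gaze_detected(frame):
            adjust(frame)      # e.g. re-derive pre-filter and apply it
            adjustments += 1
    return adjustments

frames = [{"eyes_visible": True}, {"eyes_visible": False}, {"eyes_visible": True}]
assert timed_adjustment_loop(frames, lambda f: None) == 2
```

In a real device the frames would come from the infrared camera on a timer interrupt rather than from a pre-built list.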
Several specific examples are provided below in connection with any of the above embodiments:
example one:
as shown in fig. 6, the present example provides an information processing method including:
Step S1: after the screen is turned on, prompting the user to enter a calibration flow, turning on the IR camera, and detecting the position of the user's eyes. For example, the screen prompts the user to enter the calibration flow by displaying a prompt message.
Step S2: and displaying the test image. The test image here may be the image shown in fig. 2.
Step S3: prompting the user to move to a position where the screen content is no longer clearly visible.
Step S4: determining, according to the user feedback information, whether the user can clearly see the screen display content; if so, proceeding to step S5, and if not, proceeding to step S7.
Step S5: and acquiring the visual defect characteristics of the user.
Step S6: and extracting and recording the visual defect characteristic value of the user.
Step S7: determining the viewing distance and viewing angle of the user's eyes relative to the screen.
Step S8: pre-filtering parameters for adjusting the light field are calculated.
Step S9: when a user change operation is detected, the flow returns to step S2.
The screen in this example is the display unit in the foregoing embodiment, and the screen here may be a light field display. The visual defect feature value and the visual defect feature may be composition information of the visual condition information.
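The branching of steps S2 to S8 above can be sketched as follows; every stub, parameter, and field name here is hypothetical, introduced only to make the control flow concrete:

```python
def calibration_flow(sees_clearly, measure_defect, measure_geometry, compute_prefilter):
    """Sketch of steps S2-S8: after the test image is shown and the user
    has moved (S2-S3), branch on the feedback (S4): record the visual
    defect features only when the user reports clearly seeing the
    content (S5-S6), then measure viewing geometry (S7) and derive the
    pre-filtering parameters (S8)."""
    record = {}
    if sees_clearly:                                                  # step S4
        record["defect"] = measure_defect()                           # steps S5-S6
    record["distance_mm"], record["angle_deg"] = measure_geometry()   # step S7
    record["prefilter"] = compute_prefilter(record)                   # step S8
    return record

result = calibration_flow(
    sees_clearly=True,
    measure_defect=lambda: {"sphere": -2.5},        # placeholder measurement
    measure_geometry=lambda: (500.0, 0.0),          # placeholder distance/angle
    compute_prefilter=lambda rec: {"gain": 1.0},    # placeholder parameters
)
assert result["defect"]["sphere"] == -2.5
assert result["distance_mm"] == 500.0
```

Steps S1 and S9 (prompting and re-entry on a user change) would wrap this function in an outer event loop.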
FIG. 8 is a schematic view of a light field display viewed by a user's eye. In fig. 8, x denotes the retinal coordinate; u denotes the lens coordinate; y denotes the light field display coordinate; x0 denotes the retina center coordinate; u1 denotes the lens center coordinate; y0 denotes the light field display center coordinate. De denotes the lens-to-retina distance; D0 denotes the distance from the lens to the light field display. u0 denotes one coordinate at which the ray from y0 enters the lens.
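Although the patent does not state the mapping explicitly, under a standard thin-lens paraxial model the coordinates of fig. 8 relate as follows (a hedged reconstruction, not text from the source):

```latex
% A ray leaving the display at y and crossing the lens at u has slope
% (u - y)/D_0 before the lens; a thin lens of focal length f bends it
% to slope (u - y)/D_0 - u/f, so it lands on the retina at
x = u + D_e\left(\frac{u - y}{D_0} - \frac{u}{f}\right).
% If the eye is focused on the display, 1/f = 1/D_0 + 1/D_e,
% and the u-dependent terms cancel:
x = -\frac{D_e}{D_0}\, y.
```

For a focused eye the retinal position thus depends only on the display coordinate y, while a defocused eye leaves a residual u-dependence — consistent with the horizontal-line versus diagonal-line behaviour described for fig. 10 below.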
Fig. 9 is a schematic diagram of the display effect on 3 spatial domains. The left diagram in fig. 9 is a schematic diagram of the display effect of the light field display; the middle and right diagrams shown in fig. 9 are the display effects of the general display in the focused state and the unfocused state, respectively.
Fig. 10 is a schematic diagram of the display effect corresponding to fig. 9 in the frequency domain.
The light field emitted by the display is parameterized by the y, x, u coordinates. The left diagram of fig. 10 shows the propagation of light. The middle diagram of fig. 10 corresponds to the user's eye in a focused state: the light field incident on the retina reduces to a horizontal line in the frequency domain, and the screen content is seen clearly. If the user's eye is not focused, the viewing effect is as shown in the right diagram of fig. 10: the light field incident on the retina becomes a diagonal line in the frequency domain, so some spatial frequencies suffer energy loss (as on an ordinary 2D display in the unfocused state). By adjusting the spatial frequency content of the light field, the eye can be made to perceive a sharp image.
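A minimal sketch of such spatial-frequency pre-filtering, assuming the defocus blur can be modelled as a disc-shaped point-spread function and using a Wiener-style regularised inverse. The patent does not prescribe a specific filter; this is one common choice, shown for illustration:

```python
import numpy as np

def defocus_psf(size: int, radius: int) -> np.ndarray:
    """Disc-shaped point-spread function approximating defocus blur."""
    yy, xx = np.mgrid[:size, :size] - size // 2
    psf = ((xx**2 + yy**2) <= radius**2).astype(float)
    return psf / psf.sum()

def prefilter(image: np.ndarray, psf: np.ndarray, nsr: float = 1e-2) -> np.ndarray:
    """Wiener-style inverse filter: boost the spatial frequencies that
    the defocused eye attenuates, so that blur(prefiltered) is closer
    to the original image than blur(original)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    W = np.conj(H) / (np.abs(H)**2 + nsr)   # regularised inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
psf = defocus_psf(32, 2)
H = np.fft.fft2(np.fft.ifftshift(psf), s=img.shape)
blur = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * H))
# The pre-filtered image, once blurred by the eye, matches the
# original better than the unfiltered image does.
err_pre = np.mean((blur(prefilter(img, psf)) - img) ** 2)
err_plain = np.mean((blur(img) - img) ** 2)
assert err_pre < err_plain
```

Note that the pre-filtered values can exceed the displayable range; a real light field display mitigates this by spreading the correction across multiple ray directions rather than clipping a single 2D image.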
Example two:
as shown in fig. 7, the present example provides an information processing method including:
Step S11: after the screen is turned on and the user's consent is obtained, turning on the IR camera to collect the user image. The IR camera here is the aforementioned infrared camera; in a specific implementation process it may certainly also be a color infrared camera.
Step S12: determining from the user image whether the user is looking at the screen; if so, proceeding to step S14, and if not, proceeding to step S13.
Step S13: waiting for one processing interval, and then returning to step S12.
Step S14: determining the viewing angle and the viewing distance.
Step S15: an adjusted pre-filtering parameter is calculated.
Step S16: the light field is adjusted for display, and the process returns to step S13. The specific adjustment here is to adjust the light field using the pre-filtering parameters determined in step S15.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (12)

1. An information processing method, characterized in that the method comprises:
acquiring visual condition parameters of a user; the visual condition parameters comprise visual defect parameters characterizing visual defects of the user;
acquiring the visual distance and/or visual angle of a user relative to a display unit;
determining a pre-filtering parameter according to the visual condition parameter and the visual distance and/or the visual angle;
adjusting the light field of the display unit according to the pre-filtering parameter; the adjusting the light field of the display unit includes adjusting a spatial frequency composition of the light field output by the display unit.
2. The method of claim 1,
the acquiring of the visual condition parameters of the user comprises:
outputting vision test information;
collecting user feedback information based on the vision test information;
and determining and storing the visual condition parameters of the user according to the user feedback information and the visual distance and/or visual angle of the user relative to the display unit during the test.
3. The method of claim 2,
the acquiring of the visual condition parameters of the user comprises:
collecting user identification characteristics;
and inquiring the pre-stored visual condition parameters according to the user identification characteristics.
4. The method of claim 3,
the collecting user identification features includes:
the facial features of the user are collected.
5. The method according to any one of claims 1 to 4,
the acquiring of the visual distance and/or the visual angle of the user relative to the display unit comprises:
acquiring a user image by using an infrared camera;
and analyzing the user image, and determining the visual distance and/or visual angle of the user relative to the display unit.
6. The method of claim 5,
the acquiring a user image by using an infrared camera comprises:
acquiring the user image at regular time;
the method further comprises the following steps:
determining whether a user is watching the display unit according to the user image;
the analyzing the user image and determining the visual distance and/or the visual angle of the user relative to the display unit comprises:
and if the user is watching the display unit, analyzing the user image to determine the visual distance and/or the visual angle.
7. An electronic device, characterized in that the electronic device comprises:
the first acquisition unit is used for acquiring visual condition parameters of a user; the visual condition parameters comprise visual defect parameters characterizing visual defects of the user;
a second acquisition unit for acquiring a viewing distance and/or a viewing angle of a user with respect to the display unit;
a determining unit for determining a pre-filtering parameter based on the vision condition parameter and the viewing distance and/or the viewing angle;
the adjusting unit is used for adjusting the light field of the display unit according to the pre-filtering parameter; the adjusting the light field of the display unit includes adjusting a spatial frequency composition of the light field output by the display unit.
8. The electronic device of claim 7,
the first acquisition unit is specifically used for outputting vision test information; collecting user feedback information based on the vision test information; and determining and storing the visual condition parameters of the user according to the user feedback information and the visual distance and/or visual angle of the user relative to the display unit during the test.
9. The electronic device of claim 7,
the first acquisition unit is specifically used for acquiring user identification characteristics; and inquiring the pre-stored visual condition parameters according to the user identification characteristics.
10. The electronic device of claim 9,
the first acquisition unit is specifically used for acquiring facial features of a user.
11. The electronic device of any of claims 7-10,
the second acquisition unit is specifically used for acquiring a user image by using an infrared camera; and analyzing the user image, and determining the visual distance and/or visual angle of the user relative to the display unit.
12. The electronic device of claim 11,
the second acquiring unit is specifically used for acquiring the user image at regular time;
the determining unit is further used for determining whether the user watches the display unit according to the user image;
the second obtaining unit is further configured to analyze the user image to determine the viewing distance and/or the viewing angle if the user is watching the display unit.
Publications (2)

Publication Number Publication Date
CN105786430A CN105786430A (en) 2016-07-20
CN105786430B (en) 2020-08-25
