WO2018196444A1 - Virtual reality playing device, control method thereof, and computer readable storage medium - Google Patents

Virtual reality playing device, control method thereof, and computer readable storage medium

Info

Publication number
WO2018196444A1
WO2018196444A1 (PCT/CN2018/072136; CN2018072136W)
Authority
WO
WIPO (PCT)
Prior art keywords
human eye
optical lens
virtual reality
pupil
playing device
Prior art date
Application number
PCT/CN2018/072136
Other languages
English (en)
French (fr)
Inventor
Wu Yonghui (吴永辉)
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation
Publication of WO2018196444A1

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G02B27/0172 Head mounted characterised by optical features
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B13/00 Optical objectives specially designed for the purposes specified below
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00 Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/02 Mountings, adjusting means, or light-tight connections, for optical elements for lenses
    • G02B7/04 Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification
    • G02B7/09 Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification adapted for automatic focusing or varying magnification

Definitions

  • the present disclosure relates to, but is not limited to, the field of broadcasting, and in particular to a virtual reality playing device, a control method thereof, and a computer readable storage medium.
  • VR glasses (virtual reality glasses) in the related art use a mobile phone as the display screen and implement virtual reality functions through related applications on the phone.
  • the virtual reality playing device involves an interaction among an optical lens, a display screen, and the human eye, and the best visual effect is achieved only when the optical centers of the three are on the same straight line.
  • the optical lens is located between the display screen and the human eye, so it is important to adjust the position of the optical lens.
  • the present disclosure provides a virtual reality playing device, a control method thereof, and a computer readable storage medium, which enable a virtual reality playing device to automatically adjust an optical lens to accommodate users having different eye features.
  • the present disclosure provides a control method of a virtual reality playing device, where the virtual reality playing device includes a movably disposed optical lens, and the control method of the virtual reality playing device includes the following steps:
  • the optical lens is controlled to move to the focus position.
  • the step of acquiring a human eye image obtained by imaging through the optical lens during the moving process comprises:
  • the contrast comparison principle is used to analyze the acquired images and obtain the clearest human eye image;
  • the determining the focus position of the optical lens according to the human eye image comprises determining a focus position of the optical lens according to the clearest human eye image.
  • the determining the focus position of the optical lens according to the clearest human eye image comprises:
  • an axial focus position of the optical lens is determined, where the position is adapted to the refractive power of the human eye.
  • the determining the focus position of the optical lens according to the clearest human eye image comprises:
  • the lateral focus position of the optical lens is determined based on the human eye data to accommodate the pupil distance of the human eye.
  • the step of acquiring human eye data according to the clearest human eye image includes:
  • the pupil distance is calculated from the pupil position of the human eye.
  • the step of calculating a pupil distance according to the position of the human eye pupil comprises:
  • the analysis determines the preset type of the pupil position of the human eye
  • the pupil distance calculation method corresponding to the preset type is used to calculate the pupil distance.
  • the step of analyzing the acquired human eye image to obtain the clearest human eye image includes:
  • the contrast comparison principle is used to determine the sharpest image.
  • the step of determining the focus position of the optical lens according to the clearest human eye image further includes:
  • human eye data is acquired based on the clearest human eye image.
  • the present disclosure also provides a virtual reality playback device including a movably disposed optical lens, a beam splitter positioned between the user's eyes and the screen of the playback device, and a camera configured to capture the human eye image that is imaged through the optical lens and reflected at the beam splitter; the virtual reality playback device further includes a memory, a control processor, and a control program stored in the memory and operable on the control processor, where the steps of the method described above are implemented when the control program is executed by the control processor.
  • the virtual reality playing device further includes a driving circuit
  • the optical lens includes a bracket, a lens disposed on the bracket, a conductive coil electrically connected to the driving circuit, and an N-pole magnet and an S-pole magnet respectively disposed at the two ends of the lens; the conductive coil is subjected to a Lorentz force produced by the magnetic field of the N-pole and S-pole magnets and the current transmitted from the driving circuit to the conductive coil, driving the lens to move back and forth.
  • the present disclosure also proposes a computer readable storage medium having stored thereon a control program, the control program being executed by the control processor to implement the steps of the control method of the virtual reality playback device as described above.
  • the present disclosure also provides a computer readable storage medium storing computer executable instructions that, when executed, implement a control method of the virtual reality playback device.
  • the control method of the virtual reality playing device proposed by the present disclosure is: controlling the optical lens to move in the virtual reality playing device; acquiring a human eye image obtained by imaging through the optical lens during the movement; determining a focus position of the optical lens according to the human eye image; and controlling the optical lens to move to the focus position.
  • the method of the present disclosure can adjust the position of the optical lens according to different eye features of the user, realize focusing of the optical lens, and bring a clear visual experience for different users.
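The adjustment flow summarized above (steps S10 to S40) can be sketched as a simple search loop. The following Python sketch is illustrative only; the callbacks for capturing an image at a lens position and scoring its sharpness are hypothetical placeholders, not interfaces defined in the disclosure.

```python
# Illustrative sketch of the control method (steps S10-S40).
# capture_eye_image and sharpness are hypothetical callbacks.

def focus_optical_lens(positions, capture_eye_image, sharpness):
    """Move the lens through candidate positions, score each captured
    human eye image, and return the position of the sharpest image."""
    best_position = None
    best_score = float("-inf")
    for position in positions:                 # S10: move the lens step by step
        image = capture_eye_image(position)    # S20: image through the lens
        score = sharpness(image)
        if score > best_score:                 # S30: track the focus position
            best_score = score
            best_position = position
    return best_position                       # S40: move the lens here
```

A caller would pass the voice coil motor's step positions and an image sharpness metric (for example, image contrast) as the two callbacks.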
  • FIG. 1 is a simplified schematic diagram of a virtual reality playing device according to the present disclosure
  • FIG. 2 is an optical path transmission diagram of a virtual reality playing device according to the present disclosure
  • FIG. 3 is a diagram of the conversion of image coordinates captured by the virtual reality playing device of the present disclosure into distance data;
  • FIG. 4 is a schematic structural view of an optical lens of the present disclosure
  • FIG. 5 is a schematic flowchart diagram of a method for controlling a virtual reality playback device according to a first embodiment of the present disclosure
  • FIG. 6 is a schematic flowchart of a method for controlling a virtual reality playing device according to a second embodiment of the present disclosure
  • FIG. 7 is a schematic flowchart diagram of a control method of a virtual reality playback device according to a third embodiment of the present disclosure.
  • FIG. 8 is a schematic flowchart of a method for controlling a virtual reality playing device according to a fourth embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a virtual reality playback device of the present disclosure.
  • FIG. 1 is a schematic structural diagram of a virtual reality playing device 100 according to an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram of a method for controlling a virtual reality playing device according to an embodiment of the present disclosure.
  • the virtual reality playback device 100 includes an optical lens 101.
  • control method of the virtual reality playing device includes:
  • Step S10 controlling the optical lens to move in the virtual reality playing device
  • the control method of the virtual reality playing device in the embodiment is applicable to the virtual reality playing device 100.
  • the virtual reality playing device 100 is movably provided with optical lenses 101; there are two optical lenses 101, respectively corresponding to the positions of the left and right eyes, and each optical lens 101 can be moved back and forth and left and right, in directions away from or toward the position of the human eye, under the control of the control system.
  • the virtual reality playing device 100 may further be provided with a camera 102.
  • the number of the cameras 102 may be two, respectively disposed on the upper side or the lower side of the optical lenses 101, namely a left camera [102-1] and a right camera [102-2]; images are captured by the cameras 102.
  • the camera 102 does not directly capture the human eye image, but acquires an image of the human eye after it has been imaged through the optical lens 101.
  • the imaging path of the camera 102 is as follows: Referring to FIG. 2, the camera 102 captures an image transmitted from the optical lens 101 and reflected to the beam splitter 103.
  • alternatively, only one camera 102 may be provided.
  • the one camera 102 is disposed on the center line of the left optical lens 101 and the right optical lens 101, and the coordinate conversion method of the captured image may be according to the set camera 102. The position is adjusted accordingly.
  • the optical center of the image captured by the camera 102 may be required to coincide with the optical center of the optical lens 101 (that is, the left camera image center coincides with the left optical lens center at X1, and the right camera image center coincides with the right optical lens center at X2); the coordinate origin is the center point of the left and right optical lenses, the images taken by the left and right cameras are respectively P1 and P2, the angles of view of the left and right cameras are respectively θ1 and θ2, and the object distances are D1 and D2.
  • the virtual reality playing device 100 may further include a driving circuit 1015.
  • the optical lens 101 may include a bracket (not shown), a lens 1011 disposed on the bracket, a conductive coil 1012 electrically connected to the driving circuit 1015, and an N-pole magnet 1013 and an S-pole magnet 1014 respectively disposed at the two ends of the lens 1011.
  • the conductive coil 1012 of the optical lens 101, together with the N-pole magnet 1013 and the S-pole magnet 1014 respectively disposed at the two ends of the optical lens 101 near the conductive coil 1012, may constitute a voice coil motor. When the driving circuit 1015 energizes the conductive coil 1012, the conductive coil 1012 is subjected to a Lorentz force under the action of the magnetic field, and the lens 1011 is moved back and forth under the action of that force. It should be noted that the two ends of the optical lens 101 refer to the two ends in the up-down direction with reference to a normally oriented human eye.
  • the working principle is as follows: when the driving circuit 1015 supplies current to the conductive coil 1012, the direction of the force on the conductive coil 1012 can be obtained from the Lorentz force it receives in the magnetic field and the direction of the current; that is, the force on the conductive coil 1012 in the magnetic field is determined according to the left-hand rule. Further, the direction of movement of the lens 1011 can be changed by changing the direction of the current supplied by the driving circuit 1015, and the distance moved by the lens 1011 can be controlled according to the magnitude of that current.
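The current-to-motion relationship described above can be modeled very simply: the sign of the current sets the direction of the Lorentz force, and the travel distance scales with the current magnitude. The following toy model is an assumption for illustration; the gain constant `MM_PER_AMP` is hypothetical and not a value from the disclosure.

```python
# Toy model of the voice coil drive: displacement is taken as
# proportional to the signed drive current. MM_PER_AMP is a
# hypothetical gain constant used only for illustration.

MM_PER_AMP = 2.0  # assumed displacement gain, mm per ampere

def lens_displacement(current_amps):
    """Signed lens displacement in mm for a signed drive current in A.
    Positive current moves the lens one way, negative the other."""
    return MM_PER_AMP * current_amps

def current_for_displacement(target_mm):
    """Inverse mapping: drive current needed for a target displacement."""
    return target_mm / MM_PER_AMP
```

In a real device the gain would be calibrated, and the controller would command the driving circuit 1015 with the computed current.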
  • the driving circuit 1015 energizes the conductive coil 1012 so that the optical lens 101 moves back and forth, away from or toward the position of the human eye, under the action of the Lorentz force; by controlling this back-and-forth movement, the distance between the human eye and the optical lens 101 can be adjusted, which is equivalent to focusing.
  • the camera 102 can capture an image at each position of the optical lens 101, and the clearest picture among the captured pictures is obtained by contrast comparison analysis, so that the optical lens 101 is moved to, and locked at, the position corresponding to the clearest picture. In this way, the position of the optical lens 101 can be adjusted according to differences in human vision, adapting the device to different visual acuities.
  • the optical lens 101 of the virtual reality playing device 100 can be automatically returned to its original position, which facilitates re-acquiring eye data when a different user uses the device next time.
  • the position of the optical lens 101 is adjusted.
  • the optical lens 101 can also move left and right under the control of the electric motor.
  • the left and right movement of the optical lens 101 actually finds the position most suitable for the user's pupil distance according to the pupil distance of the human eye.
  • whether the optical lens 101 is moved left and right first or back and forth first can be set as needed, so that the optical lens 101 finally moves to the focus position.
  • Step S20 acquiring a human eye image obtained by imaging through the optical lens during the moving process
  • the camera 102 can capture the human eye image when the optical lens 101 moves to different positions. Since the image captured by the camera 102 is transmitted through the optical lens 101 and reflected by the beam splitter, moving the optical lens 101 back and forth is equivalent to adjusting the distance from the eye to the object (i.e., the object distance). As the optical lens 101 moves, the sharpness of the image captured at each position may differ, so whether a given position of the optical lens 101 is the position at which the human eye sees most clearly can be judged directly from the image sharpness. That is, the back-and-forth movement of the optical lens 101 actually adjusts the position of the optical lens 101 according to the diopter of the human eye.
  • the camera 102 can capture and save an image corresponding to each step of the voice coil motor; the captured images can be stored in a storage device provided in the virtual reality playing device, or in a cloud server connected to the virtual reality playback device or to a mobile phone.
  • images with different sharpness can be analyzed according to the contrast comparison principle to obtain the clearest image.
  • the principle is as follows: as the optical lens 101 starts to move, the picture gradually becomes clearer and the contrast starts to rise; when the picture is clearest and the contrast is highest, the lens is already in focus, but the system does not yet know this, so the optical lens 101 continues to move. When the contrast is found to begin to decrease, the optical lens 101 is moved a little further and the contrast drops again; the system then knows that the focus has been passed, and the optical lens 101 retreats to the position with the highest contrast, completing the focusing and finding the clearest image.
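The overshoot-and-retreat behavior described above can be sketched as a small hill-climbing loop. This is a minimal sketch, assuming a `contrast()` callback that scores the image captured at a given lens position; that callback is a hypothetical stand-in for the actual image analysis.

```python
# Sketch of the contrast comparison focusing principle: step the lens
# while contrast rises, detect the sustained drop after the peak, then
# return (retreat) to the position with the highest contrast.

def contrast_focus(step_positions, contrast):
    """Return the lens position with the highest contrast, stopping
    once the contrast has fallen for two consecutive steps."""
    best_pos, best_val = None, float("-inf")
    drops = 0
    for pos in step_positions:
        value = contrast(pos)
        if value > best_val:
            best_pos, best_val = pos, value
            drops = 0
        else:
            drops += 1
            if drops >= 2:   # contrast kept falling: focus was passed
                break        # retreat to best_pos (highest contrast)
    return best_pos
```

The two-step drop threshold is an illustrative choice; a real controller would tune it against sensor noise.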
  • eye recognition can be performed on the clearest image obtained to detect whether it contains a human eye image; when a human eye image is detected, the next step can proceed; when no human eye image is detected, images can continue to be captured until a captured image containing the human eye is acquired.
  • Step S30 determining a focus position of the optical lens according to the human eye image
  • Step S40 controlling the optical lens to move to the focus position
  • in this embodiment, the optical lens is moved back and forth in the virtual reality playing device, in a direction away from or toward the position of the human eye; human eye images obtained by imaging through the optical lens during the back-and-forth movement are acquired; the acquired human eye images are analyzed to obtain the clearest human eye image; the focus position of the optical lens is determined according to the clearest human eye image; and the optical lens is controlled to move to the focus position.
  • the method of the present disclosure can adjust the position of the optical lens according to different eye features of the user, realize focusing of the optical lens, and bring a clear visual experience for different users.
  • the optical lens 101 is brought to its focus position, including but not limited to an axial focus position and a lateral focus position, by controlling the optical lens 101 to move back and forth and left and right. The lateral focus position is generally reached by controlling the left-right movement of the optical lens 101 according to the pupil distance of the human eye, and the axial focus position by controlling the back-and-forth movement of the optical lens 101 according to the diopter of the human eye.
  • the method for determining the axial focus position may include: after obtaining the clearest human eye image, analyzing the position of the moving optical lens 101 that corresponds to the clearest human eye image formed by imaging through the optical lens, and in turn determining the axial focus position of the optical lens 101.
  • the optical lens 101 is controlled to lock in the axial focus position after moving to the axial focus position.
  • the step S40 includes:
  • Step S41 acquiring human eye data according to the clearest human eye image
  • Step S42 determining a lateral focus position of the optical lens according to the human eye data to be adapted to the pupil distance of the human eye.
  • different human eyes have deviations in pupil position. For example, some people's left and right eyes are asymmetrical: the center of the left eye may be biased to the left or right, or the center of the right eye may be biased to the left or right, causing a deviation in the interpupillary distance; the size of the eyes can also cause differences in interpupillary distance. The optical lens 101 can be adjusted to move to the corresponding focus position according to these interpupillary distance features.
  • acquiring human eye data according to the clearest human eye image may include acquiring iris information of the human eye from the clearest human eye image; it may also include obtaining, through analysis of the clearest human eye image, the coordinate values of the leftmost and rightmost ends of the eye and of the uppermost and lowermost ends, from which the position of the center point is directly obtained, the center point being defined as the position of the pupil of the human eye; it may of course also include obtaining data such as whether the human eye has astigmatism from the clearest human eye image, so that the position of the optical lens 101 can be adjusted based on such data.
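The center-point computation described above reduces to taking midpoints of the extreme eye coordinates. A minimal sketch, assuming the four extreme coordinates have already been found in the clearest image:

```python
# Minimal sketch of the center-point computation: the pupil position is
# taken as the midpoint of the leftmost/rightmost x coordinates and the
# uppermost/lowermost y coordinates of the eye in the image.

def pupil_center(left_x, right_x, top_y, bottom_y):
    """Return the (x, y) center point used as the pupil position."""
    return ((left_x + right_x) / 2.0, (top_y + bottom_y) / 2.0)
```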
  • the present embodiment acquires human eye data according to the clearest human eye image; and determines a lateral focus position of the optical lens based on the human eye data.
  • the step S41 includes:
  • Step S411 extracting human eye iris information according to the human eye image
  • Step S412 acquiring a pupil position of the human eye according to the iris information of the human eye;
  • Step S413 calculating a pupil distance according to the position of the human eye pupil.
  • the iris information of the human eye can be extracted from the acquired human eye image, and the position of the pupil can be locked by means of the extracted iris information; the pupil position can then be normalized to simplify processing, that is, rather than considering the full range of the pupil's peripheral edge region, the pupil is treated directly as a single coordinate point.
  • the pupil position can be converted to a point coordinate on the image taken by the camera.
  • the same processing can be used for the pupil positions of the left and right eyes, and the coordinate positions of the pupils of the left and right eyes on the image taken by the camera are obtained, and the pupil distance can be calculated according to the coordinate position.
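Once both pupils are reduced to coordinate points in a common image frame, the pupil distance follows from their separation. This sketch covers the single-camera configuration, where both pupils appear in one image; the millimeters-per-pixel scale factor is an assumption that would in practice come from the camera geometry (angle of view and object distance), not a value given here.

```python
import math

# Sketch of the pupil distance computation for the single-camera case:
# both pupil coordinate points lie in one image frame, and a known
# (assumed) mm-per-pixel scale converts pixel separation to distance.

def pupil_distance_mm(left_pupil, right_pupil, mm_per_pixel):
    """Euclidean distance between the two pupil points, scaled to mm."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    return math.hypot(dx, dy) * mm_per_pixel
```

In the two-camera configuration, the per-eye coordinates would first be converted into distance offsets relative to each camera's image center.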
  • the calculation method of the pupil distance may also be different according to the characteristics of the human eye.
  • the movement of the optical lens 101 can be controlled according to the obtained pupil distance; here the optical lens 101 can be moved substantially in the left-right direction relative to the eye to accommodate users with different pupil distances.
  • the human eye iris information is extracted according to the human eye image, and the human eye pupil position is obtained according to the human eye iris information; and the pupil distance is calculated according to the human eye pupil position.
  • the position of the pupil of the human eye is obtained by extracting the iris information of the human eye, and the pupil distance is then derived from it; this allows the virtual reality playing device to adjust the position of the optical lens according to the pupil distance of the human eye, bringing the user a more comfortable visual experience.
  • step S413 includes:
  • Step S4131 comparing the position of the human eye pupil with the preset pupil position
  • Step S4132 analyzing and determining the preset type of the pupil position of the human eye
  • Step S4133 calculating a pupil distance according to the preset type of the human eye pupil position, using the distance calculation method corresponding to that preset type.
  • the pupil of the left eye of some users is biased to the left, and the pupil of the right eye is biased to the right; the pupil of the left eye of some users is biased to the right, and the pupil of the right eye is biased to the left; Or the pupils of the left and right eyes are oriented toward the middle of the eyes, and so on.
  • the calculation method for the position of different pupils can be used to calculate the corresponding pupil distance.
  • in the initial state, the left camera [102-1] image center coordinate is T1 (X1, Y1) and the right camera [102-2] image center coordinate is T2 (X2, Y2); the pupil coordinate T1′ (X1′, Y1′) in the P1 image and the pupil coordinate T2′ (X2′, Y2′) in the P2 image are obtained, giving the change in the abscissa of each pupil.
  • the image coordinate information is converted into distance information: the left eye pupil relative change X left and the right eye pupil relative change X right, where a1 and a2 are respectively half of the image width of the left and right cameras, are calculated according to a relationship of the form X left = D1·tan(θ1/2)·|X1′ − X1| / a1 and X right = D2·tan(θ2/2)·|X2′ − X2| / a2.
  • the pupil distance is then obtained from X left and X right according to the preset type of the pupil positions; for example, in the third preset type, the left eye pupil position is biased to the left and the right eye pupil position is biased to the left.
  • the analysis determines which preset type the current human eye pupil position belongs to, and the accurate pupil distance is then calculated using the pupil distance calculation method of that preset type, thereby controlling the optical lens 101 to move to an appropriate position.
  • in this embodiment, the position of the human eye pupil is compared with the preset pupil positions; the preset type to which the pupil position belongs is determined by analysis; and the pupil distance is calculated using the distance calculation method corresponding to that preset type.
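Steps S4131 to S4133 above amount to classifying each pupil's bias against preset positions and then dispatching to a type-specific distance formula. The sketch below is illustrative: the tolerance, the sign convention, and the adjustment formula are assumptions, not the exact preset types or formulas defined in the disclosure.

```python
# Sketch of the preset-type dispatch (S4131-S4133): classify each
# pupil's horizontal bias against a preset position, then adjust a
# base interpupillary distance by the per-eye relative changes with a
# sign that depends on the bias type. All constants are assumptions.

def classify_bias(pupil_x, preset_center_x, tolerance=2):
    """Classify one pupil as biased 'left', 'right', or 'center'."""
    if pupil_x < preset_center_x - tolerance:
        return "left"
    if pupil_x > preset_center_x + tolerance:
        return "right"
    return "center"

def pupil_distance(base_ipd, x_left, x_right, left_bias, right_bias):
    """Adjust a base interpupillary distance by the per-eye relative
    changes; a left pupil biased leftward or a right pupil biased
    rightward widens the distance, and the opposite biases narrow it."""
    sign = {"left": -1, "center": 0, "right": +1}
    return base_ipd + sign[right_bias] * x_right - sign[left_bias] * x_left
```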
  • the present disclosure also provides a virtual reality playing device 100.
  • the virtual reality playing device 100 includes a movably disposed optical lens 101, a beam splitter 103 located between the user's eyes and the screen of the playing device, and a camera 102 configured to capture the human eye image that is imaged through the optical lens and reflected by the beam splitter 103, wherein the optical lens 101 can move in the front-rear direction away from or toward the human eye.
  • the virtual reality playback device 100 further includes a memory, a control processor 104, and a control program stored in the memory and operable on the control processor; when the control program is executed by the control processor 104, the steps of the control method described above are implemented, up to and including controlling the optical lens to move to the focus position.
  • the control method of the virtual reality playing device in this embodiment can be applied to the virtual reality playing device 100.
  • the virtual reality playing device 100 is provided with an optical lens 101, and the optical lens 101 is two, corresponding to two left and right eyes respectively. In position, the optical lens 101 can be moved back and forth in a direction away from or near the position of the human eye under the control of the control system.
  • the virtual reality playing device 100 may further be provided with a camera 102.
  • the number of the cameras 102 is two, which are respectively disposed on the upper side or the lower side of the optical lens 101, respectively being left cameras [102- 1] and the right camera [102-2], the image is taken by the camera 102.
  • the camera 102 does not directly capture the human eye image, but acquires the image of the human eye imaged on the optical lens 101.
  • the imaging path of the camera 102 is as follows: Referring to FIG. 2, the camera 102 captures an image transmitted from the optical lens 101 and reflected to the beam splitter 103.
  • the camera 102 may also be provided.
  • the one camera 102 is disposed on the center line of the left optical lens 101 and the right optical lens 101, and the coordinate conversion method of the captured image may be according to the set camera 102. The position is adjusted accordingly.
  • the optical center of the image taken by the camera 102 is required to be consistent with the optical center of the optical lens 101 (ie, the left camera image center and the left side optical).
  • the center of the lens 101 is X1
  • the center of the right camera image and the center of the right optical lens are X2)
  • the coordinate dot is the center point of the left and right optical lenses
  • the images taken by the left and right cameras are P1 and P2, respectively
  • the angles of view of the left and right cameras are ⁇ 1.
  • the virtual reality playing device 100 may further include a driving circuit 1015.
  • the optical lens 101 may include a bracket (not shown) and a lens 1011 disposed on the bracket, and connected to the driving circuit 1015.
  • the conductive coil 1012 disposed near the lens 1011 and the N-pole magnet 1013 and the S-pole magnet 1014 respectively disposed at both ends of the optical lens 1011.
  • the conductive coil 1012 of the optical lens 101 and the N-pole magnet 1013 and the S-pole magnet 1014 respectively disposed at two ends of the optical lens 101 near the conductive coil 1012 may constitute a voice coil motor, and when the driving circuit 1015 gives the conductive coil When the 1012 is energized, the conductive coil 1012 will be subjected to the Lorentz force under the action of the magnetic field, and the optical lens 1011 will be moved back and forth under the action of the Lorentz force. It should be noted that both ends of the optical lens 101 refer to both ends in the up and down direction with reference to a normal human eye.
  • the working principle is that when the driving circuit 1015 supplies current to the conductive coil 1012, the direction of the force of the conductive coil 1012 can be obtained according to the Lorentz force received by the conductive coil 1012 in the magnetic field and the direction of the current, that is, according to The left-hand rule determines the force of the conductive coil 1012 in the magnetic field. Further, the force of the optical lens 1011 can be changed by changing the direction of the current supplied from the drive circuit 1015, and the distance moved by the lens 1011 can be controlled according to the magnitude of the current supplied from the drive circuit 1015.
  • the driving circuit 1015 energizes the wire coil 1012 such that the optical lens 101 moves back and forth in a direction away from or near the position of the human eye under the action of the Lorentz force, wherein the optical lens 101 is controlled to move back and forth.
  • the distance between the human eye and the optical lens 101 can be adjusted, which is equivalent to the action of focusing.
  • the camera 102 can capture an image at each position of the optical lens 101, and obtain the clearest picture among the plurality of pictures by contrast contrast analysis, thereby moving and locking the optical lens 101 at the most The location of the clear picture corresponds. In this way, the position of the optical lens 101 can be adjusted according to the difference in vision of the human eye to adapt to different visual acuity.
  • the optical lens 101 of the virtual reality playing device 100 can be automatically adjusted to the original position, which can facilitate re-acquiring the user's eye data when the different users use the next time.
  • the position of the optical lens 101 is adjusted.
  • the optical lens 101 can also move left and right under the control of the electric motor.
  • the left and right movement of the optical lens 101 actually finds the position most suitable for the user's pupil distance according to the pupil distance of the human eye.
  • whether the optical lens 101 is moved left and right or previously moved can be set as needed to finally move the optical lens 101 to the focus position.
  • the camera 102 can capture human-eye images when the optical lens 101 moves to different positions. Because the image the camera 102 captures is transmitted through the optical lens 101 and reflected onto the spectroscope, moving the optical lens 101 back and forth is equivalent to adjusting the distance from the eye to the object (i.e. the object distance); at different lens positions the captured images can differ in sharpness, so whether a given lens position is the one the human eye sees most clearly can be judged directly from image sharpness. In other words, moving the optical lens 101 back and forth in effect adjusts its position according to the diopter of the human eye.
  • the camera 102 can capture and save an image for each step of the voice coil motor; the captured images can be stored in the storage device of the virtual reality playback device, or in a cloud server connected to the virtual reality playback device or to a mobile phone. Since the images obtained at different positions differ in sharpness, they can be analyzed by the contrast-comparison principle to find the sharpest image.
  • the principle is as follows: as the optical lens 101 starts to move, the picture gradually becomes sharper and the contrast rises. When the picture is sharpest and the contrast highest, the system is in fact already in focus, but the camera 102 does not yet know this, so it keeps moving the optical lens 101; when the contrast is seen to fall, and falls again on the next move, the camera 102 knows the focus has been passed. The optical lens 101 then retreats to the position of highest contrast, completing the focusing and finding the sharpest image.
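  The contrast hill-climb above can be sketched in a few lines. The variance-based sharpness score, the function names, and the `capture_at` callback are illustrative assumptions of this sketch, not the patent's implementation.

```python
# Sketch of the contrast-comparison focus search described above: step the
# lens while contrast rises, and back up one step once it falls.

def contrast(image):
    """Variance of pixel intensities, used as a simple contrast/sharpness score."""
    n = len(image)
    mean = sum(image) / n
    return sum((p - mean) ** 2 for p in image) / n

def focus_search(capture_at):
    """Return the voice-coil step judged to be in focus.

    `capture_at(step)` returns the (flattened) image captured at that step.
    """
    step = 0
    best = contrast(capture_at(step))
    while True:
        step += 1
        score = contrast(capture_at(step))
        if score < best:       # contrast dropped: the focus has been passed
            return step - 1    # retreat to the highest-contrast position
        best = score
```

  For example, with simulated captures whose contrast peaks at step 3, `focus_search` returns 3.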
  • human-eye recognition can then be performed on the sharpest image obtained, to detect whether it actually contains a human-eye image. When it does, the next step can proceed; when no human-eye image was captured, image capture can continue until a captured image containing a human eye is obtained.
  • the focus position of the optical lens 101 is reached by controlling the lens to move back-and-forth and left-right, and includes, but is not limited to, an axial focus position and a lateral focus position: the lateral focus position is generally (but not exclusively) reached by moving the optical lens 101 left and right according to the interpupillary distance of the human eye, and the axial focus position by moving it back and forth according to the diopter of the human eye.
  • the axial focus position can be determined as follows: after the sharpest human-eye image is obtained, the position of the moving optical lens 101 at which that sharpest image was formed through the lens is derived by analysis, which in turn determines the axial focus position of the optical lens 101. The optical lens 101 is then controlled to move to the axial focus position and is locked there.
  • in this embodiment, the optical lens is controlled to move back and forth within the virtual reality playback device, toward or away from the human eye; the human-eye images formed through the optical lens during this movement are acquired; the acquired images are analyzed to obtain the sharpest human-eye image; the focus position of the optical lens is determined from that sharpest image; and the optical lens is controlled to move to the focus position. The method of the present disclosure can thus adjust the position of the optical lens to different users' visual acuity, giving different users a sharper visual experience.
  • the lateral focus position of the optical lens is determined from human-eye data, to match the interpupillary distance of the human eye. Pupil positions vary between people: some users' left and right eyes are asymmetric, the center of the left eye may be biased to the left or to the right, as may the center of the right eye, all of which causes deviations in interpupillary distance; differences in eye size also lead to differences in interpupillary distance, and so on. The optical lens 101 can therefore be moved to the corresponding focus position according to the characteristics of the interpupillary distance.
  • acquiring human-eye data from the sharpest human-eye image may include extracting the iris information of the eye from that image. It may also include analyzing the image to obtain the coordinates of the leftmost and rightmost points of the eye and of its uppermost and lowermost points, deriving the center point directly from these coordinates and defining that center as the position of the pupil. It may of course also include obtaining from the sharpest human-eye image data such as whether the eye has astigmatism, and adjusting the position of the optical lens 101 based on such data.
  • the present embodiment acquires human-eye data from the sharpest human-eye image, and determines the lateral focus position of the optical lens from those data.
  • the interpupillary distance is calculated from the pupil positions of the human eyes.
  • iris information can be extracted from the acquired human-eye image and used to lock onto the position of the pupil. The pupil position can then be normalized, which simplifies processing: without considering the extent of the pupil's peripheral edge region, the pupil is treated directly as a single coordinate point, and that point is converted into a coordinate on the image captured by the camera.
  • the same processing can be applied to the pupils of both eyes, yielding the coordinates of the left and right pupils on the camera images, from which the interpupillary distance is calculated. The calculation method may differ according to the characteristics of the user's eyes. Finally, the movement of the optical lens 101 is controlled according to the obtained interpupillary distance; here the optical lens 101 moves roughly in the left-right direction relative to the eyes, accommodating users with different interpupillary distances.
  • in this embodiment, iris information is extracted from the human-eye image, the pupil positions are obtained from the iris information, and the interpupillary distance is calculated from the pupil positions. Extracting the iris to locate the pupils, and hence the interpupillary distance, makes it convenient for the virtual reality playback device to adjust the position of the optical lens to the user's interpupillary distance, giving the user a more comfortable visual experience.
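  As a toy illustration of "normalizing" the pupil to a single coordinate point, the sketch below reduces the dark pixels of a grayscale eye image to their centroid. A real device would segment the iris first; the fixed darkness threshold and the function name are assumptions of this sketch, not the patent's method.

```python
def pupil_center(image, threshold=50):
    """Reduce the pupil (assumed to be the darkest region) to one (x, y) point.

    `image` is a list of rows of grayscale values. Returns the centroid of all
    pixels darker than `threshold`, or None if no such pixel exists.
    """
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value < threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # no pupil-dark pixels found in this image
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

  A `None` result corresponds to the "no human eye detected, keep capturing" branch described earlier.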
  • the analysis determines the preset type of the pupil positions, and the interpupillary distance is then calculated with the calculation method corresponding to that preset type.
  • for example, some users' left pupil is biased to the left and right pupil to the right; for other users the left pupil is biased to the right and the right pupil to the left; or both pupils are biased toward the middle of the two eyes, and so on. Based on these deviations in pupil position, a calculation method matched to each pupil configuration can be used to compute the corresponding interpupillary distance.
  • in the initial state, the image-center coordinate of the left camera [102-1] is T1(X1, Y1) and that of the right camera [102-2] is T2(X2, Y2). The pupil coordinate T1′(X1′, Y1′) is obtained from image P1 and T2′(X2′, Y2′) from image P2, giving left- and right-pupil abscissa changes |X1 − X1′| and |X2 − X2′|. These image coordinates are converted into distances, the relative change X_left of the left pupil and the relative change X_right of the right pupil, according to:
  • X_left = (|X1 − X1′| / a1) · D1 · tan(α1 / 2) and X_right = (|X2 − X2′| / a2) · D2 · tan(α2 / 2), where a1 and a2 are half the image widths of the left and right cameras, respectively.
  • when X1 > X1′ and X2 < X2′, interpupillary distance = |X1| + |X2| + X_left + X_right (first preset type: left pupil biased left, right pupil biased right). When X1 < X1′ and X2 < X2′, interpupillary distance = |X1| + |X2| − X_left + X_right (second preset type: left pupil biased right, right pupil biased right). When X1 > X1′ and X2 > X2′, interpupillary distance = |X1| + |X2| + X_left − X_right (third preset type: left pupil biased left, right pupil biased left). When X1 < X1′ and X2 > X2′, interpupillary distance = |X1| + |X2| − X_left − X_right (fourth preset type: left pupil biased right, right pupil biased left).
  • the analysis determines which preset type the current pupil positions belong to; the interpupillary distance can then be computed accurately with the calculation method of that preset type, and the optical lens 101 controlled to move to the appropriate position. In this embodiment, the pupil positions are compared with preset pupil positions; the preset type of the pupil positions is determined by analysis; and the interpupillary distance is calculated with the calculation method corresponding to that preset type.
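  The four preset types collapse into one sign rule: an offset is added when a pupil is biased outward (left pupil to the left, right pupil to the right) and subtracted when it is biased inward. A hedged sketch, with variable names taken from the text (X1/X2 are the image-center abscissae, X1p/X2p the pupil abscissae, x_left/x_right the converted physical offsets); treat it as illustrative, not a definitive implementation.

```python
def pupil_distance(X1, X1p, X2, X2p, x_left, x_right):
    """Interpupillary distance under the four-preset-type rule described above."""
    base = abs(X1) + abs(X2)
    s_left = 1 if X1 > X1p else -1   # left pupil biased left   -> add x_left
    s_right = 1 if X2 < X2p else -1  # right pupil biased right -> add x_right
    return base + s_left * x_left + s_right * x_right
```

  With X1 = X2 = 10, x_left = 1.0, x_right = 2.0, the first preset type (X1 > X1′, X2 < X2′) gives 23.0 and the fourth (X1 < X1′, X2 > X2′) gives 17.0.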
  • an embodiment of the present disclosure further provides a computer-readable storage medium storing a control program which, when executed by a processor, implements the following operations: controlling the optical lens to move within the virtual reality playback device; acquiring the human-eye image formed through the optical lens while the lens is moving; determining the focus position of the optical lens from the human-eye image; and controlling the optical lens to move to the focus position.
  • optionally, the control program further implements: determining the lateral focus position of the optical lens from the human-eye data, to match the interpupillary distance of the human eye; calculating the interpupillary distance from the pupil positions; determining by analysis the preset type of the pupil positions; and calculating the interpupillary distance with the calculation method corresponding to that preset type.
  • an embodiment of the present disclosure further provides a computer-readable storage medium storing computer-executable instructions which, when executed, implement the control method of the virtual reality playback device described above.
  • the term computer storage medium includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
  • in the control method of the virtual reality playback device proposed by the present disclosure, the optical lens is controlled to move within the virtual reality playback device; the human-eye image formed through the optical lens while the lens is moving is acquired; the focus position of the optical lens is determined from the human-eye image; and the optical lens is controlled to move to the focus position. The method of the present disclosure can adjust the position of the optical lens to the user's particular eye characteristics, achieving focusing of the lens and giving different users a sharper visual experience.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

A control method for a virtual reality playback device (100), comprising: controlling an optical lens (101) to move within the virtual reality playback device (100) (S10); acquiring a human-eye image formed through the optical lens (101) while the lens is moving (S20); determining a focus position of the optical lens (101) from the human-eye image (S30); and controlling the optical lens (101) to move to the focus position (S40).

Description

Virtual Reality Playback Device, Control Method Therefor, and Computer-Readable Storage Medium — Technical Field
The present disclosure relates to, but is not limited to, the field of media playback, and in particular to a virtual reality playback device, a control method therefor, and a computer-readable storage medium.
Background
VR glasses, i.e. "virtual reality glasses" (Virtual Reality Glasses), use a mobile phone as the display screen and implement virtual reality functions through applications on the phone. A virtual reality playback device involves an interaction among an optical lens, a display screen, and the human eye; the best visual effect is achieved only when the optical centers of these three components lie on the same straight line. Since the optical lens sits between the other two components, adjusting its position is important.
Summary
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of the claims.
Most commercially available virtual reality playback devices do not support adjusting the position of the optical lens; only a few products support manual adjustment, and they cannot automatically adjust the lens position according to differences in the user's visual acuity and interpupillary distance, so as to bring the lens to a better position and give the user a better visual experience.
The present disclosure provides a virtual reality playback device, a control method therefor, and a computer-readable storage medium, enabling the virtual reality playback device to automatically adjust its optical lenses to suit users with different eye characteristics.
The present disclosure provides a control method for a virtual reality playback device, the virtual reality playback device comprising a movably arranged optical lens, the control method comprising the following steps:
controlling the optical lens to move within the virtual reality playback device;
acquiring a human-eye image formed through the optical lens while the lens is moving;
determining a focus position of the optical lens from the human-eye image;
controlling the optical lens to move to the focus position.
In an exemplary embodiment, the step of acquiring the human-eye image formed through the optical lens while the lens is moving includes:
analyzing the acquired human-eye images using the contrast-comparison principle to obtain the sharpest human-eye image;
and the step of determining the focus position of the optical lens from the human-eye image includes: determining the focus position of the optical lens from the sharpest human-eye image.
In an exemplary embodiment, the step of determining the focus position of the optical lens from the sharpest human-eye image includes:
from the sharpest human-eye image, determining the axial position of the moving optical lens at which that sharpest image was formed through the lens, thereby determining an axial focus position of the optical lens adapted to the diopter of the human eye.
In an exemplary embodiment, the step of determining the focus position of the optical lens from the sharpest human-eye image includes:
acquiring human-eye data from the sharpest human-eye image;
determining a lateral focus position of the optical lens from the human-eye data, adapted to the interpupillary distance of the human eye.
In an exemplary embodiment, the step of acquiring human-eye data from the sharpest human-eye image includes:
extracting iris information of the human eye from the sharpest human-eye image;
obtaining the pupil positions of the human eye from the iris information;
calculating the interpupillary distance from the pupil positions.
In an exemplary embodiment, the step of calculating the interpupillary distance from the pupil positions includes:
comparing the pupil positions with preset pupil positions;
determining by analysis the preset type of the pupil positions;
calculating the interpupillary distance using the calculation method corresponding to that preset type.
In an exemplary embodiment, the step of analyzing the acquired human-eye images to obtain the sharpest human-eye image includes:
analyzing the acquired human-eye images using the contrast-comparison principle to obtain the sharpest image.
In an exemplary embodiment, before the step of determining the focus position of the optical lens from the sharpest human-eye image, the method further includes:
judging whether the sharpest human-eye image contains a human eye;
when it does, acquiring human-eye data from the sharpest human-eye image.
The present disclosure further provides a virtual reality playback device comprising a movably arranged optical lens, a spectroscope located between the user's eyes and the screen of the playback device, and a camera arranged to photograph the human-eye image, formed through the optical lens, that is reflected at the spectroscope. The virtual reality playback device further comprises a memory, a control processor, and a control program stored in the memory and runnable on the control processor; when the control program is executed by the processor, the steps of the method described above are implemented.
In an exemplary embodiment, the virtual reality playback device further comprises a driving circuit, and the optical lens comprises a bracket, a lens arranged on the bracket, a conductive coil electrically connected to the driving circuit, and an N-pole magnet and an S-pole magnet arranged at the two ends of the lens; under the combined action of the magnetic field produced by the N-pole and S-pole magnets and the current delivered to the coil by the driving circuit, the conductive coil experiences a Lorentz force and thereby moves back and forth.
The present disclosure further provides a computer-readable storage medium storing a control program which, when executed by a control processor, implements the steps of the control method of the virtual reality playback device described above.
The present disclosure further provides a computer-readable storage medium storing computer-executable instructions which, when executed, implement the above control method of the virtual reality playback device.
In the control method of the virtual reality playback device proposed by the present disclosure, the optical lens is controlled to move within the virtual reality playback device; the human-eye image formed through the optical lens while the lens is moving is acquired; the focus position of the optical lens is determined from the human-eye image; and the optical lens is controlled to move to the focus position. The method of the present disclosure can adjust the position of the optical lens to the user's particular eye characteristics, achieving focusing of the lens and giving different users a sharper visual experience.
Other aspects will become apparent upon reading and understanding the drawings and the detailed description.
Brief Description of the Drawings
Fig. 1 is a simplified schematic diagram of the virtual reality playback device of the present disclosure;
Fig. 2 is an optical-path diagram of the virtual reality playback device of the present disclosure;
Fig. 3 is a diagram of converting image coordinates captured by the virtual reality playback device of the present disclosure into distance data;
Fig. 4 is a schematic structural diagram of the optical lens of the present disclosure;
Fig. 5 is a flowchart of a control method of a virtual reality playback device according to a first embodiment of the present disclosure;
Fig. 6 is a flowchart of a control method of a virtual reality playback device according to a second embodiment of the present disclosure;
Fig. 7 is a flowchart of a control method of a virtual reality playback device according to a third embodiment of the present disclosure;
Fig. 8 is a flowchart of a control method of a virtual reality playback device according to a fourth embodiment of the present disclosure;
Fig. 9 is a schematic diagram of the virtual reality playback device of the present disclosure.
Preferred Embodiments of the Present Disclosure
Embodiments of the present disclosure are described below with reference to the accompanying drawings.
Referring to Figs. 1 to 5 together: Fig. 1 is a schematic structural diagram of a virtual reality playback device 100 in an embodiment of the present disclosure, and Fig. 5 shows a control method of a virtual reality playback device provided in an embodiment of the present disclosure; the virtual reality playback device 100 includes optical lenses 101.
In the first embodiment, the control method of the virtual reality playback device includes:
Step S10: controlling the optical lens to move within the virtual reality playback device.
The control method of this embodiment can be applied in a virtual reality playback device 100 in which optical lenses 101 are movably arranged. There are two optical lenses 101, corresponding respectively to the left-eye and right-eye positions; under the control of a control system, each optical lens 101 can move back and forth, toward or away from the human eye, as well as left and right. The virtual reality playback device 100 may also be provided with cameras 102; there may be two cameras, arranged on the upper or lower side of the optical lenses 101, namely a left camera [102-1] and a right camera [102-2], which capture images. It is worth noting that the cameras 102 do not photograph the human eye directly; instead they acquire the image of the human eye formed on the optical lens 101. The imaging path of a camera 102 is as follows: referring to Fig. 2, the camera 102 photographs the image transmitted through the optical lens 101 and reflected onto the spectroscope 103.
Of course, in other embodiments a single camera 102 may be provided, arranged on the centerline between the left and right optical lenses 101; the coordinate-conversion method for the captured images can then be adjusted according to the position of the camera 102.
Optionally, when designing the virtual reality playback device 100, referring to Figs. 1 and 3, the optical center of each camera's captured image may be required to share the same abscissa as the optical center of the corresponding optical lens 101 (i.e. the left camera image center and the center of the left optical lens 101 are at X1, and the right camera image center and the center of the right optical lens are at X2). The coordinate origin is the center point between the left and right optical lenses; the images captured by the left and right cameras are P1 and P2, the fields of view of the left and right cameras are α1 and α2, and the object distances are D1 and D2.
Referring to Fig. 4, the virtual reality playback device 100 may further include a driving circuit 1015, and the optical lens 101 may include a bracket (not shown), a lens 1011 arranged on the bracket, a conductive coil 1012 electrically connected to the driving circuit 1015, and an N-pole magnet 1013 and an S-pole magnet 1014 arranged at the two ends of the lens 1011. The conductive coil 1012 of the optical lens 101, together with the N-pole magnet 1013 and S-pole magnet 1014 arranged at the two ends of the lens near the coil, can form a voice coil motor. When the driving circuit 1015 energizes the conductive coil 1012, the coil experiences a Lorentz force in the magnetic field and, under that force, drives the lens 1011 back and forth. It should be noted that the two ends of the optical lens 101 refer to the two ends in the up-down direction, with a normal human eye as reference. The working principle is that, when the driving circuit 1015 supplies current to the conductive coil 1012, the direction of the force on the coil can be derived from the Lorentz force on the coil in the magnetic field and the direction of the current, i.e. the force on the coil in the magnetic field is judged by the left-hand rule. The force on the lens 1011 can then be changed by changing the direction of the current supplied by the driving circuit 1015, and the distance the lens 1011 moves can be controlled by the magnitude of that current.
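The direction reasoning above can be checked numerically with a cross product: the force per unit length on a straight current-carrying segment is F = I · (L × B), so reversing the drive current reverses the force and hence the direction of lens travel. A purely illustrative physics sketch, not tied to the patent's hardware:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def coil_force(current, wire_dir, b_field):
    """Force per unit length on a straight wire segment: F = I * (L x B)."""
    lx_b = cross(wire_dir, b_field)
    return tuple(current * c for c in lx_b)
```

For example, a wire along +x in a field along +y feels a force along +z, and flipping the sign of the current flips the force, which is how the driving circuit selects the direction of travel.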
In this embodiment, the driving circuit 1015 energizes the coil 1012 so that, under the Lorentz force, the optical lens 101 moves back and forth, toward or away from the human eye. Controlling this back-and-forth movement adjusts the distance from the human eye to the optical lens 101, which is equivalent to focusing. As the optical lens 101 moves back and forth, the camera 102 can capture an image at each lens position, obtain the sharpest of these pictures by contrast-comparison analysis, and then move and lock the optical lens 101 at the position corresponding to that sharpest picture. In this way the position of the optical lens 101 can be adjusted to the user's vision, accommodating users with different visual acuity. It is worth noting that when a user finishes using the virtual reality playback device 100, its optical lens 101 can automatically return to the original position, making it convenient to re-acquire eye data and re-adjust the lens position the next time a different user uses the device.
Meanwhile, the optical lens 101 can also move left and right under the control of an electric motor; this left-right movement finds, from the interpupillary distance of the human eye, the position best suited to the user's interpupillary distance. Optionally, whether the optical lens 101 moves left-right first or back-and-forth first can be set as needed, as long as the optical lens 101 can ultimately be moved to its focus position.
Step S20: acquiring the human-eye image formed through the optical lens while the lens is moving.
When the optical lens 101 moves back and forth under the control of the voice coil motor, the camera 102 can capture human-eye images at the different lens positions. Because the image the camera 102 captures has been transmitted through the optical lens 101 and reflected onto the spectroscope, moving the optical lens 101 back and forth is equivalent to adjusting the distance from the eye to the object (i.e. the object distance); at different lens positions the captured images can differ in sharpness, so whether a lens position is the one the human eye sees most clearly can be judged directly from image sharpness. In other words, moving the optical lens 101 back and forth in effect adjusts its position according to the diopter of the human eye.
While the optical lens 101 moves back and forth, the camera 102 can capture and save an image for each step of the voice coil motor; the captured images can be stored in the storage device of the virtual reality playback device, or in a cloud server connected to the device or to a mobile phone. Since the images obtained at different positions differ in sharpness, they can be analyzed by the contrast-comparison principle to find the sharpest one. The principle is: as the optical lens 101 starts to move, the picture gradually becomes sharper and the contrast rises; when the picture is sharpest and the contrast highest, the system is in fact already in focus, but the camera 102 does not know this, so it keeps moving the optical lens 101; when the contrast is seen to fall, and falls again on the next move, the camera 102 knows the focus has been passed; the optical lens 101 then retreats to the position of highest contrast, completing the focusing and finding the sharpest image.
After the sharpest image is obtained, human-eye recognition can be performed on it to detect whether it contains a human-eye image; when it does, the next step can proceed; when no human-eye image was captured, image capture can continue until a captured image containing a human eye is obtained.
Step S30: determining the focus position of the optical lens from the human-eye image.
Step S40: controlling the optical lens to move to the focus position.
In this embodiment, the optical lens is controlled to move back and forth within the virtual reality playback device, toward or away from the human eye; the human-eye images formed through the optical lens during this movement are acquired; the acquired images are analyzed to obtain the sharpest human-eye image; the focus position of the optical lens is determined from that sharpest image; and the optical lens is controlled to move to the focus position. The method of the present disclosure can adjust the position of the optical lens to different users' eye characteristics, achieving focusing of the lens and giving different users a sharper visual experience.
Optionally, the focus position of the optical lens 101 is reached by controlling the lens to move back-and-forth and left-right, and includes, but is not limited to, an axial focus position and a lateral focus position: the lateral focus position may generally (but not exclusively) be reached by moving the optical lens 101 left and right according to the interpupillary distance of the human eye, and the axial focus position by moving it back and forth according to the diopter of the human eye.
The axial focus position may be determined as follows: after the sharpest human-eye image is obtained, the position of the moving optical lens 101 at which that sharpest image was formed through the lens is derived by analysis, which determines the axial focus position of the optical lens 101; the optical lens 101 is controlled to move to the axial focus position and is then locked there.
Optionally, referring to Fig. 6, based on the control method provided by the first embodiment of the present disclosure, in the control method of the virtual reality playback device provided by the second embodiment of the present disclosure, step S40 includes:
Step S41: acquiring human-eye data from the sharpest human-eye image;
Step S42: determining the lateral focus position of the optical lens from the human-eye data, adapted to the interpupillary distance of the human eye.
Pupil positions vary between people: some users' left and right eyes are asymmetric; the center of the left eye may be biased to the left or to the right, or the center of the right eye may be biased to the left or to the right, all of which causes deviations in interpupillary distance; differences in eye size also lead to differences in interpupillary distance; and so on. The optical lens 101 can be moved to the corresponding focus position according to the characteristics of the interpupillary distance.
Optionally, acquiring human-eye data from the sharpest human-eye image may include extracting the iris information of the eye from that image; it may also include analyzing the image to obtain the coordinates of the leftmost and rightmost points of the eye and of its uppermost and lowermost points, deriving the center point directly from these coordinates and defining that center as the pupil position; and it may of course include obtaining from the image data such as whether the eye has astigmatism, so that the position of the optical lens 101 can be adjusted according to such data.
In this embodiment, human-eye data are acquired from the sharpest human-eye image, and the lateral focus position of the optical lens is determined from those data. By obtaining the relevant human-eye data from the sharpest image and adjusting the lateral focus position of the optical lens accordingly for each user, the method provides different users with a more comfortable virtual reality playback experience and improves the user experience.
Optionally, referring to Fig. 7, based on the control method provided by the second embodiment of the present disclosure, in the control method of the virtual reality playback device provided by the third embodiment of the present disclosure, step S41 includes:
Step S411: extracting iris information from the human-eye image;
Step S412: obtaining the pupil positions from the iris information;
Step S413: calculating the interpupillary distance from the pupil positions.
In this embodiment, iris information can be extracted from the acquired human-eye image and used to lock onto the pupil position, and the pupil position can be normalized. Normalization simplifies processing: without considering the extent of the pupil's peripheral edge region, the pupil is treated directly as a single coordinate point. That pupil position can then be converted into a point coordinate on the image captured by the camera. The same processing can be applied to the pupils of both eyes, yielding the coordinates of the left and right pupils on the camera images, from which the interpupillary distance can be calculated. The calculation method may differ according to the characteristics of the user's eyes. Finally, the movement of the optical lens 101 can be controlled according to the obtained interpupillary distance; here the optical lens 101 moves roughly in the left-right direction relative to the eyes, accommodating users with different interpupillary distances.
In this embodiment, iris information is extracted from the human-eye image, the pupil positions are obtained from the iris information, and the interpupillary distance is calculated from the pupil positions. Extracting the iris to locate the pupils, and hence the interpupillary distance, makes it convenient for the virtual reality playback device to adjust the position of the optical lens to the user's interpupillary distance, giving the user a more comfortable visual experience.
Optionally, referring to Fig. 8, based on the control method provided by the third embodiment of the present disclosure, in the control method of the virtual reality playback device provided by the fourth embodiment of the present disclosure, step S413 includes:
Step S4131: comparing the pupil positions with preset pupil positions;
Step S4132: determining by analysis the preset type of the pupil positions;
Step S4133: calculating the interpupillary distance using the calculation method corresponding to that preset type.
Different users have different eye characteristics: for example, some users' left pupil is biased to the left and right pupil to the right; for other users the left pupil is biased to the right and the right pupil to the left; or both pupils are biased toward the middle of the two eyes; and so on. Based on these deviations in pupil position, calculation methods targeted at the different pupil configurations can be used to compute the corresponding interpupillary distance.
Optionally, referring to Fig. 3, coordinate data of the pupil positions in the images photographed by the left and right cameras are analyzed. In the initial state, the image-center coordinate of the left camera [102-1] is T1(X1, Y1) and that of the right camera [102-2] is T2(X2, Y2). The pupil coordinate T1′(X1′, Y1′) is obtained from image P1 and T2′(X2′, Y2′) from image P2, giving left- and right-pupil abscissa changes |X1 − X1′| and |X2 − X2′|. These image coordinates are converted into distance information: the relative change X_left of the left pupil and the relative change X_right of the right pupil are calculated according to the following relations.
X_left = (|X1 − X1′| / a1) · D1 · tan(α1 / 2)
X_right = (|X2 − X2′| / a2) · D2 · tan(α2 / 2)
where a1 and a2 are half the image widths of the left and right cameras, respectively.
When X1 > X1′ and X2 < X2′, interpupillary distance = |X1| + |X2| + X_left + X_right (first preset type: left pupil biased left, right pupil biased right).
When X1 < X1′ and X2 < X2′, interpupillary distance = |X1| + |X2| − X_left + X_right (second preset type: left pupil biased right, right pupil biased right).
When X1 > X1′ and X2 > X2′, interpupillary distance = |X1| + |X2| + X_left − X_right (third preset type: left pupil biased left, right pupil biased left).
When X1 < X1′ and X2 > X2′, interpupillary distance = |X1| + |X2| − X_left − X_right (fourth preset type: left pupil biased right, right pupil biased left).
The analysis determines which preset type the current pupil positions belong to; the interpupillary distance can then be computed accurately with the calculation method of that preset type, and the optical lens 101 controlled to move to the appropriate position.
In this embodiment, the pupil positions are compared with preset pupil positions; the preset type of the pupil positions is determined by analysis; and the interpupillary distance is calculated with the calculation method corresponding to that preset type. Using different calculation methods for different pupil positions makes the computed interpupillary distance more accurate, and at the same time improves the accuracy of controlling the position of the optical lens from the interpupillary distance.
The present disclosure further provides a virtual reality playback device 100. Referring to Fig. 9, in an embodiment the virtual reality playback device 100 includes a movably arranged optical lens 101, a spectroscope 103 located between the user's eyes and the screen of the playback device, and a camera 102 arranged to photograph the human-eye image, formed through the optical lens, that is reflected at the spectroscope 103. The optical lens 101 can move in the front-back direction, toward or away from the human eye, and in the left-right direction parallel to the eyes. The virtual reality playback device 100 further includes a memory, a control processor, and a control program stored in the memory and runnable on the control processor; when the adaptive-adjustment program of the virtual reality device is executed by the processor 104, the following method steps are implemented:
controlling the optical lens to move within the virtual reality playback device;
acquiring the human-eye image formed through the optical lens while the lens is moving;
determining the focus position of the optical lens from the human-eye image;
controlling the optical lens to move to the focus position.
The control method of this embodiment can be applied in a virtual reality playback device 100 provided with optical lenses 101. There are two optical lenses 101, corresponding respectively to the left-eye and right-eye positions; under the control of a control system, each optical lens 101 can move back and forth, toward or away from the human eye. The virtual reality playback device 100 may also be provided with two cameras 102, arranged on the upper or lower side of the optical lenses 101, namely a left camera [102-1] and a right camera [102-2], which capture images. It is worth noting that the cameras 102 do not photograph the human eye directly but acquire the image of the human eye formed on the optical lens 101. The imaging path of a camera 102 is as follows: referring to Fig. 2, the camera 102 photographs the image transmitted through the optical lens 101 and reflected onto the spectroscope 103.
Of course, in other embodiments a single camera 102 may be provided, arranged on the centerline between the left and right optical lenses 101; the coordinate-conversion method for the captured images can then be adjusted according to the position of the camera 102.
Optionally, when designing the virtual reality playback device 100, referring to Figs. 1 and 3, the optical center of each camera's captured image is required to share the same abscissa as the optical center of the corresponding optical lens 101 (i.e. the left camera image center and the center of the left optical lens 101 are at X1, and the right camera image center and the center of the right optical lens are at X2). The coordinate origin is the center point between the left and right optical lenses; the images captured by the left and right cameras are P1 and P2, their fields of view are α1 and α2, and the object distances are D1 and D2.
Referring to Fig. 4, the virtual reality playback device 100 may further include a driving circuit 1015, and the optical lens 101 may include a bracket (not shown), a lens 1011 arranged on the bracket, a conductive coil 1012 connected to the driving circuit 1015 and arranged near the lens 1011, and an N-pole magnet 1013 and an S-pole magnet 1014 arranged at the two ends of the lens 1011. The conductive coil 1012, together with the N-pole magnet 1013 and S-pole magnet 1014 arranged at the two ends of the lens near the coil, can form a voice coil motor. When the driving circuit 1015 energizes the conductive coil 1012, the coil experiences a Lorentz force in the magnetic field and, under that force, drives the lens 1011 back and forth. It should be noted that the two ends of the optical lens 101 refer to the two ends in the up-down direction, with a normal human eye as reference. The working principle is that, when the driving circuit 1015 supplies current to the conductive coil 1012, the direction of the force on the coil can be derived from the Lorentz force on the coil in the magnetic field and the direction of the current, i.e. the force on the coil in the magnetic field is judged by the left-hand rule. The force on the lens 1011 can then be changed by changing the direction of the current supplied by the driving circuit 1015, and the distance the lens 1011 moves can be controlled by the magnitude of that current.
In this embodiment, the driving circuit 1015 energizes the coil 1012 so that, under the Lorentz force, the optical lens 101 moves back and forth, toward or away from the human eye. Controlling this back-and-forth movement adjusts the distance from the human eye to the optical lens 101, which is equivalent to focusing. As the optical lens 101 moves back and forth, the camera 102 can capture an image at each lens position, obtain the sharpest of these pictures by contrast-comparison analysis, and then move and lock the optical lens 101 at the position corresponding to that sharpest picture. In this way the position of the optical lens 101 can be adjusted to the user's vision, accommodating users with different visual acuity. It is worth noting that when a user finishes using the virtual reality playback device 100, its optical lens 101 can automatically return to the original position, making it convenient to re-acquire eye data and re-adjust the lens position the next time a different user uses the device.
Meanwhile, the optical lens 101 can also move left and right under the control of an electric motor; this left-right movement finds, from the interpupillary distance of the human eye, the position best suited to the user's interpupillary distance. Optionally, whether the optical lens 101 moves left-right first or back-and-forth first can be set as needed, as long as the optical lens 101 can ultimately be moved to its focus position.
When the optical lens 101 moves back and forth under the control of the voice coil motor, the camera 102 can capture human-eye images at the different lens positions. Because the image the camera 102 captures has been transmitted through the optical lens 101 and reflected onto the spectroscope, moving the optical lens 101 back and forth is equivalent to adjusting the distance from the eye to the object (i.e. the object distance); at different lens positions the captured images can differ in sharpness, so whether a lens position is the one the human eye sees most clearly can be judged directly from image sharpness. In other words, moving the optical lens 101 back and forth in effect adjusts its position according to the diopter of the human eye.
While the optical lens 101 moves back and forth, the camera 102 can capture and save an image for each step of the voice coil motor; the captured images can be stored in the storage device of the virtual reality playback device, or in a cloud server connected to the device or to a mobile phone. Since the images obtained at different positions differ in sharpness, they can be analyzed by the contrast-comparison principle to find the sharpest one. The principle is: as the optical lens 101 starts to move, the picture gradually becomes sharper and the contrast rises; when the picture is sharpest and the contrast highest, the system is in fact already in focus, but the camera 102 does not know this, so it keeps moving the optical lens 101; when the contrast is seen to fall, and falls again on the next move, the camera 102 knows the focus has been passed; the optical lens 101 then retreats to the position of highest contrast, completing the focusing and finding the sharpest image.
After the sharpest image is obtained, human-eye recognition can be performed on it to detect whether it contains a human-eye image; when it does, the next step can proceed; when no human-eye image was captured, image capture can continue until a captured image containing a human eye is obtained.
The focus position of the optical lens 101 is reached by controlling the lens to move back-and-forth and left-right, and includes, but is not limited to, an axial focus position and a lateral focus position: the lateral focus position generally includes, but is not limited to, being reached by moving the optical lens 101 left and right according to the interpupillary distance of the human eye, and the axial focus position is reached by moving it back and forth according to the diopter of the human eye.
The axial focus position is determined as follows: after the sharpest human-eye image is obtained, the position of the moving optical lens 101 at which that sharpest image was formed through the lens is derived by analysis, which determines the axial focus position of the optical lens 101; the optical lens 101 is controlled to move to the axial focus position and is then locked there.
In this embodiment, the optical lens is controlled to move back and forth within the virtual reality playback device, toward or away from the human eye; the human-eye images formed through the optical lens during this movement are acquired; the acquired images are analyzed to obtain the sharpest human-eye image; the focus position of the optical lens is determined from that sharpest image; and the optical lens is controlled to move to the focus position. The method of the present disclosure can adjust the position of the optical lens to different users' visual acuity, giving different users a sharper visual experience.
Optionally, in an embodiment, when the adaptive-adjustment program of the virtual reality device is executed by the processor, the following method steps are implemented:
acquiring human-eye data from the sharpest human-eye image;
determining the lateral focus position of the optical lens from the human-eye data, adapted to the interpupillary distance of the human eye.
Pupil positions vary between people: some users' left and right eyes are asymmetric; the center of the left eye may be biased to the left or to the right, or the center of the right eye may be biased to the left or to the right, all of which causes deviations in interpupillary distance; differences in eye size also lead to differences in interpupillary distance; and so on. The optical lens 101 can be moved to the corresponding focus position according to the characteristics of the interpupillary distance.
Optionally, acquiring human-eye data from the sharpest human-eye image may include extracting the iris information of the eye from that image; it may also include analyzing the image to obtain the coordinates of the leftmost and rightmost points of the eye and of its uppermost and lowermost points, deriving the center point directly from these coordinates and defining that center as the pupil position; and it may of course include obtaining from the image data such as whether the eye has astigmatism, so that the position of the optical lens 101 can be adjusted according to such data.
In this embodiment, human-eye data are acquired from the sharpest human-eye image, and the lateral focus position of the optical lens is determined from those data. By obtaining the relevant human-eye data from the sharpest image and adjusting the lateral focus position of the optical lens accordingly for each user, the method provides different users with a more comfortable virtual reality playback experience and improves the user experience.
Optionally, in an embodiment, when the adaptive-adjustment program of the virtual reality device is executed by the processor, the following method steps are implemented:
extracting iris information from the human-eye image;
obtaining the pupil positions from the iris information;
calculating the interpupillary distance from the pupil positions.
In this embodiment, iris information can be extracted from the acquired human-eye image and used to lock onto the pupil position, and the pupil position can be normalized. Normalization simplifies processing: without considering the extent of the pupil's peripheral edge region, the pupil is treated directly as a single coordinate point. That pupil position can then be converted into a point coordinate on the image captured by the camera. The same processing can be applied to the pupils of both eyes, yielding the coordinates of the left and right pupils on the camera images, from which the interpupillary distance is calculated. The calculation method may differ according to the characteristics of the user's eyes. Finally, the movement of the optical lens 101 is controlled according to the obtained interpupillary distance; here the optical lens 101 moves roughly in the left-right direction relative to the eyes, accommodating users with different interpupillary distances.
In this embodiment, iris information is extracted from the human-eye image, the pupil positions are obtained from the iris information, and the interpupillary distance is calculated from the pupil positions. Extracting the iris to locate the pupils, and hence the interpupillary distance, makes it convenient for the virtual reality playback device to adjust the position of the optical lens to the user's interpupillary distance, giving the user a more comfortable visual experience.
Optionally, in an embodiment, when the adaptive-adjustment program of the virtual reality device is executed by the processor, the following method steps are implemented:
comparing the pupil positions with preset pupil positions;
determining by analysis the preset type of the pupil positions;
calculating the interpupillary distance using the calculation method corresponding to that preset type.
Different users have different eye characteristics: for example, some users' left pupil is biased to the left and right pupil to the right; for other users the left pupil is biased to the right and the right pupil to the left; or both pupils are biased toward the middle of the two eyes; and so on. Based on these deviations in pupil position, calculation methods targeted at the different pupil configurations can be used to compute the corresponding interpupillary distance.
Optionally, referring to Fig. 3, coordinate data of the pupil positions in the images photographed by the left and right cameras are analyzed. In the initial state, the image-center coordinate of the left camera [102-1] is T1(X1, Y1) and that of the right camera [102-2] is T2(X2, Y2). The pupil coordinate T1′(X1′, Y1′) is obtained from image P1 and T2′(X2′, Y2′) from image P2, giving left- and right-pupil abscissa changes |X1 − X1′| and |X2 − X2′|. These image coordinates are converted into distance information: the relative change X_left of the left pupil and the relative change X_right of the right pupil are calculated according to the following relations.
X_left = (|X1 − X1′| / a1) · D1 · tan(α1 / 2)
X_right = (|X2 − X2′| / a2) · D2 · tan(α2 / 2)
where a1 and a2 are half the image widths of the left and right cameras, respectively.
When X1 > X1′ and X2 < X2′, interpupillary distance = |X1| + |X2| + X_left + X_right (first preset type: left pupil biased left, right pupil biased right).
When X1 < X1′ and X2 < X2′, interpupillary distance = |X1| + |X2| − X_left + X_right (second preset type: left pupil biased right, right pupil biased right).
When X1 > X1′ and X2 > X2′, interpupillary distance = |X1| + |X2| + X_left − X_right (third preset type: left pupil biased left, right pupil biased left).
When X1 < X1′ and X2 > X2′, interpupillary distance = |X1| + |X2| − X_left − X_right (fourth preset type: left pupil biased right, right pupil biased left).
The analysis determines which preset type the current pupil positions belong to; the interpupillary distance can then be computed accurately with the calculation method of that preset type, and the optical lens 101 controlled to move to the appropriate position.
In this embodiment, the pupil positions are compared with preset pupil positions; the preset type of the pupil positions is determined by analysis; and the interpupillary distance is calculated with the calculation method corresponding to that preset type. Using different calculation methods for different pupil positions makes the computed interpupillary distance more accurate, and at the same time improves the accuracy of controlling the position of the optical lens from the interpupillary distance.
In addition, an embodiment of the present disclosure further provides a computer-readable storage medium storing a control program which, when executed by a processor, implements the following operations:
controlling the optical lens to move within the virtual reality playback device;
acquiring the human-eye image formed through the optical lens while the lens is moving;
determining the focus position of the optical lens from the human-eye image;
controlling the optical lens to move to the focus position.
Optionally, when the control program is executed by the processor, the following operations are implemented:
acquiring human-eye data from the sharpest human-eye image;
determining the lateral focus position of the optical lens from the human-eye data, adapted to the interpupillary distance of the human eye.
Optionally, when the control program is executed by the processor, the following operations are implemented:
extracting iris information from the human-eye image;
obtaining the pupil positions from the iris information;
calculating the interpupillary distance from the pupil positions.
Optionally, when the adaptive-adjustment program of the virtual reality playback device is executed by the processor, the following operations are implemented:
comparing the pupil positions with preset pupil positions;
determining by analysis the preset type of the pupil positions;
calculating the interpupillary distance using the calculation method corresponding to that preset type.
An embodiment of the present disclosure further provides a computer-readable storage medium storing computer-executable instructions which, when executed, implement the above control method of the virtual reality playback device.
It should be noted that, as used herein, the terms "comprise", "include", or any variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that includes the element.
The above numbering of the embodiments of the present disclosure is for description only and does not indicate the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware. Based on this understanding, the technical solution of the present disclosure, in essence or in the part contributing to the art, can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc) and including a number of instructions causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to perform the methods described in the various embodiments of the present disclosure.
Those of ordinary skill in the art will appreciate that all or some of the steps in the methods disclosed above, and the functional modules/units in the systems and apparatuses, may be implemented as software, firmware, hardware, or appropriate combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all components may be implemented as software executed by a processor such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage medium includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data). Computer storage media include, but are not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Those of ordinary skill in the art will understand that modifications or equivalent substitutions may be made to the technical solutions of the present disclosure without departing from their spirit and scope, and all such modifications shall fall within the scope of the claims of the present disclosure.
Industrial Applicability
In the control method of the virtual reality playback device proposed by the present disclosure, the optical lens is controlled to move within the virtual reality playback device; the human-eye image formed through the optical lens while the lens is moving is acquired; the focus position of the optical lens is determined from the human-eye image; and the optical lens is controlled to move to the focus position. The method of the present disclosure can adjust the position of the optical lens to the user's particular eye characteristics, achieving focusing of the lens and giving different users a sharper visual experience.

Claims (10)

  1. A control method for a virtual reality playback device, the virtual reality playback device comprising a movably arranged optical lens, the control method comprising the following steps:
    controlling the optical lens to move within the virtual reality playback device;
    acquiring a human-eye image formed through the optical lens while the lens is moving;
    determining a focus position of the optical lens from the human-eye image;
    controlling the optical lens to move to the focus position.
  2. The control method of the virtual reality playback device according to claim 1, wherein the step of acquiring the human-eye image formed through the optical lens while the lens is moving comprises: analyzing the acquired human-eye images using the contrast-comparison principle to obtain the sharpest human-eye image;
    and the step of determining the focus position of the optical lens from the human-eye image comprises: determining the focus position of the optical lens from the sharpest human-eye image.
  3. The control method of the virtual reality playback device according to claim 2, wherein the step of determining the focus position of the optical lens from the sharpest human-eye image comprises:
    from the sharpest human-eye image, determining the axial position of the moving optical lens at which that sharpest image was formed through the lens, thereby determining an axial focus position of the optical lens adapted to the diopter of the human eye.
  4. The control method of the virtual reality playback device according to claim 2, wherein the step of determining the focus position of the optical lens from the sharpest human-eye image comprises:
    acquiring human-eye data from the sharpest human-eye image;
    determining a lateral focus position of the optical lens from the human-eye data, adapted to the interpupillary distance of the human eye.
  5. The control method of the virtual reality playback device according to claim 4, wherein the step of acquiring human-eye data from the sharpest human-eye image comprises:
    extracting iris information of the human eye from the sharpest human-eye image;
    obtaining the pupil positions of the human eye from the iris information;
    calculating the interpupillary distance from the pupil positions.
  6. The control method of the virtual reality playback device according to claim 5, wherein the step of calculating the interpupillary distance from the pupil positions comprises:
    comparing the pupil positions with preset pupil positions;
    determining by analysis the preset type of the pupil positions;
    calculating the interpupillary distance using the calculation method corresponding to that preset type.
  7. The control method of the virtual reality playback device according to any one of claims 1 to 6, further comprising, before the step of determining the focus position of the optical lens from the sharpest human-eye image:
    judging whether the sharpest human-eye image contains a human eye;
    when it does, acquiring human-eye data from the sharpest human-eye image.
  8. A virtual reality playback device, comprising a movably arranged optical lens, a spectroscope located between the user's eyes and the screen of the playback device, and a camera arranged to photograph the human-eye image, formed through the optical lens, that is reflected at the spectroscope; the virtual reality playback device further comprising a memory, a control processor, and a control program stored in the memory and runnable on the control processor, wherein the control program, when executed by the processor, implements the steps of the method according to any one of claims 1 to 7.
  9. The virtual reality playback device according to claim 8, wherein
    the virtual reality playback device further comprises a driving circuit, and the optical lens comprises a bracket, a lens arranged on the bracket, a conductive coil electrically connected to the driving circuit, and an N-pole magnet and an S-pole magnet arranged at the two ends of the lens; under the combined action of the magnetic field produced by the N-pole and S-pole magnets and the current delivered to the coil by the driving circuit, the conductive coil experiences a Lorentz force and thereby moves back and forth.
  10. A computer-readable storage medium storing a control program which, when executed by a control processor, implements the steps of the control method of the virtual reality playback device according to any one of claims 1 to 7.
PCT/CN2018/072136 2017-04-24 2018-01-10 Virtual reality playback device, control method therefor, and computer-readable storage medium WO2018196444A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710273633.3 2017-04-24
CN201710273633.3A CN108732750A (zh) 2017-04-24 Virtual reality playback device, control method therefor, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2018196444A1 true WO2018196444A1 (zh) 2018-11-01

Family

ID=63918744

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/072136 WO2018196444A1 (zh) 2017-04-24 2018-01-10 虚拟现实播放设备及其控制方法及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN108732750A (zh)
WO (1) WO2018196444A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109856802B (zh) * 2019-04-17 2021-08-31 BOE Technology Group Co., Ltd. Pupil distance adjustment method and apparatus, and virtual display device
CN113359270B (zh) * 2021-05-25 2023-06-09 Goertek Inc. Diopter adjustment method for a head-mounted device and diopter adjustment ***

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011113062A1 (en) * 2010-03-12 2011-09-15 Viking Systems, Inc. Stereoscopic visualization system
US20120140037A1 (en) * 2010-12-06 2012-06-07 Lensvector, Inc. Motionless adaptive stereoscopic scene capture with tuneable liquid crystal lenses and stereoscopic auto-focusing methods
CN103353667A (zh) * 2013-06-28 2013-10-16 北京智谷睿拓技术服务有限公司 Imaging adjustment device and method
CN104407437A (zh) * 2014-10-20 2015-03-11 深圳市亿思达科技集团有限公司 Zoom head-mounted device
CN105929534A (zh) * 2015-10-26 2016-09-07 北京蚁视科技有限公司 Diopter-adaptive head-mounted display device
CN106358034A (zh) * 2016-10-19 2017-01-25 深圳市麦极客图像技术有限公司 Apparatus and device for recording and viewing VR video, and VR video recording/playback ***
CN106488099A (zh) * 2016-10-19 2017-03-08 深圳市麦极客图像技术有限公司 Apparatus and device for recording and viewing VR video, and VR video recording/playback ***

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103149690B (zh) * 2013-03-01 2016-03-02 Nanjing University of Science and Technology 3D head-mounted display
US20160143527A1 (en) * 2014-11-20 2016-05-26 Gn Otometrics A/S Head mountable device for measuring eye movement having visible projection means
CN105068249A (zh) * 2015-08-03 2015-11-18 众景视界(北京)科技有限公司 Holographic smart glasses
CN106054386A (zh) * 2016-06-25 2016-10-26 深圳市虚拟现实科技有限公司 Adaptive near-eye display device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011113062A1 (en) * 2010-03-12 2011-09-15 Viking Systems, Inc. Stereoscopic visualization system
US20120140037A1 (en) * 2010-12-06 2012-06-07 Lensvector, Inc. Motionless adaptive stereoscopic scene capture with tuneable liquid crystal lenses and stereoscopic auto-focusing methods
CN103353667A (zh) * 2013-06-28 2013-10-16 北京智谷睿拓技术服务有限公司 Imaging adjustment device and method
CN104407437A (zh) * 2014-10-20 2015-03-11 深圳市亿思达科技集团有限公司 Zoom head-mounted device
CN105929534A (zh) * 2015-10-26 2016-09-07 北京蚁视科技有限公司 Diopter-adaptive head-mounted display device
CN106358034A (zh) * 2016-10-19 2017-01-25 深圳市麦极客图像技术有限公司 Apparatus and device for recording and viewing VR video, and VR video recording/playback ***
CN106488099A (zh) * 2016-10-19 2017-03-08 深圳市麦极客图像技术有限公司 Apparatus and device for recording and viewing VR video, and VR video recording/playback ***

Also Published As

Publication number Publication date
CN108732750A (zh) 2018-11-02

Similar Documents

Publication Publication Date Title
US9998650B2 (en) Image processing apparatus and image pickup apparatus for adding blur in an image according to depth map
CN108496350B (zh) 一种对焦处理方法及设备
US9521311B2 (en) Quick automatic focusing method and image acquisition apparatus
KR20190015573A (ko) 시선 추적에 기초하여 자동 초점 조정하는 이미지 포착 시스템, 장치 및 방법
US10191276B2 (en) Imaging adjustment device and imaging adjustment method
CN108076278B (zh) 一种自动对焦方法、装置及电子设备
CN109725418B (zh) 显示设备、用于调整显示设备的图像呈现的方法及装置
US10261345B2 (en) Imaging adjustment device and imaging adjustment method
WO2017107596A1 (zh) Terminal, photographing method for the terminal, and computer storage medium
US20140376813A1 (en) Subject detection device and control method for the same, imaging apparatus, and storage medium
JP5814692B2 (ja) 撮像装置及びその制御方法、プログラム
US20100171815A1 (en) Image data obtaining method and apparatus therefor
US20140085189A1 (en) Line-of-sight detection apparatus, line-of-sight detection method, and program therefor
TW201541141A (zh) 使用多鏡頭的自動對焦系統及其方法
JP7081599B2 (ja) 情報処理装置、情報処理方法、およびプログラム
US11650660B2 (en) Electronic device, control method, and non-transitory computer readable medium
JP2017049426A (ja) 位相差推定装置、位相差推定方法及び位相差推定プログラム
KR101740728B1 (ko) 영상 데이터 출력 시스템과 이를 위한 영상 출력 장치
WO2018196444A1 (zh) Virtual reality playback device, control method therefor, and computer-readable storage medium
KR102319437B1 (ko) 거리 결정 방법 및 디바이스
JP6645711B2 (ja) 画像処理装置、画像処理方法、プログラム
JP2012182738A (ja) ステレオ画像撮像装置
WO2016203844A1 (ja) 情報処理装置、情報処理方法、およびプログラム
US20160065941A1 (en) Three-dimensional image capturing apparatus and storage medium storing three-dimensional image capturing program
US20160198084A1 (en) Image pickup apparatus, operation support method, and medium recording operation support program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18790611

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18790611

Country of ref document: EP

Kind code of ref document: A1