CN115512092A - VR (virtual reality) glasses 3D anti-dizziness method, system, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN115512092A
Authority
CN
China
Prior art keywords: range, glasses, pupil, visual, visual focus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211282237.4A
Other languages
Chinese (zh)
Inventor
潘玮 (Pan Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tianqi Times Technology Co ltd
Original Assignee
Shenzhen Tianqi Times Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tianqi Times Technology Co ltd filed Critical Shenzhen Tianqi Times Technology Co ltd
Priority to CN202211282237.4A priority Critical patent/CN115512092A/en
Publication of CN115512092A publication Critical patent/CN115512092A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 19/006 — Mixed reality
    • G06T 5/73 — Deblurring; Sharpening
    • G06T 7/0012 — Biomedical image inspection
    • G06T 7/11 — Region-based segmentation
    • G06T 2207/10148 — Varying focus
    • G06T 2207/30041 — Eye; Retina; Ophthalmic
    • G06T 2219/2004 — Aligning objects, relative positioning of parts


Abstract

The application relates to the field of VR glasses, and in particular to a 3D anti-dizziness method, system, device, and computer-readable storage medium for VR glasses. The method comprises the following steps: acquiring an image of the user's eyes in real time; analyzing the eyes' focusing range from the acquired image; determining the visual focus position from the focusing range; and calibrating the displayed picture based on the visual focus position. Because the focal point of the picture moves with the user's visual focus, viewing comfort is improved and dizziness is reduced.

Description

VR (virtual reality) glasses 3D anti-dizziness method, system, equipment and computer readable storage medium
Technical Field
The application relates to the field of VR glasses, in particular to a VR glasses 3D anti-dizziness method, system, equipment and computer readable storage medium.
Background
With advances in technology, VR glasses have emerged: once a user puts them on, 2D plane images are converted into 3D stereoscopic images, giving the wearer an immersive feeling when watching movies, playing games, and so on. However, viewers wearing existing VR glasses to watch a 3D movie are prone to dizziness, one cause being a mismatch between the virtual and the real. In an ordinary movie, the center of the picture is the focal point at the time of shooting: the picture near the focal point is sharp, and the left and right pictures can be superimposed to perceive depth. The edges of the picture are blurred, so when a viewer's visual center moves to the edge, the picture does not become sharp as the eyes refocus; this repeated failure to focus produces a feeling of dizziness.
Therefore, the prior art needs to be improved based on the above problems.
Disclosure of Invention
The aim of the application is to provide a 3D anti-dizziness method for VR glasses, so as to solve the technical problem of dizziness when watching 3D content caused by the mismatch between the virtual and the real.
The technical purpose of the application is realized by the following technical scheme:
a VR glasses 3D anti-glare method, the method comprising:
acquiring an eye image in real time;
analyzing the focusing range of human eyes based on the obtained human eye image;
judging the position of a visual focus based on the focusing range of human eyes;
based on the visual focus position, the playing frame is calibrated.
By adopting this technical scheme, the eye image is acquired in real time and used to analyze the eyes' focusing range; because data acquired in real time most closely matches the current user's state, delays are reduced and the user experience is improved. The focusing range is used to determine the visual focus position, which narrows the picture area to be sharpened and improves accuracy. By calibrating the displayed picture based on the visual focus position, the focal point of the display can move with the user's visual focus, reducing the feeling of vertigo while viewing.
The present invention in a preferred example may be further configured to: analyzing the human eye focus range based on the acquired human eye image comprises:
acquiring pupil parameters, the pupil parameters including pupil constriction, pupil dilation, and interpupillary distance;
acquiring the user's visual range;
dividing a plurality of image areas based on a user visual range;
acquiring an image area of sight focusing corresponding to eyeballs;
and obtaining the visual focus position based on the image area of the eye focus corresponding to the eyeball and the pupil parameter.
By adopting this technical scheme, pupil parameters are acquired to determine the user's visual range or visual area: pupil constriction indicates that the user's visual focus is shifting farther away, while pupil dilation indicates that it is shifting nearer. Dividing the visual range into regions localizes the eyeball's position, giving the rotating eyeball a target point to search for. Directional movement of the pupil indicates horizontal focusing, and the current user's visual focus can be inferred by combining the pupil's position relative to the eyeball with the positions of the eyeballs and pupils.
The invention in a preferred example may be further configured to: based on the visual focus position, calibrating the play frame comprises:
acquiring a visual focus position area range;
and sharpening the picture within the preset visual focus area range.
By adopting this technical scheme, the visual focus position area range is obtained to determine the extent of the picture to be sharpened, which improves range accuracy and lays the groundwork for dividing the picture into sharpened and blurred areas. Sharpening improves the clarity of the picture the user is watching and enhances the visual experience. The contrast between the sharpened area and the unsharpened area outside the preset visual focus range reduces the feeling of vertigo to a certain extent.
The present invention in a preferred example may be further configured to: based on the visual focus position, calibrating the play frame further comprises:
blurring the playing picture outside the range of the visual focus position area.
By adopting the technical scheme, the played picture outside the visual focus position area range is subjected to blurring treatment, and is combined with the sharpening of the visual focus preset area range, so that the visual focus position area range is kept clear, the virtual-real correspondence of the picture is improved, the depth information is seen by overlapping the picture, and the vertigo caused by the fact that the virtual and real are not corresponding when the 3D film is watched is favorably reduced.
The invention in a preferred example may be further configured to: blurring the playing pictures outside the range of the visual focus position area comprises:
obtaining a critical value of a visual focus position area range;
starting from the critical value of the visual focus position area range, controlling the displayed picture to blur gradually in the direction away from the central point.
By adopting this technical scheme, the critical value of the visual focus position area range is obtained to determine the boundary between sharpening and blurring, and the picture edges are blurred. Controlling the displayed picture to blur gradually away from the central point enhances the user's visual experience, softens the contrast between sharpened and blurred areas, and avoids the degraded experience caused by an abrupt transition or a picture that feels small.
The invention in a preferred example may be further configured to: acquiring pupil parameters including pupil constriction and pupil dilation, further comprising:
acquiring the interpupillary distance of a current user;
comparing it with the interpupillary distances stored in the system;
if the current user interpupillary distance is within the interpupillary distance range in the system, calling a playing picture calibration parameter corresponding to the interpupillary distance range in the system;
and if the current user's interpupillary distance is not in the system, calibrating the displayed picture and then storing the calibration parameters against the current user's interpupillary distance.
By adopting this technical scheme, comparing the current user's interpupillary distance with those in the system makes it easy to determine whether it is already recorded; if it is not, it can be recorded and stored. This gives the VR glasses a memory function, so parameters can be recalled conveniently, greatly reducing the calibration the user would otherwise repeat each time, saving calibration time, improving the user's overall experience, and aiding market acceptance.
The second aim of the application is to provide a VR glasses 3D anti-dizziness system.
This aim is achieved by the following technical scheme: the system comprises:
the acquisition module is used for acquiring human eye images in real time;
the analysis module is used for analyzing the focusing range of human eyes based on the acquired human eye image;
the judging module is used for judging the position of the visual focus based on the focusing range of human eyes;
and the calibration module is used for calibrating the playing picture based on the visual focus position.
By adopting this technical scheme, the eye image is acquired in real time and used to analyze the eyes' focusing range; because data acquired in real time most closely matches the current user's state, delays are reduced and the user experience is improved. The focusing range is used to determine the visual focus position, which narrows the picture area to be sharpened and improves accuracy. By calibrating the displayed picture based on the visual focus position, the focal point of the display can move with the user's visual focus, reducing the feeling of vertigo while viewing.
The present invention in a preferred example may be further configured to: the analysis module comprises:
the parameter acquisition sub-module is used for acquiring pupil parameters, wherein the pupil parameters comprise pupil contraction, pupil enlargement and interpupillary distance;
the visual range acquisition submodule is used for acquiring the visual range of the user;
the dividing submodule is used for dividing a plurality of image areas based on the visual range of the user;
the image area acquisition sub-module is used for acquiring an image area of the sight focus corresponding to the eyeballs;
and the position output sub-module is used for obtaining the visual focus position based on the image area of the sight focus corresponding to the eyeballs and the pupil parameters.
By adopting this technical scheme, pupil parameters are acquired to determine the user's visual range or visual area: pupil constriction indicates that the user's visual focus is shifting farther away, while pupil dilation indicates that it is shifting nearer. Dividing the visual range into regions localizes the eyeball's position, giving the rotating eyeball a target point to search for. Directional movement of the pupil indicates horizontal focusing, and the current user's visual focus can be inferred from the pupil's position relative to the eyeball together with the positions of the eyeballs and pupils.
The third aim of the application is to provide a VR glasses 3D anti-dizziness device.
This aim is achieved by the following technical solution: a VR glasses 3D anti-dizziness device comprising a memory and a processor, the memory storing a computer program that can be loaded by the processor to execute the above VR glasses 3D anti-dizziness method.
The fourth aim of the application is to provide a computer-readable storage medium.
This aim is achieved by the following technical solution:
a computer-readable storage medium storing a computer program that can be loaded by a processor to execute the above VR glasses 3D anti-dizziness method.
In summary, the present application includes at least one of the following beneficial technical effects:
1. The eye image is acquired in real time to analyze the eyes' focusing range; because real-time data most closely matches the current user's state, delays are reduced and the user experience is improved. The focusing range is used to determine the visual focus position, narrowing the picture area to be sharpened and improving accuracy. Calibrating the displayed picture based on the visual focus position lets the focal point of the display move with the user's visual focus, reducing dizziness while viewing.
2. Pupil parameters are acquired to determine the user's visual range or visual area: pupil constriction indicates the visual focus shifting farther away, and pupil dilation indicates it shifting nearer. Dividing the visual range into regions localizes the eyeball's position, giving the rotating eyeball a target point to search for. Directional movement of the pupil indicates horizontal focusing, and the current user's visual focus can be inferred from the pupil's position relative to the eyeball together with the positions of the eyeballs and pupils.
3. Comparing the current user's interpupillary distance with those in the system makes it easy to determine whether it is already recorded; if not, it can be recorded and stored. This gives the VR glasses a memory function, so parameters can be recalled conveniently, greatly reducing per-use calibration, saving time, improving the overall experience, and aiding market acceptance.
Drawings
Fig. 1 is a flowchart illustrating the steps of the VR glasses 3D anti-dizziness method of the present application.
Fig. 2 is a block diagram of the structure of the VR glasses 3D anti-dizziness system of the present application.
Reference numerals: 1. acquisition module; 2. analysis module; 3. judgment module; 4. calibration module.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship, unless otherwise specified.
The embodiments of the present application will be described in further detail with reference to the drawings attached hereto.
The embodiment of the application provides a 3D anti-dizziness method for VR glasses, and with reference to FIG. 1, the main flow of the method is described as follows:
s1, acquiring an eye image in real time;
the frames and video contents presented by the VR glasses are changed in real time, and the images of human eyes need to be acquired in real time in cooperation with the change of the frames, so that the conditions of frame delay and frame discontinuity are reduced as much as possible, and the best experience is provided for customers.
S2, analyzing the focusing range of human eyes based on the obtained human eye image;
the human eye images and the acquisition can be performed through one or more built-in cameras in the VR to shoot pictures, and other modes capable of recording the human eye images can be used. The obtained human eye image is generally mainly used for shooting a binocular picture of a user, or a mode of obtaining two eyes by using a video is used for analyzing the focusing range of human eyes, so that the region watched by the human eyes can be known.
S3, judging the position of a visual focus based on the focusing range of human eyes;
the focusing of human eyes is roughly divided into horizontal, vertical and near-far dimensions, and an image obtained when the focal point of the eyeball of a person is changed can be found. When the horizontal focusing is carried out, the pupil is moved up, down, left and right, and the visual focus position of the current viewer is conjectured according to the relative positions of the pupil to the eyeball and the positions of the pupil of the left eyeball and the pupil of the right eyeball. When focusing far and near, the pupil is realized by contraction and enlargement, the image of the eyeball is shot, and the distance information of the visual focus of the current viewer can be estimated through the size of the pupil by combining the image identification technology. The human eye focusing range is a range value, the specific value depends on the pupil parameter of each person, and the visual focus position can be judged through the human eye focusing range.
And S4, calibrating the playing picture based on the visual focus position.
To ensure the best viewing experience, calibration must account for differences between viewers. This is done by playing dedicated calibration pictures: the viewer is prompted on screen to focus on objects in the up, down, left, right, near, and far directions while eyeball images are captured, so the display can track each viewer's focus changes. From the standpoint of visual perception, the image in the gazed-at region should have sharp contours while the image outside that region is blurred and approximate. Therefore, by sharpening and blurring the corresponding regions based on the estimated eyeball focus position, a picture can be presented whose focal (sharp) point changes with where the eyes are focused.
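A minimal sketch of such a per-viewer calibration pass is shown below; the `capture_eye_image` and `estimate_focus` callbacks and the target list are hypothetical placeholders for the device's actual capture and analysis steps, not part of the patent:

```python
def run_calibration(targets, capture_eye_image, estimate_focus):
    """Play calibration targets in the up/down/left/right/near/far
    directions and record the estimated eye focus for each, yielding a
    per-viewer mapping from calibration target to measured eye state.

    `capture_eye_image` grabs the current eye image once the viewer
    fixates on the target; `estimate_focus` turns it into a focus estimate.
    """
    mapping = {}
    for target in targets:  # e.g. ("left", "right", "up", "down", "near", "far")
        # On a real device the prompt would be drawn on screen here and
        # the system would wait for the viewer to fixate on the target.
        eye_image = capture_eye_image(target)
        mapping[target] = estimate_focus(eye_image)
    return mapping
```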
Specifically, in some possible embodiments, analyzing the human eye focus range based on the acquired human eye image includes:
acquiring pupil parameters including pupil constriction, pupil dilation and interpupillary distance;
acquiring a visual range of a user;
dividing a plurality of image areas based on the visual range of the user;
acquiring an image area of sight focusing corresponding to eyeballs;
and obtaining the visual focus position based on the image area of the eye focus corresponding to the eyeball and the pupil parameter.
The pupil parameters are acquired to determine the user's visual range. Pupil constriction and dilation are relative: the size of the user's pupil under normal conditions is taken as an initial value, and the changed state is compared with this initial state to decide whether the pupil is constricted or dilated. The interpupillary distance is the distance between the user's left and right pupils in the acquired eye image; it differs from person to person, as do the corresponding visual range and visual focus position. The user's visual range is the portion of the displayed picture visible to the user, and it is divided into several image regions to narrow down the area the eyes are focused on and improve the accuracy of locating it. Finally, the visual focus position is obtained from the gazed-at image region together with the pupil parameters.
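The region division described above can be sketched as a simple grid split over the visible picture area; the grid granularity and the function names are illustrative assumptions rather than the patent's specific scheme:

```python
def divide_visual_range(width, height, rows, cols):
    """Split the user's visible picture area into a rows x cols grid of
    rectangular image regions, returned as (x0, y0, x1, y1) tuples."""
    regions = []
    for r in range(rows):
        for c in range(cols):
            regions.append((c * width // cols, r * height // rows,
                            (c + 1) * width // cols, (r + 1) * height // rows))
    return regions


def region_of_gaze(regions, gaze_xy):
    """Return the index of the image region the line of sight falls in,
    or None if the gaze point lies outside the visible range."""
    x, y = gaze_xy
    for i, (x0, y0, x1, y1) in enumerate(regions):
        if x0 <= x < x1 and y0 <= y < y1:
            return i
    return None
```

A finer grid narrows the gazed-at area more precisely at the cost of needing a more accurate gaze estimate.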
Specifically, in some possible embodiments, calibrating the play frame based on the visual focus position includes:
acquiring a visual focus position area range;
and sharpening the picture in the area based on the visual focus preset area range.
The visual focus position area range is a region delimited around the visual focus position. Delimiting this region and sharpening the picture inside it makes the range precise and lays the groundwork for dividing the picture into sharpened and blurred areas; sharpening improves the clarity of the picture the user is watching and enhances the visual experience. Because the displayed picture follows the focus as it moves, the sharpened area moves with the focus as well, which, compared with leaving the area outside the preset visual focus range unsharpened, reduces the feeling of vertigo to a certain extent.
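One conventional way to realize the sharpening step is an unsharp mask. The sketch below, using a 3x3 box blur on a grayscale frame with assumed names and parameters, is illustrative rather than the patent's specific method:

```python
import numpy as np


def sharpen_region(frame, region, amount=1.0):
    """Unsharp-mask the pixels inside `region` = (x0, y0, x1, y1).

    A 3x3 box blur supplies the low-frequency estimate; the result is
    patch + amount * (patch - blurred), clipped to [0, 255]. `frame` is
    assumed to be a 2D grayscale uint8 array.
    """
    x0, y0, x1, y1 = region
    patch = frame[y0:y1, x0:x1].astype(np.float32)
    # 3x3 box blur via edge-padded neighbourhood averaging.
    padded = np.pad(patch, 1, mode="edge")
    blurred = sum(padded[dy:dy + patch.shape[0], dx:dx + patch.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    out = frame.copy()
    out[y0:y1, x0:x1] = np.clip(patch + amount * (patch - blurred), 0, 255)
    return out
```

Pixels on an edge gain contrast (bright side brighter, dark side clipped), while flat areas are left unchanged, which is the intended "sharper inside the focus region" effect.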
Specifically, in some possible embodiments, calibrating the play frame further includes, based on the visual focus position:
blurring the playing picture outside the range of the visual focus position area.
Blurring deepens the apparent depth of field, making it easier to hold focus at the sharpened position. The picture outside the visual focus position area range is blurred so that blurring and sharpening correspond to each other, greatly reducing the dizziness users experience with the glasses. The blurring may be implemented by blur filtering, mosaicking, gradual mosaicking, and so on; the application does not limit the specific technique.
Specifically, in some possible embodiments, blurring the playing frames outside the range of the visual focus position area includes:
obtaining a critical value of a visual focus position area range;
starting from the critical value of the visual focus position area range, and controlling the played picture to gradually blur towards the direction far away from the central point.
The critical value of the visual focus position area range distinguishes the sharpened area from the blurred area: sharpening is applied inside the range, and blurring begins at the critical value, with the degree of blur of the displayed picture increasing with distance from the central point. This gradual blurring gives the user a range to adapt to, avoids discomfort, and greatly reduces the dizziness users experience with the glasses.
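The gradual sharp-to-blur transition can be modeled as a distance-dependent blur weight; the linear ramp and the parameter names below are illustrative assumptions:

```python
def blur_weight(distance, critical_radius, max_radius):
    """Blur strength for a pixel at `distance` from the visual focus centre.

    Inside the critical radius the picture stays sharp (weight 0); beyond
    it the weight ramps linearly up to 1 at max_radius, producing a gradual
    sharp-to-blurred transition instead of an abrupt boundary.
    """
    if distance <= critical_radius:
        return 0.0
    if distance >= max_radius:
        return 1.0
    return (distance - critical_radius) / (max_radius - critical_radius)
```

The weight would then scale the blur filter's strength (e.g. its kernel size or mix factor) per pixel or per region.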
Specifically, in some possible embodiments, acquiring pupil parameters, where the pupil parameters include pupil constriction and pupil dilation, further includes:
acquiring the interpupillary distance of a current user;
comparing it with the interpupillary distances stored in the system;
if the current user interpupillary distance is within the interpupillary distance range in the system, calling a playing picture calibration parameter corresponding to the interpupillary distance range in the system;
and if the current user interpupillary distance is not in the system, after the playing picture is calibrated, storing the calibration parameters after corresponding to the current user interpupillary distance.
This step mainly records and stores data for different interpupillary distances, so that when the same interpupillary distance is encountered again the original calibration data can be called up directly, minimizing the operating steps, reducing dizziness, and improving the user experience. For example, if the user's interpupillary distance is about 54 mm but the system only holds records for 58 mm-66 mm, calibration of the displayed picture is started; once it is finished, the current user's interpupillary distance and the resulting calibration parameters are stored together, so that when a subsequent user's interpupillary distance is about 54 mm the corresponding calibration parameters can be called up.
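The look-up/store behaviour for interpupillary distances can be sketched as a small keyed cache; the 2 mm binning, the class name, and the parameter format are illustrative assumptions:

```python
class CalibrationStore:
    """Cache of calibration parameters keyed by interpupillary-distance
    (IPD) ranges, mimicking the look-up/store behaviour described above.
    The 2 mm range granularity is an assumed bin width."""

    def __init__(self, bin_mm=2.0):
        self.bin_mm = bin_mm
        self._params = {}  # bin index -> calibration parameters

    def _bin(self, ipd_mm):
        return int(ipd_mm // self.bin_mm)

    def lookup(self, ipd_mm):
        """Return stored calibration parameters for this IPD range, or
        None if it has not been calibrated yet (which would trigger a
        fresh calibration pass)."""
        return self._params.get(self._bin(ipd_mm))

    def store(self, ipd_mm, params):
        """After calibrating the displayed picture, remember the
        parameters against the current user's IPD range."""
        self._params[self._bin(ipd_mm)] = params
```

A second user whose IPD falls into an already-calibrated range gets the stored parameters immediately, skipping the calibration pass.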
In another embodiment, the present application discloses a VR glasses 3D anti-dizziness system, the system comprising:
the acquisition module is used for acquiring human eye images in real time;
the analysis module is used for analyzing the focusing range of human eyes based on the acquired human eye image;
the judging module is used for judging the position of the visual focus based on the focusing range of human eyes;
and the calibration module is used for calibrating the playing picture based on the visual focus position.
The VR glasses 3D anti-dizziness system provided by this embodiment achieves the same technical effects as the foregoing embodiments thanks to the functions of its modules and the logical connections between them; for the principle analysis, refer to the related description of the steps of the VR glasses 3D anti-dizziness method, which is not repeated here.
Specifically, in some possible embodiments, the analysis module includes:
the parameter acquisition sub-module is used for acquiring pupil parameters, wherein the pupil parameters comprise pupil contraction, pupil enlargement and interpupillary distance;
the visual range acquisition sub-module is used for acquiring the visual range of the user;
the dividing submodule is used for dividing a plurality of image areas based on the visual range of the user;
the image area acquisition sub-module is used for acquiring an image area of the sight focus corresponding to the eyeballs;
and the visual focus position acquisition sub-module is used for obtaining the visual focus position based on the image area of the sight focus corresponding to the eyeballs and the pupil parameters.
The VR glasses 3D anti-dizziness system provided by this embodiment achieves the same technical effects as the foregoing embodiments thanks to the functions of its sub-modules and the logical connections between them; for the principle analysis, refer to the related description of the steps of the VR glasses 3D anti-dizziness method, which is not repeated here.
Specifically, in some possible embodiments, the calibration module includes:
the range acquisition submodule is used for acquiring a visual focus position area range;
and the sharpening submodule is used for sharpening the picture in the area based on the visual focus preset area range.
Because of the functions of the sub-modules themselves and the logical connections between them, the VR glasses 3D anti-dizziness system provided in this embodiment can achieve the same technical effects as the foregoing embodiments; for the principle analysis, reference may be made to the related description of the steps of the VR glasses 3D anti-dizziness method, which will not be repeated here.
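For illustration only, the behaviour of the sharpening sub-module — sharpening the picture within the visual focus position area range while leaving the rest untouched — can be sketched with a simple unsharp mask; the 3x3 box blur standing in for a proper Gaussian, and all names, are assumptions of this sketch:

```python
import numpy as np

def sharpen_region(image, region, amount=1.5):
    """Apply an unsharp mask inside the visual-focus region only.

    image: 2-D grayscale array; region: (x, y, w, h).
    The blur is computed from the (edge-padded) patch alone, which is
    a simplification at the region border.
    """
    x, y, w, h = region
    patch = image[y:y + h, x:x + w].astype(float)
    # 3x3 box blur via nine shifted views of an edge-padded copy
    padded = np.pad(patch, 1, mode="edge")
    blurred = sum(padded[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    # Unsharp mask: boost the difference between the patch and its blur
    sharpened = np.clip(patch + amount * (patch - blurred), 0, 255)
    out = image.astype(float).copy()
    out[y:y + h, x:x + w] = sharpened
    return out
```

On a perfectly uniform patch the blur equals the patch, so sharpening leaves the pixel values unchanged; only edges and detail inside the focus region are boosted.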
Specifically, in some possible embodiments, the calibration module further includes:
and the blurring sub-module is used for blurring the playing picture outside the visual focus position area range.
Because of the functions of the sub-modules themselves and the logical connections between them, the VR glasses 3D anti-dizziness system provided by this embodiment can implement the steps of the foregoing embodiment and thus achieve the same technical effects.
Specifically, in some possible embodiments, the blurring sub-module includes:
a critical value acquisition unit, used for obtaining the critical value of the visual focus position area range;
and a blur control unit, used for controlling the playing picture to blur gradually, starting from the critical value of the visual focus position area range and moving outward, away from the central point.
Because of the functions of the units themselves and the logical connections between them, the VR glasses 3D anti-dizziness system provided by this embodiment can achieve the same technical effects as the foregoing embodiments; for the principle analysis, reference may be made to the related description of the steps of the VR glasses 3D anti-dizziness method, which will not be repeated here.
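The gradient blur described by these units — no blur inside the critical value of the visual focus position area range, and increasing blur with distance beyond it — can be sketched as a per-pixel blur-strength map; the linear ramp and the `falloff` parameter are illustrative assumptions, not details from the patent:

```python
import numpy as np

def blur_strength_map(shape, center, critical_radius, falloff=100.0):
    """Blur weight per pixel: 0 inside the focus radius, then a linear
    ramp up to 1 as distance from the focus centre grows.

    shape: (height, width); center: (x, y) visual focus position;
    critical_radius: critical value of the focus position area range.
    """
    ys, xs = np.indices(shape)
    dist = np.hypot(xs - center[0], ys - center[1])
    return np.clip((dist - critical_radius) / falloff, 0.0, 1.0)
```

A renderer could then blend a blurred copy of the frame with the sharp frame per pixel, weighted by this map, so blur increases smoothly away from the central point.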
Specifically, in some possible embodiments, the parameter obtaining sub-module further includes:
the distance acquisition unit is used for acquiring the interpupillary distance of the current user;
the comparison unit is used for comparing the current user interpupillary distance with the interpupillary distance ranges stored in the system;
the calling unit is used for calling the playing picture calibration parameters corresponding to an interpupillary distance range in the system when the interpupillary distance of the current user falls within that range;
and the corresponding storage unit is used for, when the interpupillary distance of the current user is not within any interpupillary distance range in the system, calibrating the playing picture and then storing the calibration parameters in correspondence with the current user interpupillary distance.
Because of the functions of the units themselves and the logical connections between them, the VR glasses 3D anti-dizziness system provided by this embodiment can achieve the same technical effects as the foregoing embodiments; for the principle analysis, reference may be made to the related description of the steps of the VR glasses 3D anti-dizziness method, which will not be repeated here.
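A minimal sketch of the interpupillary-distance lookup performed by these units — reuse stored calibration parameters when the current user's interpupillary distance matches a stored entry, otherwise calibrate and store new parameters — might look like this; the tolerance-based matching and all names are hypothetical:

```python
def get_calibration(stored, user_ipd_mm, tolerance=1.0):
    """Return calibration parameters whose stored interpupillary distance
    is within `tolerance` mm of the current user's; None if no match,
    in which case the caller should calibrate and store the result."""
    for ipd, params in stored.items():
        if abs(ipd - user_ipd_mm) <= tolerance:
            return params
    return None

def store_calibration(stored, user_ipd_mm, params):
    """Save new calibration parameters keyed by the user's interpupillary
    distance, so the next user with a similar distance can reuse them."""
    stored[user_ipd_mm] = params
```

In use, a returning user with a 63.4 mm interpupillary distance would match a stored 63.0 mm entry and skip recalibration, while a 70.0 mm user would trigger a fresh calibration whose result is then stored.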
An embodiment of the application further provides a VR glasses 3D anti-dizziness device, comprising a memory and a processor, wherein the memory stores a computer program that can be loaded by the processor to execute the above VR glasses 3D anti-dizziness method.
An embodiment of the application further provides a computer-readable storage medium storing a computer program that can be loaded by a processor to execute the above VR glasses 3D anti-dizziness method.
Because the computer program in the readable storage medium, when loaded and executed on the processor, implements the steps of the foregoing embodiment, the readable storage medium provided by this embodiment can achieve the same technical effects; for the principle analysis, reference may be made to the related description of the foregoing method steps, which will not be repeated here.
The computer-readable storage medium includes, for example: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated; a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly defined otherwise.
Thus, any process or method description in the flow charts, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logic function or process. The scope of the preferred embodiments of the present application also includes alternative implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art.
The above embodiments are preferred embodiments of the present application and do not limit its scope of protection; therefore, equivalent changes made to the structure, shape, and principle of the present application shall all be covered by its scope of protection.

Claims (10)

1. A VR glasses 3D anti-dizziness method, comprising:
acquiring an eye image in real time;
analyzing the focusing range of human eyes based on the obtained human eye image;
judging the position of a visual focus based on the focusing range of human eyes;
and calibrating the playing picture based on the visual focus position.
2. The VR glasses 3D anti-dizziness method of claim 1, wherein analyzing the focusing range of human eyes based on the acquired human eye image comprises:
acquiring pupil parameters including pupil constriction, pupil dilation and interpupillary distance;
acquiring a visual range of a user;
dividing a plurality of image areas based on a user visual range;
acquiring the image area on which the eyeballs' sight is focused;
and obtaining the visual focus position based on the image area on which the sight is focused and the pupil parameters.
3. The VR glasses 3D anti-dizziness method of claim 1, wherein calibrating the playing picture based on the visual focus position comprises:
acquiring a visual focus position area range;
and sharpening the picture within the visual focus position area range.
4. The VR glasses 3D anti-dizziness method of claim 1, wherein calibrating the playing picture based on the visual focus position further comprises:
blurring the playing picture outside the range of the visual focus position area.
5. The VR glasses 3D anti-dizziness method of claim 4, wherein blurring the playing picture outside the visual focus position area range comprises:
obtaining a critical value of a visual focus position area range;
and controlling the playing picture to blur gradually, starting from the critical value of the visual focus position area range and moving away from the central point.
6. The VR glasses 3D anti-dizziness method of claim 2, wherein acquiring pupil parameters, the pupil parameters including pupil constriction and pupil dilation, further comprises:
acquiring the interpupillary distance of a current user;
comparing the current user interpupillary distance with the interpupillary distance ranges stored in the system;
if the current user interpupillary distance is within the interpupillary distance range in the system, calling a playing picture calibration parameter corresponding to the interpupillary distance range in the system;
and if the current user interpupillary distance is not within any interpupillary distance range in the system, calibrating the playing picture and then storing the calibration parameters in correspondence with the current user interpupillary distance.
7. A VR glasses 3D anti-dizziness system, the system comprising:
the acquisition module is used for acquiring human eye images in real time;
the analysis module is used for analyzing the focusing range of human eyes based on the acquired human eye image;
the judging module is used for judging the position of the visual focus based on the focusing range of human eyes;
and the calibration module is used for calibrating the playing picture based on the visual focus position.
8. The VR glasses 3D anti-dizziness system of claim 7, wherein the analysis module comprises:
the parameter acquisition sub-module is used for acquiring pupil parameters, wherein the pupil parameters comprise pupil contraction, pupil enlargement and interpupillary distance;
the visual range acquisition sub-module is used for acquiring the visual range of the user;
the dividing submodule is used for dividing a plurality of image areas based on the visual range of the user;
the image area acquisition sub-module is used for acquiring the image area on which the eyeballs' sight is focused;
and the position output sub-module is used for obtaining the visual focus position based on the image area on which the sight is focused and the pupil parameters.
9. A VR glasses 3D anti-dizziness device, comprising a memory and a processor, the memory storing a computer program that can be loaded by the processor to execute the VR glasses 3D anti-dizziness method of any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program that can be loaded by a processor to execute the VR glasses 3D anti-dizziness method of any one of claims 1 to 6.
CN202211282237.4A 2022-10-19 2022-10-19 VR (virtual reality) glasses 3D anti-dizziness method, system, equipment and computer readable storage medium Pending CN115512092A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211282237.4A CN115512092A (en) 2022-10-19 2022-10-19 VR (virtual reality) glasses 3D anti-dizziness method, system, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211282237.4A CN115512092A (en) 2022-10-19 2022-10-19 VR (virtual reality) glasses 3D anti-dizziness method, system, equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115512092A true CN115512092A (en) 2022-12-23

Family

ID=84510467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211282237.4A Pending CN115512092A (en) 2022-10-19 2022-10-19 VR (virtual reality) glasses 3D anti-dizziness method, system, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115512092A (en)

Similar Documents

Publication Publication Date Title
CN109901710B (en) Media file processing method and device, storage medium and terminal
US8913790B2 (en) System and method for analyzing three-dimensional (3D) media content
EP2362670B1 (en) Method and apparatus for processing three-dimensional images
US20050190180A1 (en) Stereoscopic display system with flexible rendering of disparity map according to the stereoscopic fusing capability of the observer
US10846927B2 (en) Method and apparatus for displaying a bullet-style comment in a virtual reality system
CN105894567B (en) Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene
US20150363905A1 (en) Improvements in and relating to image making
JP2004221700A (en) Stereoscopic image processing method and apparatus
EP2072004A1 (en) Method of simulating blur in digitally processed images
CN108124509B (en) Image display method, wearable intelligent device and storage medium
US10885651B2 (en) Information processing method, wearable electronic device, and processing apparatus and system
US20050270284A1 (en) Parallax scanning through scene object position manipulation
US9167223B2 (en) Stereoscopic video processing device and method, and program
US20110316984A1 (en) Adaptive adjustment of depth cues in a stereo telepresence system
KR102066058B1 (en) Method and device for correcting distortion errors due to accommodation effect in stereoscopic display
JP2004007395A (en) Stereoscopic image processing method and device
JP2002223458A (en) Stereoscopic video image generator
JP2023515205A (en) Display method, device, terminal device and computer program
CN106851249A (en) Image processing method and display device
JP2004221699A (en) Stereoscopic image processing method and apparatus
KR20200128661A (en) Apparatus and method for generating a view image
CN111164542A (en) Method of modifying an image on a computing device
JP2004220127A (en) Stereoscopic image processing method and device
CN115512092A (en) VR (virtual reality) glasses 3D anti-dizziness method, system, equipment and computer readable storage medium
CN117412020A (en) Parallax adjustment method, parallax adjustment device, storage medium and computing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination