CN107329265A - Method and device for interpupillary distance and depth-of-field laser optimization of a virtual reality helmet - Google Patents

Method and device for interpupillary distance and depth-of-field laser optimization of a virtual reality helmet Download PDF

Info

Publication number
CN107329265A
CN107329265A (application CN201710544194.5A)
Authority
CN
China
Prior art keywords
depth of field
virtual reality helmet
observation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710544194.5A
Other languages
Chinese (zh)
Inventor
党少军
姜燕冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Virtual Reality Technology Co Ltd
Original Assignee
Shenzhen Virtual Reality Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Virtual Reality Technology Co Ltd filed Critical Shenzhen Virtual Reality Technology Co Ltd
Publication of CN107329265A
Legal status: Pending

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01MTESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M11/00Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01MTESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M11/00Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
    • G01M11/02Testing optical properties
    • G01M11/0242Testing optical properties by measuring geometrical properties or aberrations
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01MTESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M11/00Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
    • G01M11/02Testing optical properties
    • G01M11/0242Testing optical properties by measuring geometrical properties or aberrations
    • G01M11/0257Testing optical properties by measuring geometrical properties or aberrations by analyzing the image formed by the object to be tested
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0012Optical design, e.g. procedures, algorithms, optimisation routines
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0176Head mounted characterised by mechanical features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/011Head-up displays characterised by optical features comprising device for correcting geometrical aberrations, distortion
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0127Head-up displays characterised by optical features comprising devices increasing the depth of field
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0141Head-up displays characterised by optical features characterised by the informative content of the display
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0149Head-up displays characterised by mechanical features
    • G02B2027/0161Head-up displays characterised by mechanical features characterised by the relative positioning of the constitutive elements
    • G02B2027/0163Electric or electronic control thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Eye Examination Apparatus (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Eyeglasses (AREA)

Abstract

The present invention provides a method and device for optimizing the interpupillary distance and depth of field of a virtual reality helmet by laser, comprising a test unit, an observation unit, an image unit, and a processing unit. The test unit includes the virtual reality helmet to be set and a fixing structure; the helmet includes a display screen, and the fixing structure includes a clamping mechanism, which can be opened to receive the helmet, and a position-limiting mechanism. Compared with the prior art, the combination of test unit, observation unit, image unit, and processing unit solves the problem of depth-of-field setting simply and effectively. The observation unit is driven by a motor along an eyepiece track, so observations can conveniently be made from multiple angles and multiple observation points can be set.

Description

Method and device for interpupillary distance and depth-of-field laser optimization of a virtual reality helmet
Technical field
The present invention relates to the field of virtual reality, and more specifically to a method and device for optimizing the interpupillary distance and depth of field of a virtual reality helmet by laser.
Background art
Distortion lenses are used in many fields. In a virtual reality system, for example, to give the user a visually convincing sense of immersion, the device must cover as much of the human visual range as possible, which requires fitting it with lenses of a specific spherical curvature. But when a conventional image is projected into the eye through such curved lenses, the image is distorted and the eye has no way to establish its position in the virtual space; everything around the viewer is a warped image. To solve this problem, the image must first be pre-distorted: a specific algorithm generates the distorted image matching the lens, and after passing through the distortion lens these images appear normal to the eye, giving a true sense of position and a wide field of view. Lens manufacturers can grind lenses to given distortion parameters, and helmet manufacturers assemble these lenses into virtual reality helmets. For ordinary users and software developers, however, there is no instrument for measuring a lens's distortion parameters; short of asking the lens manufacturer, the parameters cannot be obtained directly, which greatly hinders the development and use of virtual reality software. And because the distortion parameters are unavailable, the depth of field of the helmet cannot be optimized either.
Summary of the invention
To remedy the inability of current virtual reality devices to optimize the depth of field, the present invention provides a method and device for optimizing the interpupillary distance and depth of field of a virtual reality helmet by laser.
The technical solution adopted by the present invention is as follows. A method for interpupillary distance and depth-of-field laser optimization of a virtual reality helmet comprises the following steps:
S1: establish, by the laser-zone setting method, the depth-of-field data for every combination of left-pupil and right-pupil positions;
S2: store the established depth-of-field data in the server that controls the virtual reality helmet;
S3: the server determines the corresponding pupil positions from the helmet's optical system and selects the data of the left-pupil and right-pupil combination closest to those positions for the depth-of-field display.
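Steps S1-S3 amount to a per-pupil-pair lookup table with nearest-neighbour selection. The sketch below illustrates that idea only; the pupil coordinates, table contents, and function names are assumptions for illustration, not values from the patent.

```python
import math

# Hypothetical store: depth-of-field data measured once per
# (left pupil, right pupil) position pair (S1) and kept on the
# server controlling the helmet (S2).
depth_tables = {
    # (left_pupil_mm, right_pupil_mm): per-combination depth data
    (-31.0, 31.0): "table_62mm",
    (-32.5, 32.5): "table_65mm",
    (-34.0, 34.0): "table_68mm",
}

def select_depth_table(left_pupil, right_pupil):
    """Step S3: return the depth data of the stored pupil pair
    closest to the pupil positions reported by the optical system."""
    def dist(key):
        lp, rp = key
        return math.hypot(lp - left_pupil, rp - right_pupil)
    return depth_tables[min(depth_tables, key=dist)]

print(select_depth_table(-32.0, 32.1))  # nearest stored pair is (-32.5, 32.5)
```

Storing discrete combinations and snapping to the nearest one avoids recomputing depth data for every possible interpupillary distance at runtime.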
Preferably, the laser-zone setting comprises the following steps:
S10: divide the image to be set into an accurate setting region and a general setting region;
S20: set the image of the accurate setting region with the laser-based depth-of-field setting method, and set the image of the general setting region with the calculation-based method;
S30: apply blurring to the image.
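The division in S10 can be pictured as a simple spatial classifier. The sketch below assumes a centred rectangular "accurate" window; the 30% central fraction is an illustrative assumption, as the patent does not specify how the boundary between the two regions is drawn.

```python
# Sketch of step S10: classify each pixel as belonging to the
# "accurate" region (set by laser, S20) or the "general" region
# (set by calculation). The central-window rule is an assumption.

def classify_region(x, y, width, height, central_fraction=0.3):
    """Return 'accurate' for pixels in the central window, else 'general'."""
    cx, cy = width / 2, height / 2
    half_w = width * central_fraction / 2
    half_h = height * central_fraction / 2
    inside = abs(x - cx) <= half_w and abs(y - cy) <= half_h
    return "accurate" if inside else "general"

print(classify_region(960, 540, 1920, 1080))  # centre pixel -> accurate
print(classify_region(10, 10, 1920, 1080))    # corner pixel -> general
```

Reserving the slower laser procedure for the central region and the cheap calculation for the periphery is what gives the efficiency gain claimed later in the description.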
Preferably, the calculation-based setting comprises the following steps:
S100: before the calculation, first measure the distortion parameters of the virtual reality helmet;
S101: store the distortion parameters of the helmet under test in the processing unit;
S102: calculate the angular position of the corresponding line of sight from the depth-of-field relation;
S103: back-calculate the position of the light spot on the screen from the helmet's distortion parameters and the angular position of the line of sight.
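Steps S102-S103 chain two mappings: depth to gaze angle (triangulation), then gaze angle back through the inverse of the distortion function to a screen position. The patent does not give the distortion model, so the sketch below assumes a simple cubic radial model and inverts it numerically; all parameters are illustrative.

```python
import math

def gaze_angle(eye_offset_mm, depth_mm):
    """S102: angle of the sight line toward a point straight ahead at
    depth_mm, seen from an eye displaced eye_offset_mm sideways."""
    return math.atan2(eye_offset_mm, depth_mm)

def distorted_angle(screen_pos, k1=0.2):
    """Assumed forward distortion model: screen position -> viewing angle.
    Monotone on [0, 2], which the bisection below relies on."""
    return screen_pos * (1 + k1 * screen_pos ** 2)

def screen_position(target_angle, k1=0.2, iters=40):
    """S103: invert the distortion model by bisection to find the
    screen position that is seen at the required gaze angle."""
    lo, hi = 0.0, 2.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if distorted_angle(mid, k1) < target_angle:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

theta = gaze_angle(32.0, 1000.0)  # ~0.032 rad for a 32 mm half-IPD at 1 m
pos = screen_position(theta)
print(abs(distorted_angle(pos) - theta) < 1e-9)  # True: inversion is consistent
```

Any monotone fitted distortion function could replace `distorted_angle` here; the back-calculation itself stays the same.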
Preferably, the laser-based setting mainly comprises the following steps:
S201: determine the image depth position to be adjusted, and calculate from the depth-of-field relation the corresponding positions D1 and D2 on the graduated scale;
S202: adjust the observation eyepieces so that the beams emitted by the lasers strike the graduated scale at positions D1 and D2 respectively;
S203: the display screen shows test image information according to a preset display rule;
S204: the processing unit processes the test image information observed by the observation unit and derives the display abscissa and ordinate corresponding to the image depth position to be adjusted.
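In S201, D1 and D2 are where the two sight lines, drawn from each eye through the virtual image point, cross the plane of the graduated scale. A minimal sketch of that intersection, with assumed distances (the patent gives no numeric geometry):

```python
# Sketch of step S201: extend each eye's sight line through the
# virtual point at the target depth until it meets the ruler plane.
# IPD, depth, and ruler distance are assumed illustrative values.

def ruler_position(eye_x_mm, depth_mm, ruler_depth_mm):
    """Where the sight line from an eye at (eye_x, 0) through the
    virtual point (0, depth) crosses the ruler plane at ruler_depth."""
    # Parametric line: x(t) = eye_x + t * (0 - eye_x), z(t) = t * depth
    t = ruler_depth_mm / depth_mm
    return eye_x_mm + t * (0.0 - eye_x_mm)

ipd = 64.0
depth = 2000.0   # virtual image 2 m ahead (assumed)
ruler = 300.0    # graduated scale 0.3 m ahead (assumed)
d1 = ruler_position(-ipd / 2, depth, ruler)  # left-eye laser target
d2 = ruler_position(+ipd / 2, depth, ruler)  # right-eye laser target
print(round(d1, 2), round(d2, 2))  # -27.2 27.2, symmetric about the midline
```

Aiming the two lasers at D1 and D2 makes their beams converge exactly where the virtual point should appear, which is why the intersection E later serves as the physical reference for the theoretical depth.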
Preferably, the display screen lights longitudinal strips column by column, one pixel column at a time, from its first end to its second end. When the image unit detects that the screen's display, after distortion, reaches the calibration position of the observation unit, the processing unit records the abscissa of the lit column; that abscissa is the position at which the screen correctly displays the depth of field.
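The column scan just described is a linear search over pixel columns with the image unit as the detector. A minimal simulation, in which the detector is a stand-in function rather than real camera input:

```python
# Minimal simulation of the column-scan calibration: the screen lights
# one pixel column at a time; the detector (standing in for the image
# unit) reports whether the distorted ray lands on the calibration
# mark, and the first matching column's abscissa is recorded.

def find_depth_column(num_columns, hits_calibration):
    """Scan columns from the first end to the second end and return
    the abscissa of the first column whose light reaches the
    calibration position, or None if no column matches."""
    for column in range(num_columns):
        if hits_calibration(column):
            return column
    return None

# Assumed detector: with this helmet's distortion, column 742 is the
# one whose ray exits at the calibrated viewing angle.
col = find_depth_column(1920, lambda c: c == 742)
print(col)  # 742
```

Lighting one column at a time makes the detection unambiguous: whatever the distortion does to the image, only one source column can be responsible for light arriving at the calibration position.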
A device for interpupillary distance and depth-of-field laser optimization of a virtual reality helmet is also provided, characterized by comprising a test unit, an observation unit, an image unit, and a processing unit. The test unit includes the virtual reality helmet to be set and a fixing structure; the helmet includes a display screen; the fixing structure includes a clamping mechanism, which can be opened to receive the helmet, and a position-limiting mechanism; the observation unit includes an interpupillary-distance track on which multiple left-pupil and multiple right-pupil positions are provided.
Preferably, the observation unit further comprises an observation eyepiece, an eyepiece track, and a motor, the eyepiece being movable along the track under the drive of the motor.
Preferably, the observation eyepiece is mounted on an eyepiece base plate, which can carry it laterally.
Preferably, the eyepiece base plate is connected to a slider via a connector; the slider can slide along the interpupillary-distance track, carrying the connector and the base plate with it.
Preferably, the slider can be fixed at each of the positions corresponding to the multiple left-pupil and right-pupil positions.
A further device for interpupillary distance and depth-of-field laser optimization of a virtual reality helmet comprises a test unit, an observation unit, an image unit, and a processing unit. The test unit includes the virtual reality helmet to be set and a fixing structure; the helmet includes a display screen; the fixing structure includes a clamping mechanism, which can be opened to receive the helmet, a position-limiting mechanism, and a graduated scale. The observation unit includes an observation eyepiece above which a laser is mounted; the beam emitted by the laser strikes the graduated scale and forms a light spot.
Preferably, the observation unit includes an observation eyepiece, an eyepiece track, and a motor, the eyepiece being movable along the track under the drive of the motor.
Preferably, the observation unit includes a movable plate, an observation eyepiece, a light shield, an eyepiece track, and a motor; the eyepiece can move along the track under the drive of the motor, the track is mounted on the movable plate, and the movable plate carries the eyepiece, motor, and track together.
Compared with the prior art, the present invention solves the problem of depth-of-field data changing after an interpupillary-distance adjustment by providing different observation positions and storing the corresponding data for each, ensuring correct depth-of-field display and the immersion of virtual reality. Providing multiple left-pupil and right-pupil positions adapts the depth-of-field display to different interpupillary distances, and the eyepiece base plate and slider let the observation eyepiece observe from multiple positions, easing depth-of-field setting at each. Dividing the image into regions brings the display closer to the scene the eye actually sees and enhances immersion; distinguishing an accurate setting region from a general setting region and applying a different depth-of-field scheme to each improves both the efficiency and the accuracy of the setting. Two setting methods, laser-based and calculation-based, make the setting more convenient. The invention simulates the formation of the depth of field by computing the corresponding positions on the graduated scale from the depth-of-field relation and aiming the emitted laser beams at them, so the depth position can be confirmed intuitively. Observation through the eyepiece coupled to the laser determines the display abscissa and ordinate corresponding to the image depth position to be adjusted, giving a simple and convenient setting method. A calibration position lets the processing unit analyze the image observed by the eyepiece and judge whether it meets the depth-of-field requirement. The viewing direction is calculated from the depth-of-field relation, giving the corresponding position on the graduated scale. When the theoretical light spot deviates from the actual one, the actual spot position can directly correct the theoretical one, ensuring that the depth of field meets the preset requirement. The combination of test unit, observation unit, image unit, and processing unit solves the depth-of-field verification problem simply and effectively. The observation unit is driven by a motor along the eyepiece track, so observations can be made from multiple angles and multiple observation points can be set conveniently.
Brief description of the drawings
The invention is further described below with reference to the drawings and embodiments, in which:
Fig. 1 is a block diagram of a first embodiment of the interpupillary distance and depth-of-field laser optimization device of the present invention;
Fig. 2 is a block diagram of the test unit of the first embodiment;
Fig. 3 is a schematic view of the first embodiment;
Fig. 4 is a side view of the first embodiment;
Fig. 5 is a schematic diagram of the depth-of-field display principle of a virtual reality helmet;
Fig. 6 is a structural schematic of a second embodiment of the device;
Fig. 7 is a laser irradiation schematic of the second embodiment;
Fig. 8 is a schematic view of the light shield of the second embodiment;
Fig. 9 is a schematic diagram of the interpupillary distance and depth-of-field laser optimization principle of the present invention;
Fig. 10 is a schematic diagram of the depth-of-field display effect after interpupillary-distance adjustment;
Fig. 11 is a schematic view of a third embodiment of the invention.
Detailed description of the embodiments
To remedy the inability of current virtual reality devices to optimize the depth of field, the present invention provides a method and device for optimizing the interpupillary distance and depth of field of a virtual reality helmet by laser.
For a clearer understanding of the technical features, objects, and effects of the present invention, embodiments of the invention are now described in detail with reference to the accompanying drawings.
Referring to Figs. 1-2, the interpupillary distance and depth-of-field laser optimization device of the present invention includes a test unit 1, an observation unit 2, an image unit 3, and a processing unit 4. The test unit 1 includes a lens under test 12 and a fixing structure 14, the lens 12 being detachably mounted on the structure 14. The image unit 3 is electrically connected with the observation unit 2, and the processing unit 4 with the image unit 3. The observation unit 2 observes the test unit 1 by capturing images: it photographs the test unit 1 and transmits the captured images to the image unit 3, which processes them and passes the result to the processing unit 4, which in turn processes the data transmitted by the image unit 3.
Figs. 3-4 show a first embodiment of the device. A display screen 16 is fixed on the fixing structure 14, which also carries a lens mount 18 for installing the lens under test 12. The observation unit 2 includes an observation eyepiece 23, an eyepiece track 25, an eyepiece motor 271, a lifting motor 272, and a lifting rod 273. The eyepiece 23 can translate along the track 25 and rotate to change its viewing angle under the drive of the eyepiece motor 271. The eyepiece 23 is connected to the lifting rod 273 and rises and falls with it; the rod 273 is lifted vertically under the control of the lifting motor 272. In use, the eyepiece motor 271 and the lifting motor 272 coordinate translation, rotation, and lifting to bring the eyepiece 23 to different observation positions, simulating the directions of sight in which the light emitted by the display screen 16 is observed.
To initially fit the distortion data, the fixing structure 14 is first removed, the lens under test 12 is installed at the lens mount 18, and the fixing structure 14 is then mounted on the base 21. The eyepiece motor 271 is reset to the initial position at one end of the eyepiece track 25; the preparation before detection is then complete. After the processing unit 4 receives the order to start detection, the eyepiece motor 271 and the lifting motor 272 drive the eyepiece 23 to the first observation point, and the processing unit 4 orders the display screen 16 to show detection information: the screen 16 lights longitudinal strips column by column, one pixel column at a time, from its first end to its second end. The first and second ends are opposite and may be designated as desired; by convention, viewed from the observation unit 2 toward the fixed test unit 1, the left end of the screen 16 is the first end and the right end the second. When the image unit 3 detects that the screen's display, after distortion, reaches the calibration position of the observation unit 2, the image unit 3 informs the processing unit 4, which records the current position of the observation unit 2 and the abscissa of the lit column on the screen 16.
The observation unit 2 then moves to the next observation point, the processing unit 4 orders the test unit 1 to show detection information, and the above detection process is repeated. The more observation points are set, the finer the lens measurement and the easier the data fitting. After all observation points have been detected, the processing unit 4 gathers all the correspondences and fits them against the distortion functions stored in its database. If the processing unit 4 successfully fits one or several of the stored distortion functions, it records and stores the fitting result; if it cannot fit the measured correspondences to any stored distortion function, it stores the correspondences as a table of discrete points.
Fig. 5 shows the depth-of-field principle of a virtual reality helmet. When an observer forms a visual image, the left and right eyes must image cooperatively. In Fig. 5, light emitted by the display screen 16 is refracted by the optical lenses and reaches the left and right eyes separately, so that the eyes perceive an image at A; the corresponding light spots on the screen 16 are A1 and A2. This is what creates the depth-of-field effect. To optimize the depth of field, the distortion parameters of the virtual reality helmet are first measured, using the same method as the optical-lens measurement of the first embodiment. The distortion function fitted from these measurements determines the correspondence between the viewing angle of the observation unit 2 and the light spots on the screen 16, i.e. between the observer's line of sight and the spots on the display screen 16.
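The Fig. 5 geometry can be made concrete with a small vergence calculation: the perceived depth of the fused point A is fixed by the interpupillary distance and the inward convergence angles of the two sight lines. The numbers below are illustrative assumptions, not values from the patent.

```python
import math

# Worked example of the depth-of-field principle in Fig. 5: two screen
# points A1 and A2, one per eye, are fused into a single virtual point
# A whose perceived depth is set by the convergence of the sight lines.

def perceived_depth(ipd_mm, angle_left_rad, angle_right_rad):
    """Depth of the fused point from the two inward gaze angles:
    the sight lines from eyes ipd_mm apart meet where their lateral
    offsets cancel, i.e. at ipd / (tan(aL) + tan(aR))."""
    return ipd_mm / (math.tan(angle_left_rad) + math.tan(angle_right_rad))

ipd = 64.0
# Symmetric convergence toward a point 2 m straight ahead (assumed)
theta = math.atan2(ipd / 2, 2000.0)
print(round(perceived_depth(ipd, theta, theta)))  # 2000
```

Moving A1 and A2 on the screen changes the gaze angles through the distortion function, which is exactly the lever the optimization procedure uses to place a virtual point at a chosen depth.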
Figs. 6-9 show a second embodiment of the invention, aimed mainly at optimizing the display depth of field of a virtual reality helmet. The second embodiment includes the virtual reality helmet 13 to be set and a fixing structure 14, the helmet 13 being detachably mounted on the structure 14. The fixing structure 14 includes a clamping mechanism 142, a position-limiting mechanism 141, a graduated scale 144, and a base plate 143. The clamping mechanism 142 includes a torsion spring (not shown): the mechanism can be opened, and after the helmet 13 is inserted, the spring closes it again, holding the helmet in place. The position-limiting mechanism 141 precisely constrains the position of the helmet 13, preventing it from sitting too far forward or backward and affecting the optimization result; the mechanism 141, the scale 144, and the clamp 142 are all fixed on the base plate 143. The observation unit 2 includes two groups of observation facilities, which observe the distorted images corresponding to the left and right eyes respectively. The observation unit 2 includes an observation eyepiece 23, a laser 24, an eyepiece track 25, a motor 27, and a light shield 29; the eyepiece 23 can rotate along the track 25 under the drive of the motor 27 to change its viewing angle. In use, the motor 27 can rotate about the virtual left observation point 26 and right observation point 28, bringing the eyepiece 23 to different observation positions and simulating the directions of sight in which the light emitted by the helmet 13 is observed. The laser 24 can emit a beam that strikes the graduated scale 144 and forms a light spot. A support rod 241 below the laser 24 can raise it vertically, preventing the beam from being blocked by the helmet 13 and failing to reach the scale 144. Fig. 8 shows the light shield 29 as an example: a through slit 291 of about 1 mm diameter and a certain depth is provided in the shield to enforce a thin-ray imaging condition, so that the eyepiece 23 accurately observes the light transmitted from the corresponding direction and light from other directions does not disturb the observation. The shield 29 is detachably mounted on the eyepiece 23.
The depth of field can be set either by calculation or by laser. For the calculation-based setting, the distortion parameters of the virtual reality helmet are measured first, before the depth-of-field display is set. The distortion function fitted from these measurements determines the correspondence between the viewing angle of the observation unit 2 and the light spots on the display screen 16, i.e. between the observer's line of sight and the spots on the screen. The angles of the left and right lines of sight are then calculated from the depth-of-field data, and the light-spot positions on the screen 16 corresponding to those angles are obtained from the distortion function. Repeating this process sets the display for every depth position in the image to be shown.
Calculation-based setting converts the configuration of the depth-of-field display into mathematical computation, providing a simple method whose advantage is that the display data for the depth of field is obtained quickly. However, mathematical computation inevitably introduces error, so it does not always meet the requirements of high-definition display and accurate depth-of-field rendering, and the effect of the depth-of-field setting cannot be seen intuitively. For a more accurate depth-of-field display effect, laser-based setting can be used.
For laser-based setting, the angles of the left- and right-eye sight lines are calculated from the depth-of-field data, and the scale positions D1 and D2 corresponding to those angles are computed. Alternatively, the virtual gaze directions can be simulated, and D1 and D2 determined from the intersections of the extended sight lines with the graduated scale 144. The left and right observation eyepieces 23 are adjusted so that the lasers 24 above them strike positions D1 and D2 on the graduated scale 144 respectively; the intersection point E of the two beams then lies on the same plumb line as the theoretical depth-of-field position. The processing unit 4 orders the display screen 16 to show detection information. First, the display screen 16 shows a longitudinal line of light, one pixel column at a time, from its first end to its second end. The first and second ends are opposite and can be designated as needed; normally, viewed from the observation unit 2 toward the fixed test unit 1, the left end of the display screen 16 is the first end and the right end is the second end. When the image unit 3 detects that the displayed information, after distortion, reaches the calibration position of the observation unit 2, the image unit 3 transmits this to the processing unit 4, which records the current abscissa of the light on the display screen 16; this abscissa is the position at which the display screen 16 correctly displays the depth of field. Because the display screen 16 displays separately on its left and right halves, corresponding to the images observed by the left and right eyes, there are two abscissa positions for the same depth of field, one for each observation eyepiece 23. After the abscissa of the depth of field has been measured, the display screen 16 shows a transverse line of light, one pixel row at a time, from its upper end to its lower end. When the image unit 3 detects that the displayed information, after distortion, reaches the calibration position of the observation unit 2, it transmits this to the processing unit 4, which records the current ordinate of the light on the display screen 16; this ordinate is the position at which the display screen 16 correctly displays the depth of field, and likewise there are two ordinate positions for the same depth of field. The display abscissa and ordinate corresponding to the depth of field are thus determined. The observation unit 2 then moves to the next observation point, the processing unit 4 orders the test unit 1 to show detection information, and the detection process is repeated until every depth of field that needs measuring has been measured. The calibration position of the observation unit 2 can be designated as needed; it is generally set at the center of the image captured by the observation unit 2. For convenience of measurement, a bullseye of fixed width can be set around this position; when the spot image falls within the bullseye, the displayed information of the display screen 16 is considered to have reached, after distortion, the calibration position of the observation unit 2.
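The column-then-row sweep can be sketched as below; `show_column`, `show_row` and `hits_calibration` are assumed callbacks standing in for the display screen 16 and the image unit 3, not names from the patent:

```python
def sweep(light_line, hits_calibration, count):
    """Light one line of the display at a time and record the indices at
    which the distorted spot lands on the calibration position."""
    hits = []
    for i in range(count):
        light_line(i)           # display screen lights line i
        if hits_calibration():  # image unit sees the spot in the bullseye
            hits.append(i)
    return hits

def find_display_coordinates(show_column, show_row, hits_calibration,
                             n_cols, n_rows):
    """Column sweep yields the correct display abscissae for one depth of
    field; row sweep yields the ordinates."""
    xs = sweep(show_column, hits_calibration, n_cols)
    ys = sweep(show_row, hits_calibration, n_rows)
    return xs, ys
```

Each sweep returns two indices for the same depth of field because the left and right halves of the display screen each produce a spot, one per observation eyepiece.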
The human eye is selective when observing a displayed image: when both eyes focus on some object, other content at a different depth of field becomes relatively blurred, a behavior humans formed during evolution. Therefore, if the depth of field were set for the entire image by the laser method, the whole picture would be uniformly sharp, producing a false sensation and harming the immersion of virtual reality. When setting the depth of field, the images at some visual edge positions must be deliberately blurred to form a relatively realistic scene. This is handled by a zoned scheme combining laser setting and calculation setting. First, when the depth of field is to be set in an image, the image is divided into an accurate setting region and a general setting region. The division criterion is chiefly the main display object and the display objects whose depth is close to it: because the main display object usually receives most of the observer's attention, the image region it covers is the accurate setting region. Owing to the depth-of-field relation, display objects close in depth to the main object also appear relatively sharp, so they too belong to the accurate display region. To meet more precise display requirements, accurate display regions of different levels can be defined, which developers may configure as required. After the accurate and general setting regions have been divided, the accurate setting region has its depth of field set by the laser method and the general setting region by the calculation method. This zoned scheme both saves time and gives the depth-of-field display in the accurate setting region higher display quality. After the depth of field is set, the image is blurred; the degree of blur can be set according to the levels of the accurate setting regions. Once the blurring is complete, the processing of the image is finished.
When a virtual reality helmet is used, the interpupillary distance of each observer is not exactly the same. So that every observer attains the best viewing effect, many virtual reality helmets add an interpupillary-distance adjustment function: the user adjusts the positions of the optical lenses and the display screen 16, automatically or manually, to match his or her own interpupillary distance. However, this often changes the depth of field, distorting the image, reducing the immersion of virtual reality and spoiling the overall experience.
Referring to Fig. 10, Fig. 10 schematically shows the true depth of field of the image when the user's interpupillary distance changes. The pupils of one observer correspond to positions D1 and D2 in the figure; after adjustment according to the present invention, the depth of field displays clearly and correctly, and this observer clearly sees the image at position A with a correct depth of field. When a new observer adjusts the optical system of the virtual reality helmet to the positions of his own pupils, corresponding to D3 and D4 in the figure, the image he observes must follow the dotted lines in the figure for the depth of field at position A to display correctly. But because the optical system has been changed, and the optical lenses are not necessarily all linearly refractive, how to adjust the image depth so that the observer perceives it correctly becomes a difficult problem; in this situation the depth of field of the image the new observer sees is almost certainly incorrect. Adjusting the optical system purely according to the observer's interpupillary distance therefore cannot guarantee the viewing effect.
Referring to Fig. 11, the third embodiment of the present invention provides a method and device for setting the displayed depth of field of a virtual reality helmet with adjustable interpupillary distance. The third embodiment comprises a virtual reality helmet 13 to be set and a fixing structure 14; the helmet 13 is detachably mounted in the fixing structure 14. The fixing structure 14 comprises a clamping mechanism 142, a limiting mechanism 141 and a base plate 143. The clamping mechanism 142 contains a torsion spring (not shown) and can be opened; after the helmet 13 is put in, the torsion spring closes the clamping mechanism 142, fixing the helmet 13. The limiting mechanism 141 precisely constrains the position of the helmet 13, preventing it from sitting too far forward or backward and affecting the optimization result; the limiting mechanism 141 and the clamping mechanism 142 are fixed on the base plate 143. The observation unit 2 comprises two groups of observation devices, which observe the distorted images corresponding to the left eye and the right eye respectively. The observation unit 2 comprises an observation eyepiece 23, an eyepiece track 25, a motor 27 and a shade 29; driven by the motor 27, the observation eyepiece 23 rotates along the eyepiece track 25 to change the viewing angle. In use, the motor 27 rotates about the virtual left observation point 26 and right observation point 28, bringing the observation eyepiece 23 to different observation positions so that it observes, along a simulated gaze direction, the light emitted by the helmet 13 to be set. Fig. 7 shows the shade 29 as an example: a slit 291 of about 1 mm in diameter passes through the shade 29 and has a certain depth, ensuring a thin-ray imaging condition so that the observation eyepiece 23 accurately observes the light transmitted from the corresponding direction, free from the influence of light from other directions. The shade 29 is detachably mounted on the observation eyepiece 23. The observation unit 2 further comprises an interpupillary-distance track 24, on which multiple left-eye pupil positions 260 and multiple right-eye pupil positions 280 are set; their numbers can be configured as required, and here we take five of each. The observation eyepiece 23 is mounted on an eyepiece base plate 233, which can carry it in lateral movement. The eyepiece base plate 233 is connected via a connector 232 to a sliding part 231, which slides along the interpupillary-distance track 24, carrying the connector 232 and the eyepiece base plate 233 with it. The sliding part 231 can be fixed at the positions corresponding to each of the left-eye pupil positions 260 and right-eye pupil positions 280, which is convenient for the observation eyepiece 23 to perform the depth-of-field setting.
When depth-of-field setting begins, the observation eyepiece 23 for the left eye is carried by its eyepiece base plate 233 to the leftmost observation position, left-eye pupil position 260, and the observation eyepiece 23 for the right eye is carried by its eyepiece base plate 233 to the leftmost observation position, right-eye pupil position 280; meanwhile the optical system is adjusted as though the observer's two pupils corresponded to the first left-eye pupil position 260 and the first right-eye pupil position 280. The depth of field is set using the method described in the second embodiment, and the setting results are recorded in the processing unit 4. After everything is set, the left eyepiece base plate 233 is kept still while the right eyepiece base plate moves to the second observation position from the left, right-eye pupil position 280; the optical system is adjusted as though the observer's pupils corresponded to the first left-eye pupil position 260 and the second right-eye pupil position 280, the depth of field is again set by the method of the second embodiment, and the results are recorded in the processing unit 4. This procedure is repeated until every combination of left-eye pupil position 260 and right-eye pupil position 280 has been set, with all results stored in the processing unit 4. In this way, for any chosen left-eye pupil position 260 and right-eye pupil position 280, the corresponding depth-of-field data can be found in the processing unit 4. The data stored in the processing unit 4 is then stored in the server that controls the virtual reality helmet; after the user adjusts the helmet's optical system, the server determines the pupil positions corresponding to the optical system and selects the data of the combination of left-eye pupil position 260 and right-eye pupil position 280 closest to those positions for depth-of-field display.
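The server's selection of the closest stored combination amounts to a nearest-neighbour lookup; the dictionary keyed by (left, right) pupil coordinates is an assumed data layout rather than one specified by the patent:

```python
def nearest_depth_data(left_pupil, right_pupil, stored):
    """Return the depth-of-field data of the calibrated (left, right)
    pupil-position pair closest to the user's adjusted optics.
    `stored` maps (left_mm, right_mm) -> depth-of-field data."""
    def sq_dist(key):
        lp, rp = key
        return (lp - left_pupil) ** 2 + (rp - right_pupil) ** 2
    return stored[min(stored, key=sq_dist)]
```

With five calibrated positions per eye, as in the embodiment, `stored` would hold twenty-five combinations, and any user adjustment falls back to the nearest of them.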
Compared with the prior art, the present invention, by setting different observation positions and storing the data for each, provides a method that solves the problem of depth-of-field data changing after interpupillary-distance adjustment, ensuring correct depth-of-field display and the immersion of virtual reality. Setting multiple left-eye pupil positions 260 and right-eye pupil positions 280 accommodates depth-of-field display for different interpupillary distances in a variety of situations; the eyepiece base plate 233 and the sliding part 231 let the observation eyepiece 23 observe conveniently at multiple positions, facilitating depth-of-field setting at different positions. Dividing the image into regions makes the display closer to the scene the human eye actually sees, enhancing the immersion of virtual reality; dividing it into an accurate setting region and a general setting region, and applying a different depth-of-field setting scheme in each, effectively improves both the efficiency and the accuracy of the setting. Two depth-of-field setting methods, laser setting and calculation setting, are provided, making the setting more convenient. The invention computes the corresponding positions on the graduated scale 144 from the depth-of-field relation and simulates the formation of the depth of field by firing the laser 24 at those positions, so the depth-of-field position can be confirmed intuitively. By observing through the observation eyepiece 23 connected to the laser 24, the display abscissa and ordinate corresponding to the image depth position to be adjusted are determined, providing a simple and convenient method of setting the depth of field. Setting a calibration position lets the processing unit judge the image observed by the observation eyepiece 23 and decide whether it meets the depth-of-field requirement. The gaze directions are calculated from the depth-of-field relation and the corresponding positions on the graduated scale 144 are computed; when the theoretical spot and the actual spot differ, the actual spot position can be corrected directly with the theoretical one, ensuring the depth of field meets the preset requirement. The combination of the test unit 1, observation unit 2, image unit 3 and processing unit 4 solves the depth-of-field verification problem effectively and simply. The motor 27 drives the observation unit 2 along the eyepiece track 25, conveniently allowing observation from multiple angles and the setting of multiple observation points.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the specific embodiments described, which are illustrative rather than restrictive. Under the enlightenment of the present invention, those of ordinary skill in the art can devise many further forms without departing from the concept of the invention and the scope of the claimed protection, and all of these fall within the protection of the present invention.

Claims (10)

1. A method of interpupillary-distance and depth-of-field laser optimization for a virtual reality helmet, characterized by comprising the following steps:
S1: establishing, by the zoned laser setting method, the depth-of-field data for all combinations of left-eye pupil position and right-eye pupil position;
S2: storing the set depth-of-field data in a server controlling the virtual reality helmet;
S3: the server controlling the virtual reality helmet determining the corresponding pupil positions from the optical system of the virtual reality helmet, and selecting the data of the combination of the left-eye pupil position and the right-eye pupil position closest to those pupil positions for depth-of-field display.
2. The method of interpupillary-distance and depth-of-field laser optimization for a virtual reality helmet according to claim 1, characterized in that the zoned laser setting comprises the following steps:
S10: dividing the image to be set into an accurate setting region and a general setting region;
S20: setting the image of the accurate setting region by the laser depth-of-field setting method, and setting the image of the general setting region by the calculation depth-of-field setting method;
S30: blurring the image.
3. The method of interpupillary-distance and depth-of-field laser optimization for a virtual reality helmet according to claim 2, characterized in that the calculation setting comprises the following steps:
S100: measuring the distortion parameters of the virtual reality helmet before the calculation setting;
S101: storing the distortion parameters of the virtual reality helmet to be set in a processing unit;
S102: calculating the angular position of the corresponding sight line from the depth-of-field relation;
S103: back-calculating the position of the light spot on the screen from the distortion parameters of the virtual reality helmet under measurement and the angular position of the sight line.
4. The method of interpupillary-distance and depth-of-field laser optimization for a virtual reality helmet according to claim 2, characterized in that the laser setting mainly comprises the following steps:
S201: determining the image depth position to be adjusted, and calculating the corresponding positions D1 and D2 on the graduated scale from the depth-of-field relation;
S202: adjusting the observation eyepieces so that the lasers they fire strike positions D1 and D2 on the graduated scale respectively;
S203: the display screen showing test image information according to a preset display rule;
S204: the processing unit processing the test image information observed by the observation unit, and obtaining the display abscissa and display ordinate corresponding to the image depth position to be adjusted.
5. The method of interpupillary-distance and depth-of-field laser optimization for a virtual reality helmet according to claim 4, characterized in that the display screen shows a longitudinal line of light, one pixel column at a time, from a first end of the display screen to a second end; when an image unit detects that the displayed information of the display screen, after distortion, reaches a calibration position of the observation unit, the processing unit records the current abscissa of the light on the display screen, which abscissa is the position at which the display screen correctly displays the depth of field.
6. A device for interpupillary-distance and depth-of-field laser optimization of a virtual reality helmet, setting the depth of field by the method of claim 1, characterized by comprising a test unit, an observation unit, an image unit and a processing unit; the test unit comprises a virtual reality helmet to be set and a fixing structure; the virtual reality helmet to be set comprises a display screen; the fixing structure comprises a clamping tool and a limiting mechanism, the clamping tool being openable to admit the virtual reality helmet; the observation unit comprises an interpupillary-distance track on which multiple left-eye pupil positions and multiple right-eye pupil positions are set.
7. The device for interpupillary-distance and depth-of-field laser optimization of a virtual reality helmet according to claim 6, characterized in that the observation unit further comprises an observation eyepiece, an eyepiece track and a motor, the observation eyepiece being movable along the eyepiece track driven by the motor.
8. The device for interpupillary-distance and depth-of-field laser optimization of a virtual reality helmet according to claim 7, characterized in that the observation eyepiece is mounted on an eyepiece base plate and can be carried by the eyepiece base plate in lateral movement.
9. The device for interpupillary-distance and depth-of-field laser optimization of a virtual reality helmet according to claim 8, characterized in that the eyepiece base plate is connected via a connector to a sliding part, the sliding part being able to slide on the interpupillary-distance track, carrying the connector and the eyepiece base plate with it.
10. The device for interpupillary-distance and depth-of-field laser optimization of a virtual reality helmet according to claim 9, characterized in that the sliding part can be fixed at positions corresponding to each of the multiple left-eye pupil positions and right-eye pupil positions.
CN201710544194.5A 2016-11-30 2017-07-05 The method and device that virtual implementing helmet interpupillary distance optimizes with depth of field laser Pending CN107329265A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2016213083149 2016-11-30
CN201621308314 2016-11-30

Publications (1)

Publication Number Publication Date
CN107329265A true CN107329265A (en) 2017-11-07

Family

ID=60100336

Family Applications (35)

Application Number Title Priority Date Filing Date
CN201710543865.6A Pending CN107702894A (en) 2016-11-30 2017-07-05 The method and device of virtual reality eyeglass dispersion detection
CN201710544198.3A Pending CN107544149A (en) 2016-11-30 2017-07-05 Region depth of field method to set up and device based on image scale
CN201710544197.9A Pending CN107505708A (en) 2016-11-30 2017-07-05 Virtual implementing helmet depth of field method to set up and device based on image scale
CN201710544195.XA Pending CN107329266A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet depth of field region is set
CN201710544211.5A Pending CN107300775A (en) 2016-11-30 2017-07-05 The depth of field based on image scale sets the method and device of optimization
CN201710543941.3A Pending CN107390364A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet depth of field laser is set
CN201710543937.7A Pending CN107490861A (en) 2016-11-30 2017-07-05 The method and device of virtual implementing helmet depth of field optimization display
CN201710543924.XA Pending CN107357037A (en) 2016-11-30 2017-07-05 The method and device of virtual implementing helmet laser assisted depth of field optimization
CN201710543920.1A Pending CN108121068A (en) 2016-11-30 2017-07-05 Virtual implementing helmet depth of field laser sets the method and device of optimization display
CN201710543918.4A Pending CN107687936A (en) 2016-11-30 2017-07-05 The method and device detected based on virtual implementing helmet dispersion corresponding to scale
CN201710544189.4A Withdrawn CN107357039A (en) 2016-11-30 2017-07-05 Virtual reality eyeglass distortion checking and the method and device of adjustment
CN201710544192.6A Pending CN107544148A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet depth of field laser based on image scale is set
CN201710544212.XA Pending CN107300776A (en) 2016-11-30 2017-07-05 Interpupillary distance depth of field method to set up and device based on image scale
CN201710543923.5A Pending CN107688387A (en) 2016-11-30 2017-07-05 The method and device of virtual implementing helmet dispersion detection
CN201710544200.7A Pending CN107479188A (en) 2016-11-30 2017-07-05 The method and device of virtual implementing helmet depth of field optimization
CN201710543919.9A Pending CN107422479A (en) 2016-11-30 2017-07-05 Based on virtual implementing helmet depth of field method to set up and device corresponding to scale
CN201710543936.2A Pending CN107462991A (en) 2016-11-30 2017-07-05 The method and device that the virtual implementing helmet depth of field is set
CN201710543939.6A Pending CN107526167A (en) 2016-11-30 2017-07-05 The method and device optimized based on depth of field laser corresponding to scale
CN201710544202.6A Pending CN107402448A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet interpupillary distance is set with depth of field laser
CN201710544203.0A Pending CN107340595A (en) 2016-11-30 2017-07-05 The method and device set based on virtual implementing helmet depth of field region laser corresponding to scale
CN201710544194.5A Pending CN107329265A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet interpupillary distance optimizes with depth of field laser
CN201710544210.0A Pending CN107544151A (en) 2016-11-30 2017-07-05 Based on virtual implementing helmet depth of field zone approach and device corresponding to scale
CN201710543938.1A Pending CN107357038A (en) 2016-11-30 2017-07-05 Virtual implementing helmet interpupillary distance and the method and device of depth of field adjustment
CN201710543942.8A Pending CN107329264A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet interpupillary distance is set with the depth of field
CN201710544204.5A Withdrawn CN107464221A (en) 2016-11-30 2017-07-05 Based on the method and device of virtual reality eyeglass distortion checking and adjustment corresponding to scale
CN201710543925.4A Pending CN107329263A (en) 2016-11-30 2017-07-05 The method and device that the virtual implementing helmet depth of field is shown
CN201710544201.1A Pending CN107291246A (en) 2016-11-30 2017-07-05 The method and device of virtual implementing helmet depth of field measurement based on image scale
CN201710544196.4A Pending CN107315251A (en) 2016-11-30 2017-07-05 Based on the corresponding virtual implementing helmet interpupillary distance of scale and depth of field method to set up and device
CN201710543921.6A Pending CN107300774A (en) 2016-11-30 2017-07-05 Method and device based on the corresponding virtual implementing helmet distortion checking of scale and adjustment
CN201710543922.0A Pending CN107462400A (en) 2016-11-30 2017-07-05 The method and device detected based on virtual reality eyeglass dispersion corresponding to scale
CN201710543944.7A Pending CN107544147A (en) 2016-11-30 2017-07-05 The method and device that depth of field laser based on image scale is set
CN201710544208.3A Pending CN107290854A (en) 2016-11-30 2017-07-05 Virtual implementing helmet interpupillary distance optimizes the method and device of display with the depth of field
CN201710544199.8A Pending CN107544150A (en) 2016-11-30 2017-07-05 The method and device set based on virtual implementing helmet depth of field laser corresponding to scale
CN201710544213.4A Withdrawn CN107478412A (en) 2016-11-30 2017-07-05 Virtual implementing helmet distortion checking and the method and device of adjustment
CN201710544205.XA Pending CN107315252A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet depth of field region laser is set

Family Applications Before (20)

Application Number Title Priority Date Filing Date
CN201710543865.6A Pending CN107702894A (en) 2016-11-30 2017-07-05 The method and device of virtual reality eyeglass dispersion detection
CN201710544198.3A Pending CN107544149A (en) 2016-11-30 2017-07-05 Region depth of field method to set up and device based on image scale
CN201710544197.9A Pending CN107505708A (en) 2016-11-30 2017-07-05 Virtual implementing helmet depth of field method to set up and device based on image scale
CN201710544195.XA Pending CN107329266A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet depth of field region is set
CN201710544211.5A Pending CN107300775A (en) 2016-11-30 2017-07-05 Method and device for depth-of-field setting optimization based on an image scale
CN201710543941.3A Pending CN107390364A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet depth-of-field laser setting
CN201710543937.7A Pending CN107490861A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet depth-of-field optimized display
CN201710543924.XA Pending CN107357037A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet laser-assisted depth-of-field optimization
CN201710543920.1A Pending CN108121068A (en) 2016-11-30 2017-07-05 Method and device for optimized display of virtual reality helmet depth-of-field laser settings
CN201710543918.4A Pending CN107687936A (en) 2016-11-30 2017-07-05 Method and device for scale-based virtual reality helmet dispersion detection
CN201710544189.4A Withdrawn CN107357039A (en) 2016-11-30 2017-07-05 Method and device for virtual reality lens distortion detection and adjustment
CN201710544192.6A Pending CN107544148A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet depth-of-field laser setting based on an image scale
CN201710544212.XA Pending CN107300776A (en) 2016-11-30 2017-07-05 Method and device for interpupillary-distance depth-of-field setting based on an image scale
CN201710543923.5A Pending CN107688387A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet dispersion detection
CN201710544200.7A Pending CN107479188A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet depth-of-field optimization
CN201710543919.9A Pending CN107422479A (en) 2016-11-30 2017-07-05 Scale-based virtual reality helmet depth-of-field setting method and device
CN201710543936.2A Pending CN107462991A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet depth-of-field setting
CN201710543939.6A Pending CN107526167A (en) 2016-11-30 2017-07-05 Method and device for scale-based depth-of-field laser optimization
CN201710544202.6A Pending CN107402448A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet interpupillary-distance and depth-of-field laser setting
CN201710544203.0A Pending CN107340595A (en) 2016-11-30 2017-07-05 Method and device for scale-based virtual reality helmet depth-of-field region laser setting

Family Applications After (14)

Application Number Title Priority Date Filing Date
CN201710544210.0A Pending CN107544151A (en) 2016-11-30 2017-07-05 Scale-based virtual reality helmet depth-of-field region method and device
CN201710543938.1A Pending CN107357038A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet interpupillary-distance and depth-of-field adjustment
CN201710543942.8A Pending CN107329264A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet interpupillary-distance and depth-of-field setting
CN201710544204.5A Withdrawn CN107464221A (en) 2016-11-30 2017-07-05 Scale-based method and device for virtual reality lens distortion detection and adjustment
CN201710543925.4A Pending CN107329263A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet depth-of-field display
CN201710544201.1A Pending CN107291246A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet depth-of-field measurement based on an image scale
CN201710544196.4A Pending CN107315251A (en) 2016-11-30 2017-07-05 Scale-based virtual reality helmet interpupillary-distance and depth-of-field setting method and device
CN201710543921.6A Pending CN107300774A (en) 2016-11-30 2017-07-05 Scale-based method and device for virtual reality helmet distortion detection and adjustment
CN201710543922.0A Pending CN107462400A (en) 2016-11-30 2017-07-05 Scale-based method and device for virtual reality lens dispersion detection
CN201710543944.7A Pending CN107544147A (en) 2016-11-30 2017-07-05 Method and device for depth-of-field laser setting based on an image scale
CN201710544208.3A Pending CN107290854A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet interpupillary-distance and depth-of-field optimized display
CN201710544199.8A Pending CN107544150A (en) 2016-11-30 2017-07-05 Scale-based method and device for virtual reality helmet depth-of-field laser setting
CN201710544213.4A Withdrawn CN107478412A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet distortion detection and adjustment
CN201710544205.XA Pending CN107315252A (en) 2016-11-30 2017-07-05 Method and device for virtual reality helmet depth-of-field region laser setting

Country Status (1)

Country Link
CN (35) CN107702894A (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108008535A (en) * 2017-11-17 2018-05-08 国网山东省电力公司 Augmented reality device
CN107977076B (en) * 2017-11-17 2018-11-27 国网山东省电力公司泰安供电公司 Wearable virtual reality device
CN107942517B (en) * 2018-01-02 2020-03-06 京东方科技集团股份有限公司 VR head-mounted display device and display method thereof
CN108303798B (en) * 2018-01-15 2020-10-09 海信视像科技股份有限公司 Virtual reality helmet, virtual reality helmet interpupillary distance adjusting method and device
CN108426702B (en) * 2018-01-19 2020-06-02 华勤通讯技术有限公司 Dispersion measurement device and method of augmented reality equipment
CN108399606B (en) * 2018-02-02 2020-06-26 北京奇艺世纪科技有限公司 Image adjusting method and device
CN108510549B (en) * 2018-03-27 2022-01-04 京东方科技集团股份有限公司 Distortion parameter measuring method, device and system of virtual reality equipment
CN109186957B (en) * 2018-09-17 2024-05-10 浙江晶正光电科技有限公司 High-precision automatic detection equipment for diffusion angle of laser diffusion sheet
CN109557669B (en) * 2018-11-26 2021-10-12 歌尔光学科技有限公司 Method for determining image drift amount of head-mounted display equipment and head-mounted display equipment
US11513346B2 (en) * 2019-05-24 2022-11-29 Beijing Boe Optoelectronics Technology Co., Ltd. Method and apparatus for controlling virtual reality display device
CN110320009A (en) * 2019-06-25 2019-10-11 歌尔股份有限公司 Optical property detection method and detection device
CN113822104B (en) * 2020-07-07 2023-11-03 湖北亿立能科技股份有限公司 Artificial intelligence surface of water detecting system based on virtual scale of many candidates
CN113768240A (en) * 2021-08-30 2021-12-10 航宇救生装备有限公司 Method for adjusting imaging position of display protection helmet
CN114089508B (en) * 2022-01-19 2022-05-03 茂莱(南京)仪器有限公司 Wide-angle projection lens for detecting optical waveguide AR lens
DE102022207774A1 (en) 2022-07-28 2024-02-08 Robert Bosch Gesellschaft mit beschränkter Haftung Method for an automated calibration of a virtual retinal display for data glasses, calibration device and virtual retinal display
CN117214025B (en) * 2023-11-08 2024-01-12 广东德鑫体育产业有限公司 Helmet lens detection device

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5619373A (en) * 1995-06-07 1997-04-08 Hasbro, Inc. Optical system for a head mounted display
CN102967473B (en) * 2012-11-30 2015-04-29 奇瑞汽车股份有限公司 Driver front-view measuring device
US10228562B2 (en) * 2014-02-21 2019-03-12 Sony Interactive Entertainment Inc. Realtime lens aberration correction from eye tracking
EP3200148B1 (en) * 2014-10-31 2019-08-28 Huawei Technologies Co., Ltd. Image processing method and device
CN104808342B (en) * 2015-04-30 2017-12-12 杭州映墨科技有限公司 Optical lens structure of a wearable virtual reality helmet for presenting three-dimensional scenes
US10271042B2 (en) * 2015-05-29 2019-04-23 Seeing Machines Limited Calibration of a head mounted eye tracking system
CN105979243A (en) * 2015-12-01 2016-09-28 乐视致新电子科技(天津)有限公司 Processing method and device for displaying stereo images
CN105979252A (en) * 2015-12-03 2016-09-28 乐视致新电子科技(天津)有限公司 Test method and device
CN105867606A (en) * 2015-12-15 2016-08-17 乐视致新电子科技(天津)有限公司 Image acquisition method and apparatus in virtual reality helmet, and virtual reality helmet
CN105869142A (en) * 2015-12-21 2016-08-17 乐视致新电子科技(天津)有限公司 Method and device for testing imaging distortion of virtual reality helmets
CN105787980B (en) * 2016-03-17 2018-12-25 北京牡丹视源电子有限责任公司 Method and system for detecting the field of view of a virtual reality display device
CN106028013A (en) * 2016-04-28 2016-10-12 努比亚技术有限公司 Wearable device, display device, and display output adjusting method
CN105791789B (en) * 2016-04-28 2019-03-19 努比亚技术有限公司 Head-mounted device, display device, and method for automatically adjusting display output
CN106441212B (en) * 2016-09-18 2020-07-28 京东方科技集团股份有限公司 Device and method for detecting field angle of optical instrument
CN106527733A (en) * 2016-11-30 2017-03-22 深圳市虚拟现实技术有限公司 Method and device for virtual reality helmet distortion fitting detection
CN106651954A (en) * 2016-12-27 2017-05-10 天津科技大学 Laser simulation method and device for a spatial line-of-sight reference

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
高鸿锦 (Gao Hongjin) et al.: "Complete Manual for Using Digital Camera Interchangeable Lenses" ("数码照相机可换镜头使用完全手册"), pages: 149 - 153 *

Also Published As

Publication number Publication date
CN107390364A (en) 2017-11-24
CN107422479A (en) 2017-12-01
CN107300776A (en) 2017-10-27
CN107291246A (en) 2017-10-24
CN107340595A (en) 2017-11-10
CN107300774A (en) 2017-10-27
CN107300775A (en) 2017-10-27
CN107329264A (en) 2017-11-07
CN107544147A (en) 2018-01-05
CN108121068A (en) 2018-06-05
CN107462991A (en) 2017-12-12
CN107544150A (en) 2018-01-05
CN107478412A (en) 2017-12-15
CN107479188A (en) 2017-12-15
CN107357039A (en) 2017-11-17
CN107315252A (en) 2017-11-03
CN107462400A (en) 2017-12-12
CN107687936A (en) 2018-02-13
CN107290854A (en) 2017-10-24
CN107329263A (en) 2017-11-07
CN107329266A (en) 2017-11-07
CN107544151A (en) 2018-01-05
CN107490861A (en) 2017-12-19
CN107464221A (en) 2017-12-12
CN107357037A (en) 2017-11-17
CN107526167A (en) 2017-12-29
CN107505708A (en) 2017-12-22
CN107688387A (en) 2018-02-13
CN107402448A (en) 2017-11-28
CN107544148A (en) 2018-01-05
CN107357038A (en) 2017-11-17
CN107315251A (en) 2017-11-03
CN107544149A (en) 2018-01-05
CN107702894A (en) 2018-02-16

Similar Documents

Publication Publication Date Title
CN107329265A (en) Method and device for virtual reality helmet interpupillary-distance and depth-of-field laser optimization
Rolland et al. Towards quantifying depth and size perception in virtual environments
JP5814345B2 (en) Eyepiece tool for refractive evaluation
CN105828699B (en) For measuring the device and method of subjective dioptric
US10993612B1 (en) Systems and methods for visual field testing in head-mounted displays
CN106441822A (en) Virtual reality headset distortion detection method and device
CN106644404A (en) Whole-machine distortion detection method and device for a virtual reality helmet
US20220125297A1 (en) Device calibration via a projective transform matrix
CN106527733A (en) Method and device for virtual reality helmet distortion fitting detection
CN209417442U (en) Adjustment and test device for the optics module in a binocular helmet
US20220125298A1 (en) Active calibration of head-mounted displays
EP4236755A1 (en) Systems and methods for visual field testing in head-mounted displays

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20171107