CN107544151A - Scale-based method and device for setting the depth-of-field regions of a virtual reality helmet - Google Patents


Info

Publication number
CN107544151A
CN107544151A (application CN201710544210.0A)
Authority
CN
China
Prior art keywords
scale
depth
observation
helmet
virtual reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710544210.0A
Other languages
Chinese (zh)
Inventor
姜燕冰 (Jiang Yanbing)
党少军 (Dang Shaojun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Virtual Reality Technology Co Ltd
Original Assignee
Shenzhen Virtual Reality Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Virtual Reality Technology Co Ltd
Publication of CN107544151A


Classifications

    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G01M11/00 Testing of optical apparatus; testing structures by optical methods not otherwise provided for
    • G01M11/0242 Testing optical properties by measuring geometrical properties or aberrations
    • G01M11/0257 Testing optical properties by analyzing the image formed by the object to be tested
    • G02B27/0012 Optical design, e.g. procedures, algorithms, optimisation routines
    • G02B27/0101 Head-up displays characterised by optical features
    • G02B27/017 Head-up displays, head mounted
    • G02B27/0172 Head mounted, characterised by optical features
    • G02B27/0176 Head mounted, characterised by mechanical features
    • G06T5/80 Geometric correction (image enhancement or restoration)
    • G02B2027/011 Head-up displays comprising a device for correcting geometrical aberrations, distortion
    • G02B2027/0127 Head-up displays comprising devices increasing the depth of field
    • G02B2027/014 Head-up displays comprising information/image processing systems
    • G02B2027/0141 Head-up displays characterised by the informative content of the display
    • G02B2027/0161 Head-up displays characterised by the relative positioning of the constitutive elements
    • G02B2027/0163 Electric or electronic control thereof
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G06T7/50 Depth or shape recovery


Abstract

The present invention provides a scale-based method and device for setting the depth-of-field regions of a virtual reality helmet, comprising a test unit, an observation unit, an image unit and a processing unit. The test unit comprises a virtual reality helmet to be set and a fixing structure; the helmet to be set comprises a display screen; the fixing structure comprises a clamping mechanism and a limiting mechanism, and the clamping mechanism can be opened so that the helmet can be inserted. Compared with the prior art, the invention solves the problem of depth-of-field measurement simply and effectively through the combination of the test unit, observation unit, image unit and processing unit. Driving the observation unit along the eyepiece track with a motor makes it easy to observe from multiple angles and thus to set up multiple observation points.

Description

Scale-based method and device for setting the depth-of-field regions of a virtual reality helmet
Technical field
The present invention relates to the field of virtual reality, and more specifically to a scale-based method and device for setting the depth-of-field regions of a virtual reality helmet.
Background technology
Distortion lenses are used in many fields. In a virtual reality system, for example, to give the user a genuine sense of visual immersion, the virtual reality device must cover as much of the human eye's visual range as possible, so the device is fitted with lenses of a specific spherical curvature. When a traditional image is projected into the eye through such curved lenses, however, the image is distorted, and the eye has no way to establish its position in the virtual space: everything around the viewer in the virtual scene is a distorted image. To solve this problem the image must first be pre-distorted: a distorted image matched to the distortion lens is generated by a specific algorithm, and once this pre-distorted image is projected into the eye through the lens it becomes a normal image, giving the viewer a true sense of position and wide-angle coverage. Lens manufacturers can currently produce lenses to given distortion parameters, and virtual reality helmet manufacturers assemble those lenses into their helmets. For ordinary users and software developers, however, there is no instrument for measuring a lens's distortion parameters, so short of asking the lens manufacturer there is no direct way to obtain them, which greatly hampers the development and use of virtual reality software. And because the distortion parameters cannot be obtained, the depth-of-field display of the virtual reality helmet cannot be configured either.
Summary of the invention
To remedy the inability of current virtual reality devices to optimize the depth of field, the present invention provides a scale-based method for setting the depth-of-field regions of a virtual reality helmet, comprising the following steps:
S1: the processing unit processes the picture to be set and divides out its solid-color regions;
S2: scale-based setting is performed on the depth of field corresponding to the boundary points of a solid-color region, yielding the display-screen positions corresponding to those boundary points and the closed figure those positions enclose;
S3: identical display content is filled into the closed figure enclosed on the display screen;
S4: after all solid-color regions have been set, the depth of field is set for the remaining regions of the picture to be set.
Preferably, the scale-based setting comprises the following steps:
S10: the angular position of the observation unit is calculated from the depth-of-field relation, and the observation unit is moved to that angular position;
S20: the display information is observed and identified by a two-step measurement, determining the observed figure scale;
S30: the processing unit determines, from the observed figure scale, the display position of the depth of field to be set.
Preferably, the two-step measurement comprises the following steps:
S201: multiple regions are marked off on one eye's display screen; within a single region, multiple copies of the same figure scale are shown, and each region shows a different figure scale; the observation unit determines which region it is viewing from the features of the figure scale it observes;
S202: once the viewing region is determined, the display screen shows that region again with every figure scale in it displaying different content.
Preferably, among the multiple regions marked off on one eye's display screen, one pixel between adjacent regions is left blank, showing no color, as a border.
Preferably, one pixel between adjacent figure scales is left blank, showing no color, as a figure-scale border.
Preferably, the method further comprises step S40: the observation unit moves to the next point at which the depth of field needs to be set, and the above steps are repeated until all required depths of field have been set.
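The two-step measurement of S201-S202 is, in effect, a coarse-then-fine table lookup: with K distinguishable scales, the first reading narrows the position to one of up to K regions and the second reading to one cell within that region, so two observations address up to K*K positions. A minimal sketch in Python, with the reading of a scale abstracted into a hypothetical `read_scale` callback (the names and the 10x5 layout are invented for illustration):

```python
def two_step_locate(read_scale, regions):
    """regions[r][c] is the scale id that cell c of region r shows in step 2.
    Step 1: every cell of region r shows the same id r, so one reading
    identifies the region. Step 2: cells show distinct ids, so a second
    reading identifies the cell within that region."""
    region = read_scale(step=1)            # coarse: which region is in view
    cell = regions[region].index(read_scale(step=2))  # fine: which cell
    return region, cell

# Simulated observer whose view lands on region 7, cell 3 of a 10x5 layout
regions = [[r * 100 + c for c in range(5)] for r in range(10)]
def read_scale(step):
    return 7 if step == 1 else regions[7][3]

print(two_step_locate(read_scale, regions))  # → (7, 3)
```

The blank one-pixel borders in the claims above play no role in this sketch; they exist so that the physical reading step can tell adjacent scales apart.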
A depth-of-field setting device is also provided, comprising a test unit, an observation unit, an image unit and a processing unit. The test unit comprises a virtual reality helmet to be set and a fixing structure; the helmet to be set comprises a display screen; the fixing structure comprises a clamping mechanism and a limiting mechanism, and the clamping mechanism can be opened so that the helmet can be inserted.
Preferably, the clamping mechanism comprises a torsion spring which, after the clamping mechanism is opened, acts on the clamping mechanism to close it and hold the virtual reality helmet in place.
Preferably, the observation unit comprises an observation eyepiece, an eyepiece track and a motor; the observation eyepiece can move along the eyepiece track, driven by the motor.
Preferably, the observation unit comprises a moving plate, an observation eyepiece, a light shield, an eyepiece track and a motor; the observation eyepiece can move along the eyepiece track, driven by the motor; the eyepiece track is mounted on the moving plate, which can carry the observation eyepiece, the motor and the eyepiece track along with it.
Preferably, the light shield includes a light-transmitting hole.
Compared with the prior art, the method of using the processing unit to divide out solid-color regions and set only their borders greatly improves the efficiency of depth-of-field setting. Calculating the viewing direction from the depth-of-field relation and determining the display position of the depth of field to be set through the observation unit's two-step measurement provides a novel way of setting the depth of field; the two-step measurement improves both the accuracy and the efficiency of the setting. Leaving an unlit pixel as a border prevents confusion between regions and between figure scales. The combination of the test unit, observation unit, image unit and processing unit solves the depth-of-field optimization problem simply and effectively, and driving the observation unit along the eyepiece track with a motor makes it easy to observe from multiple angles and thus to set up multiple observation points.
Brief description of the drawings
The invention is further described below in conjunction with the drawings and embodiments, in which:
Fig. 1 is a module diagram of the first embodiment of the invention;
Fig. 2 is a module diagram of the test unit of the first embodiment;
Fig. 3 is a schematic diagram of the first embodiment of the invention;
Fig. 4 is a schematic side view of the first embodiment of the invention;
Fig. 5 is a schematic diagram of the depth-of-field display principle of the virtual reality helmet of the invention;
Fig. 6 is a schematic structural diagram of the second embodiment of the invention;
Fig. 7 is a schematic diagram of the light shield of the second embodiment;
Fig. 8 is a schematic diagram of the display screen showing figure scales;
Fig. 9 is a schematic diagram of the picture to be tested in the third embodiment of the invention;
Fig. 10 is a schematic diagram of measuring a solid-color region's border in the third embodiment of the invention.
Detailed description of the embodiments
To remedy the inability of current virtual reality devices to optimize the depth of field, the present invention provides a scale-based method and device for setting the depth-of-field regions of a virtual reality helmet.
For a clearer understanding of the technical features, objects and effects of the invention, embodiments of the invention are now described in detail with reference to the drawings.
Referring to Figs. 1-2, the virtual reality helmet depth-of-field display device of the invention comprises a test unit 1, an observation unit 2, an image unit 3 and a processing unit 4. The test unit 1 comprises a lens to be tested 12 and a fixing structure 14; the lens to be tested 12 is detachably mounted on the fixing structure 14. The image unit 3 is electrically connected with the observation unit 2, and the processing unit 4 is electrically connected with the image unit 3. The observation unit 2 observes the test unit 1 by capturing images: it photographs the test unit 1 and transmits the captured images to the image unit 3 for processing. The image unit 3 processes the images captured by the observation unit 2 and transfers the results to the processing unit 4, which processes the data transmitted by the image unit 3.
Figs. 3-4 show the first embodiment of the virtual reality helmet depth-of-field display device as an example. A display screen 16 is fixedly mounted in the fixing structure 14, which carries a lens mounting portion 18 for mounting the lens to be tested 12. The observation unit 2 comprises an observation eyepiece 23, an eyepiece track 25, an eyepiece motor 271, a lifting motor 272 and a lifting rod 273. The observation eyepiece 23 can translate along the eyepiece track 25, driven by the eyepiece motor 271, and can rotate under the same drive to change its viewing angle. The observation eyepiece 23 is connected with the lifting rod 273 and rises and falls together with it; the lifting rod 273 is raised and lowered in the vertical direction under the control of the lifting motor 272. In use, the eyepiece motor 271 and lifting motor 272 coordinate translation, rotation and lifting to bring the observation eyepiece 23 to different observation positions, simulating the gaze direction of an eye observing the light emitted by the display screen 16.
When initially fitting the distortion data, the fixing structure 14 is first removed, the lens to be tested 12 is mounted at the lens mounting portion 18, and the fixing structure 14 is then mounted on the base 21. The eyepiece motor 271 is reset to its initial position at one end of the eyepiece track 25; preparation before detection is now complete. When the processing unit 4 receives the command to start detection, the eyepiece motor 271 and lifting motor 272 drive the observation eyepiece 23 to the first observation point, and the processing unit 4 orders the display screen 16 to show the detection information. First, the display screen 16 displays a longitudinal line of light, one pixel column at a time, from its first end to its second end. The first and second ends are opposite and can be designated as needed; by convention, viewed from the observation unit 2 toward the fixed test unit 1, the left end of the display screen 16 is the first end and the right end is the second end. When the image unit 3 detects that the displayed light, after distortion, has reached the calibration position of the observation unit 2, it reports this to the processing unit 4, which records the current position of the observation unit 2 and the abscissa of the light on the display screen 16. The observation unit 2 then moves to the next observation point, the processing unit 4 orders the test unit 1 to display the detection information, and the detection process is repeated. The more observation points are set, the finer the lens measurement result, which helps in fitting the data. After all observation points have been detected, the processing unit 4 collects all the correspondences and fits them against the distortion functions stored in the database. If the processing unit 4 successfully fits one or several distortion functions, it records and stores the fitting result; if it cannot fit the measured correspondences to any distortion function in the database, it stores the correspondences as a point function.
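The fit-then-fall-back logic described above (try the stored distortion functions against the measured correspondences; if none matches, store the raw point function) can be sketched as follows. The candidate database, the tolerance, and the model names here are invented for illustration and are not the patent's actual distortion models:

```python
def fit_distortion(pairs, candidates, tol=1e-3):
    """pairs: (view_angle, display_x) correspondences, one per observation
    point. Return a fitted named function if one reproduces every
    measurement within tol, else fall back to storing the raw
    point-to-point mapping (the 'point function' mode)."""
    for name, f in candidates:
        if all(abs(f(a) - x) <= tol for a, x in pairs):
            return ("fitted", name)
    return ("point_map", dict(pairs))

# Hypothetical candidate database: two simple distortion-style models
candidates = [
    ("linear", lambda a: 2.0 * a),
    ("cubic",  lambda a: 2.0 * a + 0.1 * a**3),
]
# Simulated measurements generated by the cubic model
measured = [(a / 10, 2.0 * (a / 10) + 0.1 * (a / 10) ** 3) for a in range(10)]
print(fit_distortion(measured, candidates))  # → ('fitted', 'cubic')
```

The linear candidate is rejected because its residual exceeds the tolerance at the larger angles, while the cubic candidate reproduces every pair exactly.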
Referring to Fig. 5, which shows the principle of the scale-based method and device for setting the depth-of-field regions of a virtual reality helmet: when an observer forms a visual image, the left and right eyes must image cooperatively. In Fig. 5, the light emitted by the display screen 16 reaches the left and right eyes separately through refraction by the optical lenses, so that the eyes perceive an image at point A, while the corresponding luminous points on the display screen 16 are A1 and A2 respectively. This is what produces the depth-of-field effect.
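The geometry of Fig. 5 can be made concrete with a small calculation. Placing the eyes at x = ±b/2 and the (virtual) image plane at distance d, the rays from the two eyes through A1 and A2 intersect at depth D = b·d / (b − s), where s is the horizontal disparity between A2 and A1. This is a toy model under assumed flat-image-plane geometry, not the patent's actual optics:

```python
def perceived_depth(ipd, screen_dist, x_left, x_right):
    """Depth of the fused point A when the left eye sees display point A1
    at x_left and the right eye sees A2 at x_right, with the image plane
    at screen_dist. Derived by intersecting the two eye rays, with the
    eyes placed at x = +/- ipd/2."""
    disparity = x_right - x_left
    return ipd * screen_dist / (ipd - disparity)

# Zero disparity: the point appears on the image plane itself
print(perceived_depth(0.064, 2.0, 0.10, 0.10))
# Crossed placement (A2 left of A1) pulls the point nearer:
# 0.064*2.0 / (0.064 + 0.032) = 1.333...
print(perceived_depth(0.064, 2.0, 0.10, 0.068))
```

As disparity approaches the interpupillary distance the rays become parallel and D diverges, which is why small on-screen shifts of A1 and A2 can place A anywhere from the screen plane to infinity.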
Referring to Figs. 6-7, Fig. 6 shows the second embodiment of the invention, which is mainly used to optimize the displayed depth of field of a virtual reality helmet. It comprises a virtual reality helmet to be set 13 and a fixing structure 14; the helmet 13 is detachably mounted in the fixing structure 14. The fixing structure 14 comprises a clamping mechanism 142, a limiting mechanism 141 and a base plate 143. The clamping mechanism 142 includes a torsion spring (not shown) and can be opened; after the helmet 13 is inserted, the torsion spring closes the clamping mechanism 142 and holds the helmet 13 in place. The limiting mechanism 141 precisely constrains the position of the helmet 13, preventing it from sitting too far forward or backward and affecting the optimization result; the limiting mechanism 141 and clamping mechanism 142 are fixed on the base plate 143. The observation unit 2 comprises two sets of observation facilities, which observe the distorted images for the left eye and the right eye respectively. The observation unit 2 comprises an observation eyepiece 23, an eyepiece track 25, a motor 27 and a light shield 29; the observation eyepiece 23 can rotate along the eyepiece track 25, driven by the motor 27, to change its viewing angle. In use, the motor 27 rotates about a virtual left observation point 26 and right observation point 28, bringing the observation eyepiece 23 to different observation positions and simulating the gaze direction of an eye observing the light emitted by the helmet to be set 13. Fig. 7 shows the light shield 29 as an example: a slit 291 about 1 mm in diameter passes through the shield 29 with a certain depth, ensuring a thin-ray imaging condition so that the observation eyepiece 23 accurately observes the light transmitted from the corresponding direction and light from other directions cannot affect the observation result. The light shield 29 is detachably mounted on the observation eyepiece 23.
Fig. 8 shows the display screen 16 displaying figure scales 161. When measurement starts, the display screen 16 receives the processing unit 4's command to show a dot matrix in the center of the screen, on which figure scales 161, each a combination of colored dots, are displayed (shown schematically in the figure). Here a figure scale 161 is taken to be a figure of nine pixels in a 3x3 grid. Each pixel can show one of three colors, red, green or blue, and different figure scales 161 are distinguished by the colors their nine pixels show. The more pixels a figure scale 161 contains, the lower the measurement precision; with too few pixels, figure scales 161 cannot be effectively distinguished, which greatly reduces measurement efficiency. A nine-pixel 3x3 checkered figure scale 161 effectively improves measurement efficiency while preserving measurement precision. Each figure scale 161 corresponds to a physical position on the display screen 16; in use, the focal length of the observation eyepiece 23 can be adjusted so that only one figure scale 161 appears in the image transmitted through the slit 291, establishing the mapping between the position of the observation eyepiece 23 and the position on the display screen 16.
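The count of distinguishable scales quoted later in the text, 19683, follows from nine pixels with three colors each: 3^9. A sketch of one possible base-3 encoding and decoding of a scale id into pixel colors (the patent does not specify how ids map to colors, so this assignment is an assumption for illustration):

```python
COLORS = ("red", "green", "blue")

def encode_scale(idx):
    """Encode an integer id as the 9 pixel colors of a 3x3 figure scale,
    one base-3 digit per pixel, least-significant pixel first."""
    assert 0 <= idx < 3 ** 9
    digits = []
    for _ in range(9):
        digits.append(COLORS[idx % 3])
        idx //= 3
    return tuple(digits)

def decode_scale(pixels):
    """Recover the id from the observed 3x3 colors."""
    idx = 0
    for c in reversed(pixels):
        idx = idx * 3 + COLORS.index(c)
    return idx

print(3 ** 9)                             # → 19683 distinguishable scales
print(decode_scale(encode_scale(12345)))  # → 12345
```

Any such bijection works; what matters is that a single observed 3x3 pattern determines the id, and hence (within one region) a unique display position.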
Because the display screen 16 of a virtual reality helmet is generally split into left and right halves, each half has a total of two to three million pixels; relying solely on different colors in a 3x3 grid can distinguish only 19683 figure scales 161, which cannot directly distinguish a larger number of them. Here a two-step measurement is used with the 3x3 checkered figure scales 161 to obtain the physical position corresponding to a figure scale 161. In the first measurement step, 100 regions are marked off on one half of the display screen; a single region shows many copies of the same figure scale 161, and the 100 regions show different figure scales 161. In the example of Fig. 8, the features of the figure scale shown in a region, read left to right and top to bottom, are: red, green, blue, green, red, blue, blue, green, red. Among the 100 regions, one pixel between adjacent regions is left blank, showing no color, as a border; likewise one pixel between adjacent figure scales 161 is left blank as a figure-scale border 162, which helps distinguish adjacent figure scales 161. When measurement starts, the observation unit 2 first determines, from the features of the observed figure scale 161, in which of the 100 regions the viewing area lies. Once the region is determined, the second measurement step is performed: the display screen 16 shows that region again with each figure scale 161 in it displaying different content, and the observation unit determines the corresponding physical position from the features of the figure scale 161 it observes.
When configuring the depth-of-field display, the clamping mechanism 142 is first opened and the virtual reality helmet to be set 13 is inserted. The motor 27 is reset to its initial position at one end of the eyepiece track 25; preparation before detection is now complete. When the processing unit 4 receives the command to start setting the depth of field, it calculates the gaze angle corresponding to the depth of field, and the motor 27 drives the observation eyepiece 23 to the gaze angle corresponding to the depth of field to be set. Meanwhile, the processing unit 4 orders the display screen 16 to show the content needed for the first measurement step, determining the region the observation eyepiece 23 is viewing. Once that region is determined, the processing unit 4 orders the display screen 16 to show the content for the second measurement step and identifies the observed figure scale 161, thereby determining the display position on the display screen 16 corresponding to that depth of field. The observation unit 2 then moves to the next observation point, the processing unit 4 orders the test unit 1 to display the detection information, and the detection process repeats until all required depths of field have been measured. We refer to the setting method above as scale-based setting.
Referring to Fig. 10, the structure of the third embodiment of the invention is essentially identical to that of the second embodiment. The difference is that, before the depth of field is set, the processing unit 4 of the third embodiment first performs image processing on the image whose depth of field is to be set, dividing it into regions with identical depth of field and identical display content; we call such a region a solid-color region. When setting begins, only the boundary points of a solid-color region need to be set in order to obtain the region's corresponding display position on the display screen 16. Take the picture to be set 300 in Fig. 10 as an example: the picture 300 contains the sun 301, the sky 302, a building wall 303, the ground 304 and a vehicle 305, and the boundary of the sun 301 is a circular boundary 3011. It can be clearly seen that the sun 301, the sky 302 and the building wall 303 are solid-color regions, whose display positions can be obtained by measuring their boundary points. The ground 304, however, recedes from near to far and therefore does not form a region of uniform depth of field; the vehicle 305, although of uniform depth of field, does not have uniform display content. These two parts must therefore be set separately. When performing region setting, taking the sun 301 as an example, the device for setting virtual reality helmet depth-of-field regions of the present invention performs depth-of-field setting for each point on the circular boundary 3011 and obtains each point's corresponding display position on the display screen 16. Owing to the characteristics of the optical lens, the display positions on the display screen 16 corresponding to the points of the circular boundary 3011 necessarily form a closed figure, and all points inside that closed figure show identical content. There is therefore no need to perform depth-of-field setting for every point inside the region.
Before depth-of-field setting is carried out, the processing unit 4 performs image processing on the image whose depth of field is to be set and divides it into solid-color regions. When setting begins, the boundary points of each solid-color region are set using the method described in the second embodiment, yielding the corresponding display points on the display screen 16; identical content is then shown inside the closed figure that these display points enclose. After all solid-color regions have been set, the device for setting virtual reality helmet depth-of-field regions of the present invention performs depth-of-field setting for the display objects that do not belong to any solid-color region.
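A minimal sketch of the filling step, under the assumption stated in the text that the measured boundary display points form a closed curve: the interior of the closed figure can be recovered by flood-filling the exterior of the screen grid and taking the complement, after which identical content would be shown at every interior pixel.

```python
from collections import deque

def fill_closed_figure(width, height, boundary):
    """Given the display positions of a region's boundary points
    (assumed to form a closed curve on the screen grid), return
    every pixel strictly inside the curve.  Flood-fills the
    exterior from the screen edges, then takes the complement."""
    boundary = set(boundary)
    outside = set()
    queue = deque()
    # seed the exterior fill from all edge pixels not on the boundary
    for x in range(width):
        for y in (0, height - 1):
            if (x, y) not in boundary and (x, y) not in outside:
                queue.append((x, y)); outside.add((x, y))
    for y in range(height):
        for x in (0, width - 1):
            if (x, y) not in boundary and (x, y) not in outside:
                queue.append((x, y)); outside.add((x, y))
    while queue:                              # BFS over the exterior
        x, y = queue.popleft()
        for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1)):
            if (0 <= nx < width and 0 <= ny < height
                    and (nx, ny) not in boundary
                    and (nx, ny) not in outside):
                outside.add((nx, ny)); queue.append((nx, ny))
    return [(x, y) for x in range(width) for y in range(height)
            if (x, y) not in boundary and (x, y) not in outside]
```

This mirrors why only boundary points need measuring: once the closed figure is known, every interior display position follows without further measurement.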
Compared with the prior art, the present invention greatly improves the efficiency of depth-of-field setting by using the processing unit 4 to divide the image into solid-color regions and to set only their boundaries. Calculating the view direction according to the depth-of-field relation, and combining a two-step measurement with the observation of the observation unit 2 to derive the display position of the depth of field to be set, provides a novel method of setting the depth of field. The two-step measurement improves both the accuracy and the efficiency of the setting. Leaving an undisplayed pixel between regions as a border prevents confusion between adjacent figure scales. The combination of the test unit 1, the observation unit 2, the image unit 3 and the processing unit 4 solves the depth-of-field optimization problem effectively and simply. Driving the observation unit 2 along the eyepiece track 25 by means of the motor 27 allows observation from multiple angles and facilitates the setting of multiple observation points.
The embodiments of the invention have been described above with reference to the accompanying drawings, but the invention is not limited to the specific embodiments described, which are merely illustrative rather than restrictive. Under the teaching of the present invention, those of ordinary skill in the art may devise many further forms without departing from the inventive concept and the scope of the claims, and all such forms fall within the protection of the present invention.

Claims (10)

  1. A method for setting virtual reality helmet depth-of-field regions based on corresponding scales, characterized by comprising the following steps:
    S1: The processing unit processes the picture to be set and divides it into solid-color regions;
    S2: Scale setting is performed on the depth of field corresponding to the boundary points of each solid-color region, yielding the display positions on the display screen corresponding to those boundary points and the closed figure that those display positions enclose;
    S3: Identical display content is filled into the closed figure enclosed on the display screen;
    S4: After all solid-color regions have been set, the remaining regions of the picture to be set are configured.
  2. The method for setting virtual reality helmet depth-of-field regions based on corresponding scales according to claim 1, characterized in that the scale setting comprises the following steps:
    S10: The angle position of the observation unit is calculated according to the depth-of-field relation, and the observation unit is moved to the corresponding angle position;
    S20: The display information is observed and identified by means of a two-step measurement, and the observed figure scale is determined;
    S30: The processing unit determines the display position of the depth of field to be set according to the observed figure scale.
  3. The method for setting virtual reality helmet depth-of-field regions based on corresponding scales according to claim 2, characterized in that the two-step measurement comprises the following steps:
    S201: A plurality of regions are marked off on a single-eye display screen, wherein a plurality of identical figure scales are displayed within each single region and each region displays a different figure scale; the observation unit determines the position of the observed region according to the features of the figure scale it observes;
    S202: After the display region has been determined, the display screen displays the content of that region again, in such a way that the display content of each figure scale within the region is no longer identical.
  4. The method for setting virtual reality helmet depth-of-field regions based on corresponding scales according to claim 3, characterized in that one pixel left between adjacent figure scales displays no color and serves as the figure-scale border.
  5. The method for setting virtual reality helmet depth-of-field regions based on corresponding scales according to claim 2, characterized by further comprising step S40: the observation unit moves to the next point at which the depth of field needs to be set and the above steps are repeated, until all required depths of field have been set.
  6. A device for setting virtual reality helmet depth-of-field regions based on corresponding scales, using the method according to claim 1, characterized by comprising a test unit, an observation unit, an image unit and a processing unit, wherein the test unit comprises the virtual reality helmet to be set and a fixing structure, the virtual reality helmet to be set comprises a display screen, the fixing structure comprises a clamping mechanism and a position-limiting mechanism, and the clamping mechanism can be opened so that the virtual reality helmet can be placed into it.
  7. The device for setting virtual reality helmet depth-of-field regions based on corresponding scales according to claim 6, characterized in that the clamping mechanism comprises a torsion spring which, after the clamping mechanism has been opened, acts on the clamping mechanism to close it and fix the virtual reality helmet.
  8. The device for setting virtual reality helmet depth-of-field regions based on corresponding scales according to claim 6, characterized in that the observation unit comprises an observation eyepiece, an eyepiece track and a motor, and the observation eyepiece can move along the eyepiece track driven by the motor.
  9. The device for setting virtual reality helmet depth-of-field regions based on corresponding scales according to claim 7, characterized in that the observation unit comprises a movable plate, an observation eyepiece, a shading plate, an eyepiece track and a motor; the observation eyepiece can move along the eyepiece track driven by the motor, the eyepiece track is mounted on the movable plate, and the movable plate can drive the observation eyepiece, the motor and the eyepiece track to move together.
  10. The device for setting virtual reality helmet depth-of-field regions based on corresponding scales according to claim 9, characterized in that the shading plate comprises a light-transmitting hole.
CN201710544210.0A 2016-11-30 2017-07-05 Method and device for setting virtual reality helmet depth-of-field regions based on corresponding scales Pending CN107544151A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2016213083149 2016-11-30
CN201621308314 2016-11-30

Publications (1)

Publication Number Publication Date
CN107544151A true CN107544151A (en) 2018-01-05

Family

ID=60100336

Family Applications (35)

Application Number Title Priority Date Filing Date
CN201710543865.6A Pending CN107702894A (en) 2016-11-30 2017-07-05 The method and device of virtual reality eyeglass dispersion detection
CN201710544198.3A Pending CN107544149A (en) 2016-11-30 2017-07-05 Region depth of field method to set up and device based on image scale
CN201710544197.9A Pending CN107505708A (en) 2016-11-30 2017-07-05 Virtual implementing helmet depth of field method to set up and device based on image scale
CN201710544195.XA Pending CN107329266A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet depth of field region is set
CN201710544211.5A Pending CN107300775A (en) 2016-11-30 2017-07-05 The depth of field based on image scale sets the method and device of optimization
CN201710543941.3A Pending CN107390364A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet depth of field laser is set
CN201710543937.7A Pending CN107490861A (en) 2016-11-30 2017-07-05 The method and device of virtual implementing helmet depth of field optimization display
CN201710543924.XA Pending CN107357037A (en) 2016-11-30 2017-07-05 The method and device of virtual implementing helmet laser assisted depth of field optimization
CN201710543920.1A Pending CN108121068A (en) 2016-11-30 2017-07-05 Virtual implementing helmet depth of field laser sets the method and device of optimization display
CN201710543918.4A Pending CN107687936A (en) 2016-11-30 2017-07-05 The method and device detected based on virtual implementing helmet dispersion corresponding to scale
CN201710544189.4A Withdrawn CN107357039A (en) 2016-11-30 2017-07-05 Virtual reality eyeglass distortion checking and the method and device of adjustment
CN201710544192.6A Pending CN107544148A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet depth of field laser based on image scale is set
CN201710544212.XA Pending CN107300776A (en) 2016-11-30 2017-07-05 Interpupillary distance depth of field method to set up and device based on image scale
CN201710543923.5A Pending CN107688387A (en) 2016-11-30 2017-07-05 The method and device of virtual implementing helmet dispersion detection
CN201710544200.7A Pending CN107479188A (en) 2016-11-30 2017-07-05 The method and device of virtual implementing helmet depth of field optimization
CN201710543919.9A Pending CN107422479A (en) 2016-11-30 2017-07-05 Based on virtual implementing helmet depth of field method to set up and device corresponding to scale
CN201710543936.2A Pending CN107462991A (en) 2016-11-30 2017-07-05 The method and device that the virtual implementing helmet depth of field is set
CN201710543939.6A Pending CN107526167A (en) 2016-11-30 2017-07-05 The method and device optimized based on depth of field laser corresponding to scale
CN201710544202.6A Pending CN107402448A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet interpupillary distance is set with depth of field laser
CN201710544203.0A Pending CN107340595A (en) 2016-11-30 2017-07-05 The method and device set based on virtual implementing helmet depth of field region laser corresponding to scale
CN201710544194.5A Pending CN107329265A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet interpupillary distance optimizes with depth of field laser
CN201710544210.0A Pending CN107544151A (en) 2016-11-30 2017-07-05 Based on virtual implementing helmet depth of field zone approach and device corresponding to scale
CN201710543938.1A Pending CN107357038A (en) 2016-11-30 2017-07-05 Virtual implementing helmet interpupillary distance and the method and device of depth of field adjustment
CN201710543942.8A Pending CN107329264A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet interpupillary distance is set with the depth of field
CN201710544204.5A Withdrawn CN107464221A (en) 2016-11-30 2017-07-05 Based on the method and device of virtual reality eyeglass distortion checking and adjustment corresponding to scale
CN201710543925.4A Pending CN107329263A (en) 2016-11-30 2017-07-05 The method and device that the virtual implementing helmet depth of field is shown
CN201710544201.1A Pending CN107291246A (en) 2016-11-30 2017-07-05 The method and device of virtual implementing helmet depth of field measurement based on image scale
CN201710544196.4A Pending CN107315251A (en) 2016-11-30 2017-07-05 Based on the corresponding virtual implementing helmet interpupillary distance of scale and depth of field method to set up and device
CN201710543921.6A Pending CN107300774A (en) 2016-11-30 2017-07-05 Method and device based on the corresponding virtual implementing helmet distortion checking of scale and adjustment
CN201710543922.0A Pending CN107462400A (en) 2016-11-30 2017-07-05 The method and device detected based on virtual reality eyeglass dispersion corresponding to scale
CN201710543944.7A Pending CN107544147A (en) 2016-11-30 2017-07-05 The method and device that depth of field laser based on image scale is set
CN201710544208.3A Pending CN107290854A (en) 2016-11-30 2017-07-05 Virtual implementing helmet interpupillary distance optimizes the method and device of display with the depth of field
CN201710544199.8A Pending CN107544150A (en) 2016-11-30 2017-07-05 The method and device set based on virtual implementing helmet depth of field laser corresponding to scale
CN201710544213.4A Withdrawn CN107478412A (en) 2016-11-30 2017-07-05 Virtual implementing helmet distortion checking and the method and device of adjustment
CN201710544205.XA Pending CN107315252A (en) 2016-11-30 2017-07-05 The method and device that virtual implementing helmet depth of field region laser is set


Country Status (1)

Country Link
CN (35) CN107702894A (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108008535A (en) * 2017-11-17 2018-05-08 国网山东省电力公司 A kind of augmented reality equipment
CN107977076B (en) * 2017-11-17 2018-11-27 国网山东省电力公司泰安供电公司 A kind of wearable virtual reality device
CN107942517B (en) * 2018-01-02 2020-03-06 京东方科技集团股份有限公司 VR head-mounted display device and display method thereof
CN108303798B (en) * 2018-01-15 2020-10-09 海信视像科技股份有限公司 Virtual reality helmet, virtual reality helmet interpupillary distance adjusting method and device
CN108426702B (en) * 2018-01-19 2020-06-02 华勤通讯技术有限公司 Dispersion measurement device and method of augmented reality equipment
CN108399606B (en) * 2018-02-02 2020-06-26 北京奇艺世纪科技有限公司 Image adjusting method and device
CN108510549B (en) * 2018-03-27 2022-01-04 京东方科技集团股份有限公司 Distortion parameter measuring method, device and system of virtual reality equipment
CN109186957B (en) * 2018-09-17 2024-05-10 浙江晶正光电科技有限公司 High-precision automatic detection equipment for diffusion angle of laser diffusion sheet
CN109557669B (en) * 2018-11-26 2021-10-12 歌尔光学科技有限公司 Method for determining image drift amount of head-mounted display equipment and head-mounted display equipment
US11513346B2 (en) * 2019-05-24 2022-11-29 Beijing Boe Optoelectronics Technology Co., Ltd. Method and apparatus for controlling virtual reality display device
CN110320009A (en) * 2019-06-25 2019-10-11 歌尔股份有限公司 Optical property detection method and detection device
CN113822104B (en) * 2020-07-07 2023-11-03 湖北亿立能科技股份有限公司 Artificial intelligence surface of water detecting system based on virtual scale of many candidates
CN113768240A (en) * 2021-08-30 2021-12-10 航宇救生装备有限公司 Method for adjusting imaging position of display protection helmet
CN114089508B (en) * 2022-01-19 2022-05-03 茂莱(南京)仪器有限公司 Wide-angle projection lens for detecting optical waveguide AR lens
DE102022207774A1 (en) 2022-07-28 2024-02-08 Robert Bosch Gesellschaft mit beschränkter Haftung Method for an automated calibration of a virtual retinal display for data glasses, calibration device and virtual retinal display
CN117214025B (en) * 2023-11-08 2024-01-12 广东德鑫体育产业有限公司 Helmet lens detection device

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5619373A (en) * 1995-06-07 1997-04-08 Hasbro, Inc. Optical system for a head mounted display
CN102967473B (en) * 2012-11-30 2015-04-29 奇瑞汽车股份有限公司 Driver front-view measuring device
US10228562B2 (en) * 2014-02-21 2019-03-12 Sony Interactive Entertainment Inc. Realtime lens aberration correction from eye tracking
EP3200148B1 (en) * 2014-10-31 2019-08-28 Huawei Technologies Co., Ltd. Image processing method and device
CN104808342B (en) * 2015-04-30 2017-12-12 杭州映墨科技有限公司 The optical lens structure of the wearable virtual implementing helmet of three-dimensional scenic is presented
US10271042B2 (en) * 2015-05-29 2019-04-23 Seeing Machines Limited Calibration of a head mounted eye tracking system
CN105979243A (en) * 2015-12-01 2016-09-28 乐视致新电子科技(天津)有限公司 Processing method and device for displaying stereo images
CN105979252A (en) * 2015-12-03 2016-09-28 乐视致新电子科技(天津)有限公司 Test method and device
CN105867606A (en) * 2015-12-15 2016-08-17 乐视致新电子科技(天津)有限公司 Image acquisition method and apparatus in virtual reality helmet, and virtual reality helmet
CN105869142A (en) * 2015-12-21 2016-08-17 乐视致新电子科技(天津)有限公司 Method and device for testing imaging distortion of virtual reality helmets
CN105787980B (en) * 2016-03-17 2018-12-25 北京牡丹视源电子有限责任公司 A kind of detection virtual reality shows the method and system of equipment field angle
CN106028013A (en) * 2016-04-28 2016-10-12 努比亚技术有限公司 Wearable device, display device, and display output adjusting method
CN105791789B (en) * 2016-04-28 2019-03-19 努比亚技术有限公司 The method of helmet, display equipment and adjust automatically display output
CN106441212B (en) * 2016-09-18 2020-07-28 京东方科技集团股份有限公司 Device and method for detecting field angle of optical instrument
CN106527733A (en) * 2016-11-30 2017-03-22 深圳市虚拟现实技术有限公司 Virtual-reality helmet distortion fitting-detecting method and device
CN106651954A (en) * 2016-12-27 2017-05-10 天津科技大学 Laser simulation method and device for space sight line benchmark

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Ye Yutang et al.: "Optics Course" (《光学教程》), 31 August 2005 *
Wu Qihai: "Complete Manual of Interchangeable Lenses for Digital Cameras" (《数码照相机可换镜头使用完全手册》), 31 January 2015 *
Gao Hongjin et al.: "New Display Technologies, Vol. II" (《新型显示技术 下》), 31 August 2014 *

Also Published As

Publication number Publication date
CN107390364A (en) 2017-11-24
CN107329265A (en) 2017-11-07
CN107422479A (en) 2017-12-01
CN107300776A (en) 2017-10-27
CN107291246A (en) 2017-10-24
CN107340595A (en) 2017-11-10
CN107300774A (en) 2017-10-27
CN107300775A (en) 2017-10-27
CN107329264A (en) 2017-11-07
CN107544147A (en) 2018-01-05
CN108121068A (en) 2018-06-05
CN107462991A (en) 2017-12-12
CN107544150A (en) 2018-01-05
CN107478412A (en) 2017-12-15
CN107479188A (en) 2017-12-15
CN107357039A (en) 2017-11-17
CN107315252A (en) 2017-11-03
CN107462400A (en) 2017-12-12
CN107687936A (en) 2018-02-13
CN107290854A (en) 2017-10-24
CN107329263A (en) 2017-11-07
CN107329266A (en) 2017-11-07
CN107490861A (en) 2017-12-19
CN107464221A (en) 2017-12-12
CN107357037A (en) 2017-11-17
CN107526167A (en) 2017-12-29
CN107505708A (en) 2017-12-22
CN107688387A (en) 2018-02-13
CN107402448A (en) 2017-11-28
CN107544148A (en) 2018-01-05
CN107357038A (en) 2017-11-17
CN107315251A (en) 2017-11-03
CN107544149A (en) 2018-01-05
CN107702894A (en) 2018-02-16

Similar Documents

Publication Publication Date Title
CN107544151A (en) Method and device for setting virtual reality helmet depth-of-field regions based on corresponding scales
Rolland et al. Towards quantifying depth and size perception in virtual environments
CN105764405B (en) Based on the not positive system and method for subjective distance measuring measurement ophthalmic refractive
CN103605208B (en) content projection system and method
CN106605172B (en) Display device, the method for driving display device and electronic equipment
CN110967166A (en) Detection method, detection device and detection system of near-eye display optical system
CN106264441A (en) A kind of novel myopia degree tester and application process
CN108024704B (en) For measuring the method and system of the subjective refraction characteristic of eyes
CN106441822A (en) Virtual reality headset distortion detection method and device
CN106644403A (en) Lens distortion detection method and apparatus
CN106644404A (en) Virtual reality helmet distortion complete machine detection method and device
CN106527733A (en) Virtual-reality helmet distortion fitting-detecting method and device
CN106768878A (en) Optical mirror slip distortion fitting and the method and device for detecting
CN106073695A (en) A kind of Novel astigmatic number of degrees tester and application process
CN206378270U (en) The device of virtual implementing helmet distortion complete machine detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180105