CN106599930A - Virtual reality space locating feature point selection method - Google Patents

Virtual reality space locating feature point selection method

Info

Publication number
CN106599930A
Authority
CN
China
Prior art keywords
infrared
processing unit
light
virtual reality
light speckle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611200542.9A
Other languages
Chinese (zh)
Other versions
CN106599930B (en)
Inventor
李宗乘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Virtual Reality Technology Co Ltd
Original Assignee
Shenzhen Virtual Reality Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Virtual Reality Technology Co Ltd filed Critical Shenzhen Virtual Reality Technology Co Ltd
Priority to CN201611200542.9A priority Critical patent/CN106599930B/en
Publication of CN106599930A publication Critical patent/CN106599930A/en
Application granted granted Critical
Publication of CN106599930B publication Critical patent/CN106599930B/en
Expired - Fee Related
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Position Input By Displaying (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a virtual reality space locating feature point selection method. The method comprises the following steps: with all infrared point light sources ensured to be on, a processing unit controls an infrared camera to capture an image of a virtual reality helmet and calculates the coordinates of the light spot imaged by each infrared point light source; the processing unit performs ID identification on each light spot in the imaging picture and finds the ID corresponding to every light spot; the processing unit calculates the six-degree-of-freedom information of the virtual reality helmet and finds at least four infrared point light sources directly facing the infrared camera; and the processing unit keeps the at least four infrared point light sources directly facing the infrared camera lit, turns off the other infrared point light sources, controls the infrared camera to capture an image of the virtual reality helmet, and performs positioning on the image using a PnP algorithm.

Description

Virtual reality space positioning feature point screening method
Technical field
The present invention relates to the field of virtual reality, and more particularly to a virtual reality space positioning feature point screening method.
Background art
Spatial positioning is typically performed using optical or ultrasonic schemes, in which a model is established to derive the spatial position of the object to be measured. A typical virtual reality spatial positioning system determines the spatial position of an object by means of infrared points and a light-sensing camera: the infrared points are placed on the front end of a near-eye display device, and during positioning the camera captures the positions of the infrared points, from which the physical coordinates of the user are derived. If the correspondence between at least three light sources and their projections is known, the spatial position of the helmet can be obtained by calling a PnP algorithm. The key to this process is determining the light source ID (identity, or serial number) corresponding to each projection. In current virtual reality spatial positioning, image recognition is inaccurate at certain distances and in certain directions, so determining the light source ID corresponding to a projection takes too long and is error-prone, which degrades positioning accuracy and efficiency.
Summary of the invention
To overcome the poor accuracy and low efficiency of current virtual reality spatial positioning, the present invention provides a virtual reality space positioning feature point screening method that improves positioning accuracy and efficiency.
The technical solution adopted by the present invention to solve the technical problem is to provide a virtual reality space positioning feature point screening method comprising the following steps (an illustrative sketch of steps S1-S4 follows the list):
S1: With all infrared point light sources ensured to be on, the processing unit controls the infrared camera to capture an image of the virtual reality helmet and calculates the coordinates of the light spot imaged by each infrared point light source;
S2: The processing unit performs ID identification on each light spot in the imaging picture and finds the ID corresponding to every light spot;
S3: The processing unit calculates the six-degree-of-freedom information of the virtual reality helmet and, from the orientation information of the helmet, finds at least four infrared point light sources directly facing the infrared camera;
S4: The processing unit keeps the at least four infrared point light sources directly facing the infrared camera lit and turns off the remaining infrared point light sources; the processing unit then controls the infrared camera to capture an image of the virtual reality helmet and performs positioning on it using a PnP algorithm.
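By way of illustration only, and not as part of the claimed method, the following Python sketch shows how steps S1-S4 might fit together. The helper names camera.capture, helmet.set_lit, detect_spots, identify_ids, and sources_facing_camera are hypothetical (the patent names no such interfaces); the pose solve is shown with OpenCV's solvePnP, one plausible implementation of the PnP step.

```python
import cv2
import numpy as np

def screening_loop(camera, helmet, model_pts, all_ids, K, dist):
    # S1: light every infrared point source and image the helmet
    helmet.set_lit(all_ids)                    # hypothetical firmware call
    frame = camera.capture()                   # hypothetical capture call
    spots = detect_spots(frame)                # hypothetical spot locator

    # S2: recover the source ID behind each spot
    ids = identify_ids(spots)                  # hypothetical ID identification

    # S3: solve the 6-DOF pose, then pick >= 4 sources facing the camera
    obj = np.array([model_pts[i] for i in ids], dtype=np.float32)
    img = np.array(spots, dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist)
    facing = sources_facing_camera(model_pts, rvec, tvec)  # hypothetical

    # S4: keep only those sources lit; subsequent frames locate by PnP alone
    helmet.set_lit(facing)
    frame = camera.capture()
    img = np.array(detect_spots(frame), dtype=np.float32)  # assume same order
    obj = np.array([model_pts[i] for i in facing], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist)
    return rvec, tvec                          # helmet pose w.r.t. the camera
```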
Preferably, the processing unit selects the light spot nearest the center of the imaging picture as a central point, keeps the infrared point light source corresponding to that light spot's ID, together with the three infrared point light sources closest to it, in the lit state, and simultaneously turns off the other infrared point light sources.
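A minimal NumPy sketch of this selection rule, assuming spot coordinates in pixels and "closest" measured in the image plane (the text could equally be read as distance between the sources on the helmet itself):

```python
import numpy as np

def select_four(spots, ids, frame_shape):
    """Keep the spot nearest the picture centre plus the three spots
    closest to it; return the four source IDs to leave lit."""
    pts = np.asarray(spots, dtype=float)       # (N, 2) pixel coordinates
    h, w = frame_shape[:2]
    centre = np.array([w / 2.0, h / 2.0])
    central = int(np.argmin(np.linalg.norm(pts - centre, axis=1)))
    order = np.argsort(np.linalg.norm(pts - pts[central], axis=1))
    return [ids[i] for i in order[:4]]         # central spot sorts first (d == 0)
```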
Preferably, the processing unit controls the lighting and turning off of the infrared point light sources so as to ensure that there are four light spots on the imaging picture.
Preferably, when the leftmost light spot in the imaging picture disappears, the processing unit commands the nearest unlit infrared point light source to the infrared point light source corresponding to the rightmost light spot to be lit.
Preferably, when the rightmost light spot in the imaging picture disappears, the processing unit commands the nearest unlit infrared point light source to the infrared point light source corresponding to the leftmost light spot to be lit.
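The two mirror-image rules above can be folded into one routine. The sketch below is illustrative only: model_pts is assumed to map source IDs to 3-D positions on the helmet, helmet.set_lit is a hypothetical firmware call, and "nearest" is taken on the helmet model.

```python
import numpy as np

def replenish(spots, ids, lit, model_pts, helmet, lost_side):
    """When the leftmost (rightmost) spot leaves the picture, light the
    unlit source nearest the source behind the rightmost (leftmost) spot."""
    xs = [p[0] for p in spots]
    anchor = ids[int(np.argmax(xs))] if lost_side == "left" \
        else ids[int(np.argmin(xs))]
    unlit = [i for i in model_pts if i not in lit]
    if not unlit:
        return None                            # nothing left to light
    new = min(unlit, key=lambda i: np.linalg.norm(
        np.asarray(model_pts[i]) - np.asarray(model_pts[anchor])))
    helmet.set_lit(set(lit) | {new})           # hypothetical firmware call
    return new
```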
Preferably, the light spot corresponding to a newly lit infrared point light source is determined by comparing the image difference between the imaging pictures of the current frame and the previous frame; the ID corresponding to that light spot is the ID of the newly lit infrared point light source.
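One plausible form of this differencing step, using OpenCV connected components; the threshold is an assumed tuning value, not taken from the patent:

```python
import cv2
import numpy as np

def locate_new_spot(prev_frame, curr_frame, new_id, thresh=40):
    """The blob that brightened between consecutive frames is the spot of
    the newly lit source, so its ID is `new_id` by construction."""
    diff = cv2.subtract(curr_frame, prev_frame)   # only brightening survives
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n < 2:                                     # label 0 is the background
        return None
    blob = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return new_id, tuple(centroids[blob])
```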
Preferably, using the known history information of the previous frame, the processing unit applies a small translation to the light spots of the previous frame image so that they come into correspondence with the light spots of the current frame image; from this correspondence and the history information of the previous frame, the ID of each light spot on the current frame image that has a correspondence is determined.
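Read as nearest-neighbour matching under a small motion bound, this step might look like the following sketch; the 15-pixel gate is an assumed value, and any spot left unmatched would fall back to full ID identification:

```python
import numpy as np

def propagate_ids(prev_spots, prev_ids, curr_spots, max_shift=15.0):
    """Each current spot inherits the ID of the nearest previous spot,
    provided the displacement stays within `max_shift` pixels (the frame
    interval is short, so spots move only slightly between frames)."""
    prev = np.asarray(prev_spots, dtype=float)
    curr_ids = {}
    for j, p in enumerate(np.asarray(curr_spots, dtype=float)):
        d = np.linalg.norm(prev - p, axis=1)
        i = int(np.argmin(d))
        if d[i] <= max_shift:
            curr_ids[j] = prev_ids[i]
    return curr_ids
```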
Compared with the prior art, the present invention increases positioning efficiency by turning off the infrared point light sources that complicate the computation, and provides a screening method that uses the relative positions of the infrared point light sources on the imaging picture to select which sources to turn off. Lighting only the infrared point light sources directly facing the infrared camera both facilitates ID identification and prevents light spots from rapidly leaving the imaging picture, which would reduce the efficiency of spatial positioning. Using the computed position to judge the orientation of the virtual reality helmet makes it possible to quickly find the infrared point light sources directly facing the infrared camera. Searching from the infrared point light source closest to the center of the imaging picture, together with its three closest infrared point light sources, quickly yields four infrared point light sources directly facing the infrared camera. When the number of light spots on the imaging picture decreases, the processing unit lights the corresponding infrared point light sources to keep the number of light spots stable, which facilitates positioning and effectively prevents the number of light spots from falling below what the PnP algorithm requires, which would make positioning impossible. Applying a small translation to the light spots ensures that each light spot can be matched to the ID of its infrared point light source when the position of the virtual reality helmet changes.
Description of the drawings
The invention is further described below with reference to the accompanying drawings and embodiments, in which:
Fig. 1 is a schematic diagram of the principle of the virtual reality space positioning feature point screening method of the present invention;
Fig. 2 is a schematic diagram of the infrared point light source distribution in the virtual reality space positioning feature point screening method of the present invention;
Fig. 3 shows a first image captured by the infrared camera;
Fig. 4 shows an imaging picture presented after some infrared point light sources are turned off;
Fig. 5 shows a second image captured by the infrared camera;
Fig. 6 shows a third image captured by the infrared camera;
Fig. 7 shows a fourth image captured by the infrared camera;
Fig. 8 shows a fifth image captured by the infrared camera.
Specific embodiments
To overcome the poor accuracy and low efficiency of current virtual reality spatial positioning, the present invention provides a virtual reality space positioning feature point screening method that improves positioning accuracy and efficiency.
To provide a clearer understanding of the technical features, objects, and effects of the present invention, specific embodiments are described in detail below with reference to the accompanying drawings.
Refer to Figs. 1-2. The virtual reality space positioning feature point screening method of the present invention involves a virtual reality helmet 10, an infrared camera 20, and a processing unit 30, the infrared camera 20 being electrically connected to the processing unit 30. The virtual reality helmet 10 includes a front panel 11, and a plurality of infrared point light sources 13 are distributed over the front panel 11 and the upper, lower, left, and right side panels of the helmet. The number of infrared point light sources 13 must at least meet the minimum required for the PnP algorithm to run. The shape of the infrared point light sources 13 is not particularly limited. For illustration, we take the number of infrared point light sources 13 on the front panel 11 to be seven, the seven sources forming an approximate "W" shape. The infrared point light sources 13 can be lit or turned off as needed through the firmware interface of the virtual reality helmet 10. The infrared point light sources 13 form light points in the images captured by the infrared camera 20; owing to the band-pass characteristics of the infrared camera, only the infrared point light sources 13 can form spot projections on the image, while everything else forms a uniform background. Each infrared point light source 13 on the virtual reality helmet 10 thus forms a light spot on the image.
Refer to Figs. 3-4. Fig. 3 shows the imaging picture 41 of the infrared point light sources 13 captured by the infrared camera 20. With all infrared point light sources 13 ensured to be on, the processing unit 30 controls the infrared camera 20 to capture an image of the virtual reality helmet 10, and seven light spots appear on the imaging picture 41. The processing unit 30 first calculates the coordinates of each light spot from its position on the imaging picture 41, then performs ID identification on each light spot in the imaging picture 41, finds the ID corresponding to every light spot, and derives the six-degree-of-freedom information of the virtual reality helmet 10 using the PnP algorithm. From this six-degree-of-freedom information, the processing unit 30 judges the relative position of the virtual reality helmet 10 and the infrared camera 20, keeps at least four infrared point light sources 13 on the helmet that directly face the infrared camera 20 lit, and turns off the other infrared point light sources 13. The four lit sources are screened as follows: the processing unit 30 selects the light spot nearest the center of the imaging picture 41 as a central point, keeps the infrared point light source 13 corresponding to that spot's ID, together with the three infrared point light sources 13 closest to it, in the lit state, and simultaneously turns off the other infrared point light sources 13.
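Because the band-pass image is near-black everywhere except at the point sources, the spot coordinates mentioned above can be recovered with simple thresholding. A minimal sketch, with the threshold and minimum blob area as assumed values:

```python
import cv2

def detect_spots(frame, thresh=200, min_area=3):
    """Return the centroids of infrared spots in a grayscale frame:
    threshold, label connected components, discard tiny noise blobs."""
    _, mask = cv2.threshold(frame, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)   # label 0 = background
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```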
At this point only four light spots exist on the imaging picture 41 of the next frame, and the processing unit 30 can track each light spot and assign its corresponding ID. The specific method is as follows: in spatial positioning the sampling time per frame is sufficiently small, typically 30 ms, so the position difference between each light spot in the previous frame and the corresponding light spot in the current frame is very small. Using the known history information of the previous frame, the processing unit 30 applies a small translation to the light spots of the previous frame image so that they come into correspondence with the light spots of the current frame image; from this correspondence and the history information of the previous frame, the ID of each light spot on the current frame image that has a correspondence can be determined. With the IDs of all light spots known, the processing unit 30 obtains the spatial position of the virtual reality helmet 10 by directly calling the PnP algorithm.
Refer to Figs. 5-8. When the number of light spots on the imaging picture 41 decreases because the virtual reality helmet 10 moves, the processing unit 30 controls the virtual reality helmet 10 to light the corresponding infrared point light sources 13 as replacements, keeping the number of light spots on the imaging picture 41 at four. Specifically, when the leftmost light spot in the imaging picture 41 disappears due to the motion of the virtual reality helmet 10, the processing unit 30 commands the nearest unlit infrared point light source 13 to the infrared point light source 13 corresponding to the rightmost light spot to be lit; when the rightmost light spot in the imaging picture 41 disappears due to the motion of the virtual reality helmet 10, the processing unit 30 commands the nearest unlit infrared point light source 13 to the infrared point light source 13 corresponding to the leftmost light spot to be lit. Four light spots are thereby kept in the imaging picture 41, ensuring that the PnP algorithm can run without trouble. For a newly lit infrared point light source 13, the corresponding light spot is determined by comparing the image difference between the current frame and the previous frame; the ID corresponding to that light spot is the ID of the newly lit infrared point light source 13.
After ID identification is completed, the processing unit 30 calls the PnP algorithm to obtain the spatial position of the helmet. The PnP algorithm belongs to the prior art and is not described further in the present invention.
Compared with the prior art, the present invention increases positioning efficiency by turning off the infrared point light sources 13 that complicate the computation, and provides a screening method that uses the relative positions of the infrared point light sources 13 on the imaging picture 41 to select which sources to turn off. Lighting only the infrared point light sources 13 directly facing the infrared camera 20 both facilitates ID identification and prevents light spots from rapidly leaving the imaging picture 41, which would reduce the efficiency of spatial positioning. Using the computed position to judge the orientation of the virtual reality helmet 10 makes it possible to quickly find the infrared point light sources 13 directly facing the infrared camera 20. Searching from the infrared point light source 13 closest to the center of the imaging picture 41, together with its three closest infrared point light sources 13, quickly yields four infrared point light sources 13 directly facing the infrared camera 20. When the number of light spots on the imaging picture 41 decreases, the processing unit 30 lights the corresponding infrared point light sources 13 to keep the number of light spots stable, which facilitates positioning and effectively prevents the number of light spots from falling below what the PnP algorithm requires, which would make positioning impossible. Applying a small translation to the light spots ensures that each light spot can be matched to the ID of its infrared point light source 13 when the position of the virtual reality helmet 10 changes.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Under the teaching of the present invention, those of ordinary skill in the art may make many further variations without departing from the spirit of the invention and the scope of the claims, all of which fall within the protection of the present invention.

Claims (7)

1. A virtual reality space positioning feature point screening method, characterized by comprising the following steps:
S1: With all infrared point light sources ensured to be on, a processing unit controls an infrared camera to capture an image of a virtual reality helmet and calculates the coordinates of the light spot imaged by each infrared point light source;
S2: The processing unit performs ID identification on each light spot in the imaging picture and finds the ID corresponding to every light spot;
S3: The processing unit calculates the six-degree-of-freedom information of the virtual reality helmet and, from the orientation information of the virtual reality helmet, finds at least four infrared point light sources directly facing the infrared camera;
S4: The processing unit keeps the at least four infrared point light sources directly facing the infrared camera in the lit state and turns off the remaining infrared point light sources; the processing unit then controls the infrared camera to capture an image of the virtual reality helmet and performs positioning on it using a PnP algorithm.
2. The virtual reality space positioning feature point screening method according to claim 1, characterized in that the processing unit selects the light spot nearest the center of the imaging picture as a central point, keeps the infrared point light source corresponding to that light spot's ID, together with the three infrared point light sources closest to it, in the lit state, and simultaneously turns off the other infrared point light sources.
3. The virtual reality space positioning feature point screening method according to claim 1, characterized in that the processing unit controls the lighting and turning off of the infrared point light sources so as to ensure that there are four light spots on the imaging picture.
4. The virtual reality space positioning feature point screening method according to claim 3, characterized in that when the leftmost light spot in the imaging picture disappears, the processing unit commands the nearest unlit infrared point light source to the infrared point light source corresponding to the rightmost light spot to be lit.
5. The virtual reality space positioning feature point screening method according to claim 3, characterized in that when the rightmost light spot in the imaging picture disappears, the processing unit commands the nearest unlit infrared point light source to the infrared point light source corresponding to the leftmost light spot to be lit.
6. The virtual reality space positioning feature point screening method according to any one of claims 4-5, characterized in that the light spot corresponding to the newly lit infrared point light source is determined by comparing the image difference between the imaging pictures of the current frame and the previous frame, the ID corresponding to that light spot being the ID of the newly lit infrared point light source.
7. The virtual reality space positioning feature point screening method according to any one of claims 1-5, characterized in that, using the known history information of the previous frame, the processing unit applies a small translation to the light spots of the previous frame image so that they come into correspondence with the light spots of the current frame image, and from this correspondence and the history information of the previous frame determines the ID of each light spot on the current frame image that has a correspondence.
CN201611200542.9A 2016-12-22 2016-12-22 Virtual reality space positioning feature point screening method Expired - Fee Related CN106599930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611200542.9A CN106599930B (en) 2016-12-22 2016-12-22 Virtual reality space positioning feature point screening method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611200542.9A CN106599930B (en) 2016-12-22 2016-12-22 Virtual reality space positioning feature point screening method

Publications (2)

Publication Number Publication Date
CN106599930A true CN106599930A (en) 2017-04-26
CN106599930B CN106599930B (en) 2021-06-11

Family

ID=58602663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611200542.9A Expired - Fee Related CN106599930B (en) 2016-12-22 2016-12-22 Virtual reality space positioning feature point screening method

Country Status (1)

Country Link
CN (1) CN106599930B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107390952A (en) * 2017-07-04 2017-11-24 深圳市虚拟现实科技有限公司 Virtual reality handle characteristic point space-location method
WO2018113433A1 (en) * 2016-12-22 2018-06-28 深圳市虚拟现实技术有限公司 Method for screening and spatially locating virtual reality feature points
CN108414195A (en) * 2018-01-17 2018-08-17 深圳市绚视科技有限公司 Detection method, device, system and the storage device of light source emitter to be measured

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016132371A1 (en) * 2015-02-22 2016-08-25 Technion Research & Development Foundation Limited Gesture recognition using multi-sensory data
CN106019265A (en) * 2016-05-27 2016-10-12 北京小鸟看看科技有限公司 Multi-target positioning method and system
CN106152937A (en) * 2015-03-31 2016-11-23 深圳超多维光电子有限公司 Space positioning apparatus, system and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016132371A1 (en) * 2015-02-22 2016-08-25 Technion Research & Development Foundation Limited Gesture recognition using multi-sensory data
CN106152937A (en) * 2015-03-31 2016-11-23 深圳超多维光电子有限公司 Space positioning apparatus, system and method
CN106019265A (en) * 2016-05-27 2016-10-12 北京小鸟看看科技有限公司 Multi-target positioning method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAO, Q. et al.: "Eye gazing direction inspection based on image processing technique", Optical Design and Testing II, Pts 1 and 2 *
刘圭圭 (LIU Guigui) et al.: "Application of binocular vision in the positioning system of robots for assisting the elderly and the disabled", Microcomputer & Its Applications *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018113433A1 (en) * 2016-12-22 2018-06-28 深圳市虚拟现实技术有限公司 Method for screening and spatially locating virtual reality feature points
CN107390952A (en) * 2017-07-04 2017-11-24 深圳市虚拟现实科技有限公司 Virtual reality handle characteristic point space-location method
CN108414195A (en) * 2018-01-17 2018-08-17 深圳市绚视科技有限公司 Detection method, device, system and the storage device of light source emitter to be measured

Also Published As

Publication number Publication date
CN106599930B (en) 2021-06-11

Similar Documents

Publication Publication Date Title
JP7004017B2 (en) Object tracking system, object tracking method, program
US9317134B2 (en) Proximity object tracker
US10990830B2 (en) Auto-calibration of tracking systems
JP7051315B2 (en) Methods, systems, and non-temporary computer-readable recording media for measuring ball rotation.
US9164723B2 (en) Virtual lens-rendering for augmented reality lens
CN106599930A (en) Virtual reality space locating feature point selection method
US20140253513A1 (en) Operation detection device, operation detection method and projector
US10317177B2 (en) Automatic dartboard scoring system
CN106599929A (en) Virtual reality feature point screening spatial positioning method
CN110221732B (en) Touch projection system and touch action identification method
US20150286866A1 (en) Apparatus and method for analyzing trajectory
CN106774992A (en) The point recognition methods of virtual reality space location feature
US10789729B2 (en) System and method(s) for determining projectile impact location
CN105912145A (en) Laser pen mouse system and image positioning method thereof
US11270456B2 (en) Spatial positioning method, spatial positioning device, spatial positioning system and computer readable medium
CN105759963A (en) Method for positioning motion trail of human hand in virtual space based on relative position relation
CN103903282A (en) Target tracking method based on LabVIEW
CN106648147A (en) Space positioning method and system for virtual reality characteristic points
JP2021060868A5 (en)
CN108594995A (en) A kind of electronic device method and electronic equipment based on gesture identification
EP2459288A2 (en) Automated enhancements for billiards and the like
CN111126178B (en) Continuous distance estimation method for infrared-visible light binocular pedestrian body multi-component fusion
CN104667527A (en) Method and system for recognizing different shooting points on screen by infrared laser
KR102041279B1 (en) system, method for providing user interface of virtual interactive contents and storage of computer program therefor
KR102437606B1 (en) Augmentation Information Simulator for Providing Enhanced UI/UX of Realistic HUD

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210611