CN106599929A - Virtual reality feature point screening spatial positioning method - Google Patents

Virtual reality feature point screening spatial positioning method

Info

Publication number
CN106599929A
Authority
CN
China
Prior art keywords
infrared
image
virtual reality
processing unit
light speckle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611199871.6A
Other languages
Chinese (zh)
Other versions
CN106599929B (en)
Inventor
李宗乘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Virtual Reality Technology Co Ltd
Original Assignee
Shenzhen Virtual Reality Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Virtual Reality Technology Co Ltd
Priority to CN201611199871.6A (patent CN106599929B)
Publication of CN106599929A
Priority to PCT/CN2017/109794 (WO2018113433A1)
Application granted
Publication of CN106599929B
Status: Active
Anticipated expiration

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a virtual reality feature point screening spatial positioning method, which comprises the steps of: training a neural network with preprocessed pictures; keeping the infrared point light sources of a virtual reality helmet in an on state and photographing them with an infrared camera; preprocessing the picture to obtain a preprocessed image; and inputting the preprocessed image into the neural network to obtain the ID of the infrared point light source corresponding to each light spot. Compared with the prior art, the method introduces a neural network algorithm into virtual reality spatial positioning and provides a precise and efficient way of determining light spot IDs. By preprocessing both the training images and the test images, the method prevents picture variability from degrading recognition accuracy: the varied pictures are normalised, which greatly improves the success rate and precision of ID recognition.

Description

Virtual reality feature point screening spatial positioning method
Technical field
The present invention relates to the field of virtual reality, and more particularly to a virtual reality feature point screening spatial positioning method.
Background technology
Spatial positioning is typically performed using optical or ultrasonic schemes, deriving the spatial position of the object under measurement by establishing a model. A typical virtual reality spatial positioning system determines the spatial position of an object by means of infrared points and a light-sensing camera that receives them: the infrared points are placed on the front end of the near-eye display device, and during positioning the light-sensing camera captures the positions of the infrared points, from which the physical coordinates of the user are derived. If the correspondence between at least three light sources and their projections is known, the spatial position of the helmet can be obtained by invoking a PnP algorithm. The key to realising this process is determining the light source ID (identity, i.e. serial number) corresponding to each projection. In current virtual reality spatial positioning, inaccurate picture recognition at certain distances and in certain directions makes determining the light source ID corresponding to a projection both slow and unreliable, which in turn impairs positioning accuracy and efficiency.
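By way of illustration only, and not as part of the original disclosure: the PnP step can be realised with OpenCV's solvePnP, as in the minimal Python sketch below (the light source coordinates and camera intrinsics are invented example values).

    import numpy as np
    import cv2

    # 3D positions (metres, helmet coordinates) of the identified infrared
    # point light sources, in the same order as their image projections.
    object_points = np.array([[0.00, 0.03, 0.0],
                              [0.02, 0.00, 0.0],
                              [0.04, 0.03, 0.0],
                              [0.06, 0.00, 0.0]])

    # 2D centroids (pixels) of the corresponding light spots on the image.
    image_points = np.array([[310.2, 240.9],
                             [352.7, 301.4],
                             [398.1, 239.5],
                             [441.6, 300.8]])

    camera_matrix = np.array([[800.0, 0.0, 320.0],   # assumed intrinsics
                              [0.0, 800.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(5)                        # assume no distortion

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    if ok:
        print("helmet rotation (Rodrigues vector):", rvec.ravel())
        print("helmet translation:", tvec.ravel())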
Summary of the invention
To overcome the defects of poor accuracy and low efficiency in current virtual reality spatial positioning, the present invention provides a virtual reality feature point screening spatial positioning method capable of improving positioning accuracy and efficiency.
The technical solution adopted by the present invention to solve the technical problem is to provide a virtual reality feature point screening spatial positioning method comprising the following steps (the overall loop is sketched in code after step S4):
S1: with all infrared point light sources kept on, a processing unit controls an infrared camera to capture an image of the virtual reality helmet and calculates the coordinates of the light spot of each infrared point light source in the image;
S2: the processing unit performs ID recognition on each light spot in the imaging picture and finds the ID corresponding to every light spot;
S3: the processing unit keeps at least 4 infrared point light sources of corresponding IDs in an illuminated state and turns off the remaining infrared point light sources; the processing unit controls the infrared camera to capture an image of the virtual reality helmet and performs positioning computation on it using a PnP algorithm;
S4: when the number of light spots on the imaging picture no longer satisfies the number required by the PnP algorithm, steps S1 to S3 are re-executed.
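The following skeleton sketches the S1 to S3 loop with re-execution per S4 (illustrative only; camera, helmet and the four callables are hypothetical stand-ins for the components described above, not names taken from the patent):

    def positioning_loop(camera, helmet, detect_spots, identify_ids,
                         select_subset, solve_pnp, min_spots=4):
        """Skeleton of steps S1-S4; every argument is a caller-supplied
        component (hypothetical names, not from the patent)."""
        while True:
            helmet.set_lights(on=helmet.all_ids)      # S1: every source on
            frame = camera.capture()
            spots = detect_spots(frame)               # light spot coordinates
            ids = identify_ids(spots, frame)          # S2: find each spot's ID
            keep = select_subset(spots, ids, frame)   # screen down to >= 4 IDs
            helmet.set_lights(on=keep)                # S3: switch the rest off
            while True:
                spots = detect_spots(camera.capture())
                if len(spots) < min_spots:            # S4: too few spots remain
                    break                             # re-execute S1 to S3
                yield solve_pnp(spots, keep)          # PnP positioning result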
Preferably, the imaging picture is rectangular, the long side length of the rectangle being d. The processing unit calculates the distance between every pair of light spots and selects the maximum distance d'. When d' > d/2, the processing unit finds the light spot nearest the centre of the imaging picture, keeps the infrared point light source corresponding to that light spot's ID and the 3 infrared point light sources closest to it in an illuminated state, and simultaneously turns off the other infrared point light sources.
Preferably, the imaging picture is rectangular, the long side length of the rectangle being d. The processing unit calculates the distance between every pair of light spots and selects the maximum distance d'. When d' < d/2, the processing unit finds, among the infrared point light sources corresponding to the light spots, at least 4 whose relative positions are outermost, keeps them in an illuminated state, and turns off the other infrared point light sources.
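Continuing the illustrative Python sketches, the two screening rules above might be realised as follows (interpretive choices, not the patent's wording: proximity is measured between spots in the image, and the convex hull stands in for 'relative position outermost'):

    import numpy as np
    from scipy.spatial import ConvexHull

    def select_subset(spots, ids, frame):
        """Pick the light-source IDs to keep lit (both rules above)."""
        pts = np.asarray(spots, dtype=float)
        height, width = frame.shape[:2]
        d = max(width, height)                       # long side length d
        diffs = pts[:, None, :] - pts[None, :, :]
        d_prime = np.sqrt((diffs ** 2).sum(axis=-1)).max()  # max distance d'
        if d_prime > d / 2:
            # Spots spread widely: keep the spot nearest the picture centre
            # plus its 3 nearest neighbours.
            centre = np.array([width / 2.0, height / 2.0])
            c = np.argmin(np.linalg.norm(pts - centre, axis=1))
            keep = np.argsort(np.linalg.norm(pts - pts[c], axis=1))[:4]
        else:
            # Spots clustered: keep at least 4 outermost spots.
            hull = ConvexHull(pts).vertices
            keep = hull if len(hull) >= 4 else range(len(pts))
        return [ids[i] for i in keep]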
Preferably, the processing unit, using the known history information of the previous frame, applies a small translation to the light spots of the previous frame image so that the light spots of the previous frame image come into correspondence with the light spots of the current frame image, and determines, from this correspondence and the history information of the previous frame, the ID corresponding to each light spot on the current frame image that has a correspondence.
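A minimal sketch of this frame-to-frame ID propagation, assuming nearest-neighbour matching within a small pixel radius implements the 'small translation' correspondence (the 10-pixel threshold is an invented example value):

    import numpy as np

    def propagate_ids(prev_spots, prev_ids, curr_spots, max_shift=10.0):
        """Carry each previous-frame spot ID over to the nearest
        current-frame spot that lies within max_shift pixels."""
        prev = np.asarray(prev_spots, dtype=float)
        curr = np.asarray(curr_spots, dtype=float)
        curr_ids = [None] * len(curr)
        if not len(curr):
            return curr_ids
        for p, pid in zip(prev, prev_ids):
            dists = np.linalg.norm(curr - p, axis=1)
            j = int(np.argmin(dists))
            if dists[j] < max_shift and curr_ids[j] is None:
                curr_ids[j] = pid        # correspondence found: reuse the ID
        return curr_ids                  # None entries need fresh recognition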
Compared with the prior art, the present invention increases positioning efficiency by turning off the infrared point light sources that would complicate the computation, and provides a screening method that uses the relative positions of the infrared point light sources on the imaging picture to select which infrared point light sources to turn off. Using the comparison between the maximum distance among the light spots and the long side length of the imaging picture to decide which corresponding infrared point light sources to keep lit or turn off is simple and highly practicable. When the maximum distance among the light spots is greater than half the long side length of the imaging picture, the infrared point light sources corresponding to 4 light spots near the middle are kept lit; this supports the PnP computation well while ensuring that the light spots used for positioning do not quickly move out of the imaging picture, preventing repeated ID recognition from consuming considerable time. When the maximum distance among the light spots is less than half the long side length of the imaging picture, the infrared point light sources corresponding to at least 4 outer light spots are kept lit; this likewise supports the PnP computation while ensuring that the distances between the light spots are large enough that pixel-level effects do not introduce significant errors. Applying a small translation to the light spots so that the current light spots correspond to the light spots of the previous frame image avoids repeated ID recognition and saves a substantial amount of time.
Description of the drawings
The invention is further described below in conjunction with the drawings and embodiments, in which:
Fig. 1 is a schematic diagram of the principle of the virtual reality feature point screening spatial positioning method of the present invention;
Fig. 2 is a schematic diagram of the distribution of the infrared point light sources in the virtual reality feature point screening spatial positioning method of the present invention;
Fig. 3 shows a first example of an image captured by the infrared camera;
Fig. 4 shows a first example of the imaging picture presented after some infrared point light sources are turned off;
Fig. 5 shows a second example of an image captured by the infrared camera;
Fig. 6 shows a second example of the imaging picture presented after some infrared point light sources are turned off.
Specific embodiments
To overcome the defects of poor accuracy and low efficiency in current virtual reality spatial positioning, the present invention provides a virtual reality feature point screening spatial positioning method capable of improving positioning accuracy and efficiency.
To make the technical features, objects and effects of the present invention more clearly understood, specific embodiments of the present invention are now described in detail with reference to the accompanying drawings.
Referring to Figs. 1-2, the virtual reality feature point screening spatial positioning method of the present invention involves a virtual reality helmet 10, an infrared camera 20 and a processing unit 30, the infrared camera 20 being electrically connected to the processing unit 30. The virtual reality helmet 10 includes a front panel 11, and a plurality of infrared point light sources 13 are distributed over the front panel 11 and the four side panels (upper, lower, left and right) of the virtual reality helmet 10. The number of infrared point light sources 13 must at least meet the minimum required for the PnP algorithm to run; the shape of the infrared point light sources 13 is not particularly limited. For illustration, we take the number of infrared point light sources 13 on the front panel 11 to be 7, the 7 infrared point light sources forming an approximate 'W' shape. The plurality of infrared point light sources 13 can be lit or turned off as needed through the firmware interface of the virtual reality helmet 10. The infrared point light sources 13 on the virtual reality helmet 10 form light spots on the image captured by the infrared camera 20: owing to the band-pass characteristics of the infrared camera, only the infrared point light sources 13 form spot projections on the image, while everything else forms a uniform background.
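For illustration only, such a helmet could be modelled in software as below, matching the interface assumed in the loop sketch earlier (the IDs, 3D coordinates and firmware call are assumptions for the sketch, not details disclosed by the patent):

    def firmware_set_lights(lit_ids):
        # Stub for the helmet's firmware on/off interface (hypothetical).
        print("sources lit:", sorted(lit_ids))

    class HelmetModel:
        # ID -> 3D position (metres, helmet coordinates) of each infrared
        # point light source; seven front-panel sources in a rough 'W'.
        LEDS = {
            1: (-0.06, 0.03, 0.0), 2: (-0.04, -0.01, 0.0),
            3: (-0.02, 0.03, 0.0), 4: (0.00, -0.01, 0.0),
            5: (0.02, 0.03, 0.0), 6: (0.04, -0.01, 0.0),
            7: (0.06, 0.03, 0.0),
        }

        @property
        def all_ids(self):
            return list(self.LEDS)

        def set_lights(self, on):
            # Light the listed IDs and switch every other source off.
            firmware_set_lights(set(on))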
Referring to Figs. 3-4, Fig. 3 shows the imaging picture 41 of the infrared point light sources 13 captured by the infrared camera 20; the imaging picture 41 is rectangular, with long side length d. With all infrared point light sources kept on, the processing unit 30 controls the infrared camera 20 to capture an image of the virtual reality helmet 10, and seven light spots appear on the imaging picture 41. The processing unit 30 calculates the coordinates of each light spot from its position on the imaging picture 41, calculates the distance between every pair of light spots, and selects the maximum distance d'. When d' > d/2, the light spots occupy a large portion of the imaging picture 41. Performing ID (identity, i.e. serial number) recognition and the PnP computation on every light spot in turn would take considerable time, and a subset of the spots suffices for the needs of the PnP algorithm. The processing unit 30 therefore first performs ID recognition on each light spot in the imaging picture 41 and finds the ID corresponding to every light spot, then finds the light spot nearest the centre of the imaging picture 41 as the central point, keeps the infrared point light source 13 corresponding to that spot's ID and the 3 infrared point light sources 13 closest to that infrared point light source in an illuminated state, and simultaneously turns off the other infrared point light sources 13. The imaging picture 41 of the next frame then contains only 4 light spots, and the processing unit 30 can track each light spot and label its corresponding ID. The concrete method is as follows: during spatial positioning, since the sampling time of each frame is sufficiently small, typically 30 ms, the position of each light spot on the previous frame generally differs very little from that of the corresponding light spot on the current frame; the processing unit 30, using the known history information of the previous frame, applies a small translation to the light spots of the previous frame image so that they come into correspondence with the light spots of the current frame image, and from this correspondence and the history information of the previous frame determines the ID corresponding to each light spot on the current frame image that has a correspondence. With the IDs of all light spots known, the processing unit 30 obtains the spatial position of the virtual reality helmet 10 by directly invoking the PnP algorithm. When movement of the virtual reality helmet 10 leaves fewer light spots on the imaging picture 41 than the PnP algorithm requires, the above method is re-executed to select new infrared point light sources 13 to be lit.
Referring to Figs. 5-6, in Fig. 5 there are seven light spots on the imaging picture 41. The processing unit 30 calculates the coordinates of each light spot from its position on the imaging picture 41, calculates the distance between every pair of light spots, and selects the maximum distance d'. When d' < d/2, the light spots occupy a small portion of the imaging picture 41. Performing ID recognition and the PnP computation on every light spot in turn would take considerable time, and a subset of the spots suffices for the needs of the PnP algorithm. The processing unit 30 therefore first performs ID recognition on each light spot in the image and finds the ID corresponding to every light spot, then finds, among the infrared point light sources 13 corresponding to these IDs, at least 4 whose relative positions are outermost, keeps these infrared point light sources 13 in an illuminated state, and turns off the other infrared point light sources 13. This ensures that the light spots on the imaging picture 41 are not so crowded as to affect measurement accuracy. The processing unit 30 obtains the spatial position of the virtual reality helmet 10 by directly invoking the PnP algorithm. When movement of the virtual reality helmet 10 leaves fewer light spots on the imaging picture 41 than the PnP algorithm requires, the above method is re-executed to select new infrared point light sources 13 to be lit.
After ID recognition is complete, the processing unit 30 obtains the spatial position of the helmet by invoking the PnP algorithm. The PnP algorithm belongs to the prior art and is not described further here.
Compared with the prior art, the present invention increases positioning efficiency by turning off the infrared point light sources 13 that would complicate the computation, and provides a screening method that uses the relative positions of the infrared point light sources 13 on the imaging picture 41 to select which infrared point light sources 13 to turn off. Using the comparison between the maximum distance among the light spots and the long side length of the imaging picture 41 to decide which corresponding infrared point light sources 13 to keep lit or turn off is simple and highly practicable. When the maximum distance among the light spots is greater than half the long side length of the imaging picture 41, the infrared point light sources 13 corresponding to 4 light spots near the middle are kept lit; this supports the PnP computation well while ensuring that the light spots used for positioning do not quickly move out of the imaging picture 41, preventing repeated ID recognition from consuming considerable time. When the maximum distance among the light spots is less than half the long side length of the imaging picture 41, the infrared point light sources 13 corresponding to at least 4 outer light spots are kept lit; this likewise supports the PnP computation while ensuring that the distances between the light spots are large enough that pixel-level effects do not introduce significant errors. Applying a small translation to the light spots so that the current light spots correspond to the light spots of the previous frame image avoids repeated ID recognition and saves a substantial amount of time.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Under the teaching of the present invention, those of ordinary skill in the art may devise many further forms without departing from the concept of the invention and the scope of the claims, all of which fall within the protection of the present invention.

Claims (4)

1. A virtual reality feature point screening spatial positioning method, characterised by comprising the following steps:
S1: with all infrared point light sources kept on, a processing unit controls an infrared camera to capture an image of a virtual reality helmet and calculates the coordinates of the light spot of each infrared point light source in the image;
S2: the processing unit performs ID recognition on each light spot in the imaging picture and finds the ID corresponding to every light spot;
S3: the processing unit keeps at least 4 infrared point light sources of corresponding IDs in an illuminated state and turns off the remaining infrared point light sources; the processing unit controls the infrared camera to capture an image of the virtual reality helmet and performs positioning computation on it using a PnP algorithm;
S4: when the number of light spots on the imaging picture does not satisfy the number required by the PnP algorithm, steps S1 to S3 are re-executed.
2. The virtual reality feature point screening spatial positioning method according to claim 1, characterised in that the imaging picture is rectangular, the long side length of the rectangle of the imaging picture being d; the processing unit calculates the distance between every pair of light spots and selects the maximum distance d'; when d' > d/2, the processing unit finds the light spot nearest the centre of the imaging picture, keeps the infrared point light source corresponding to that light spot's ID and the 3 infrared point light sources closest to that infrared point light source in an illuminated state, and simultaneously turns off the other infrared point light sources.
3. The virtual reality feature point screening spatial positioning method according to claim 1, characterised in that the imaging picture is rectangular, the long side length of the rectangle of the imaging picture being d; the processing unit calculates the distance between every pair of light spots and selects the maximum distance d'; when d' < d/2, the processing unit finds, among the infrared point light sources corresponding to the light spots, at least 4 whose relative positions are outermost, keeps them in an illuminated state, and turns off the other infrared point light sources.
4. The virtual reality feature point screening spatial positioning method according to any one of claims 1-3, characterised in that the processing unit, using the known history information of the previous frame, applies a small translation to the light spots of the previous frame image so that the light spots of the previous frame image come into correspondence with the light spots of the current frame image, and determines, from this correspondence and the history information of the previous frame, the ID corresponding to each light spot on the current frame image that has a correspondence.
CN201611199871.6A 2016-12-22 2016-12-22 Virtual reality feature point screening space positioning method Active CN106599929B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201611199871.6A CN106599929B (en) 2016-12-22 2016-12-22 Virtual reality feature point screening space positioning method
PCT/CN2017/109794 WO2018113433A1 (en) 2016-12-22 2017-11-07 Method for screening and spatially locating virtual reality feature points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611199871.6A CN106599929B (en) 2016-12-22 2016-12-22 Virtual reality feature point screening space positioning method

Publications (2)

Publication Number Publication Date
CN106599929A (en) 2017-04-26
CN106599929B CN106599929B (en) 2021-03-19

Family

ID=58601028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611199871.6A Active CN106599929B (en) 2016-12-22 2016-12-22 Virtual reality feature point screening space positioning method

Country Status (2)

Country Link
CN (1) CN106599929B (en)
WO (1) WO2018113433A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107562189A (en) * 2017-07-21 2018-01-09 广州励丰文化科技股份有限公司 A kind of space-location method and service equipment based on binocular camera
WO2018113433A1 (en) * 2016-12-22 2018-06-28 深圳市虚拟现实技术有限公司 Method for screening and spatially locating virtual reality feature points
CN110555879A (en) * 2018-05-31 2019-12-10 京东方科技集团股份有限公司 Space positioning method, device, system and computer readable medium thereof
CN114115517A (en) * 2020-08-25 2022-03-01 宏达国际电子股份有限公司 Object tracking method and object tracking device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113739803B (en) * 2021-08-30 2023-11-21 中国电子科技集团公司第五十四研究所 Indoor and underground space positioning method based on infrared datum points

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016132371A1 (en) * 2015-02-22 2016-08-25 Technion Research & Development Foundation Limited Gesture recognition using multi-sensory data
CN106019265A (en) * 2016-05-27 2016-10-12 北京小鸟看看科技有限公司 Multi-target positioning method and system
CN106152937A (en) * 2015-03-31 2016-11-23 深圳超多维光电子有限公司 Space positioning apparatus, system and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599929B (en) * 2016-12-22 2021-03-19 深圳市虚拟现实技术有限公司 Virtual reality feature point screening space positioning method
CN106599930B (en) * 2016-12-22 2021-06-11 深圳市虚拟现实技术有限公司 Virtual reality space positioning feature point screening method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016132371A1 (en) * 2015-02-22 2016-08-25 Technion Research & Development Foundation Limited Gesture recognition using multi-sensory data
CN106152937A (en) * 2015-03-31 2016-11-23 深圳超多维光电子有限公司 Space positioning apparatus, system and method
CN106019265A (en) * 2016-05-27 2016-10-12 北京小鸟看看科技有限公司 Multi-target positioning method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAO, Q. et al.: "Eye gazing direction inspection based on image processing technique", Optical Design and Testing II, Pts 1 and 2 *
刘圭圭 et al.: "Application of binocular vision in the positioning *** of robots for assisting the elderly and disabled", 《微型机与应用》 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018113433A1 (en) * 2016-12-22 2018-06-28 深圳市虚拟现实技术有限公司 Method for screening and spatially locating virtual reality feature points
CN107562189A (en) * 2017-07-21 2018-01-09 广州励丰文化科技股份有限公司 A kind of space-location method and service equipment based on binocular camera
CN110555879A (en) * 2018-05-31 2019-12-10 京东方科技集团股份有限公司 Space positioning method, device, system and computer readable medium thereof
US11270456B2 (en) 2018-05-31 2022-03-08 Beijing Boe Optoelectronics Technology Co., Ltd. Spatial positioning method, spatial positioning device, spatial positioning system and computer readable medium
CN110555879B (en) * 2018-05-31 2023-09-08 京东方科技集团股份有限公司 Space positioning method, device, system and computer readable medium thereof
CN114115517A (en) * 2020-08-25 2022-03-01 宏达国际电子股份有限公司 Object tracking method and object tracking device
CN114115517B (en) * 2020-08-25 2024-04-02 宏达国际电子股份有限公司 Object tracking method and object tracking device

Also Published As

Publication number Publication date
WO2018113433A1 (en) 2018-06-28
CN106599929B (en) 2021-03-19

Similar Documents

Publication Publication Date Title
CN106599929A (en) Virtual reality feature point screening spatial positioning method
CN104537292B (en) The method and system detected for the electronic deception of biological characteristic validation
KR101169574B1 (en) Measurement apparatus for movement information of moving object
CN107633165B (en) 3D face identity authentication method and device
WO2011142495A1 (en) Apparatus and method for iris recognition using multiple iris templates
WO2018153311A1 (en) Virtual reality scene-based business verification method and device
JP7051315B2 (en) Methods, systems, and non-temporary computer-readable recording media for measuring ball rotation.
JP2019506694A (en) Biometric analysis system and method
CN111344703B (en) User authentication device and method based on iris recognition
CN106845414A (en) For the method and system of the quality metric of biological characteristic validation
CN110909634A (en) Visible light and double infrared combined rapid in vivo detection method
US20230041573A1 (en) Image processing method and apparatus, computer device and storage medium
CN111582238A (en) Living body detection method and device applied to face shielding scene
US20210256244A1 (en) Method for authentication or identification of an individual
CN104954750A (en) Data processing method and device for billiard system
CN106774992A (en) The point recognition methods of virtual reality space location feature
JP2020129175A (en) Three-dimensional information generation device, biometric authentication device, and three-dimensional image generation device
CN106648147A (en) Space positioning method and system for virtual reality characteristic points
CN108537103A (en) The living body faces detection method and its equipment measured based on pupil axle
CN108197549A (en) Face identification method and terminal based on 3D imagings
JP4659722B2 (en) Human body specific area extraction / determination device, human body specific area extraction / determination method, human body specific area extraction / determination program
CN106708257A (en) Game interaction method and device
CN106599930A (en) Virtual reality space locating feature point selection method
CN104604219B (en) Image processing apparatus and image processing method
WO2023218692A1 (en) Display control device, method, and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant