CN105892638A - Virtual reality interaction method, device and system - Google Patents

Virtual reality interaction method, device and system

Info

Publication number
CN105892638A
CN105892638A (application CN201510870209.8A)
Authority
CN
China
Prior art keywords
infrared
fixed point
information
infrared image
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510870209.8A
Other languages
Chinese (zh)
Inventor
张超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Leshi Zhixin Electronic Technology Tianjin Co Ltd
Original Assignee
Leshi Zhixin Electronic Technology Tianjin Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Leshi Zhixin Electronic Technology Tianjin Co Ltd filed Critical Leshi Zhixin Electronic Technology Tianjin Co Ltd
Priority to CN201510870209.8A priority Critical patent/CN105892638A/en
Publication of CN105892638A publication Critical patent/CN105892638A/en
Priority to PCT/CN2016/096983 priority patent/WO2017092432A1/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/014 — Hand-worn input/output arrangements, e.g. data gloves
    • G06F 3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 2203/00 — Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/01 — Indexing scheme relating to G06F 3/01
    • G06F 2203/012 — Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The invention provides a virtual reality interaction method, device, and system. The method comprises the following steps: collecting at least two first infrared images of a calibration object with a first infrared camera and at least two second infrared images of the calibration object with a second infrared camera; extracting the characteristic information corresponding to each calibration point in each first infrared image and in each second infrared image; using that characteristic information to determine the three-dimensional motion trajectory information of each calibration point; and performing virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point. Because two infrared cameras simultaneously collect the infrared images of the calibration object used for interaction, the method solves the prior-art problem that virtual reality interaction based on a three-dimensional depth camera is difficult to realize in certain scenarios, a problem that created an urgent need for an alternative virtual reality technology.

Description

Virtual reality interaction method, device, and system
Technical field
The embodiments of the present invention relate to the technical field of virtual reality, and in particular to a virtual reality interaction method, device, and system.
Background art
With the development of society, progress across industries has contributed greatly to improving quality of life. The emergence of virtual reality (VR) technology in particular has enriched people's lives: virtual reality technology uses a computer to generate a simulated environment and, combined with collected image information, realizes interactive three-dimensional vision and behavior, so that the user is immersed in the simulated environment and can interact with the virtual reality environment. Within virtual reality technology, an important factor affecting the quality of interaction is the technique used to collect image information.
The prior art typically collects image information with a three-dimensional depth camera and uses the principle of passive stereo ranging to compute the distance to a target object, such as a thing or a person, in order to render an interactive three-dimensional scene. The basic principle of passive stereo ranging is to observe the same object from different viewpoints, obtain perceptual images under different viewing angles, and compute the distance information of the target object from the pixel offset between the images by triangulation.
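For a rectified two-camera arrangement this triangulation relationship is conventionally written as below; this is a standard textbook formulation added for reference, not a formula quoted from the patent. Here B is the baseline between the two optical centers, f the focal length in pixels, x_left and x_right the horizontal image coordinates of the same point in the two views, d the disparity, and Z the resulting distance of the point from the camera baseline:

    d = x_{\mathrm{left}} - x_{\mathrm{right}}, \qquad Z = \frac{f\,B}{d}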
Although the prior art can simulate three-dimensional scenes and behavior from the image information collected by a three-dimensional depth camera and thereby realize virtual reality interaction, the price, technical maturity, and ease of use of three-dimensional depth cameras make virtual reality technology based on such a camera difficult to realize in some scenarios, so that another kind of virtual reality technology is urgently needed.
Summary of the invention
The embodiments of the present invention provide a virtual reality interaction method, device, and system, in order to solve the prior-art problem that collecting images with a three-dimensional depth camera makes virtual reality technology based on that camera difficult to realize in some scenarios, creating an urgent need for another kind of virtual reality technology.
An embodiment of the present invention provides a virtual reality interaction method, the method comprising:
collecting at least two first infrared images of a calibration object with a first infrared camera, and collecting at least two second infrared images of the calibration object with a second infrared camera, wherein the calibration object comprises at least one calibration point and the calibration point is used for providing infrared light;
extracting the characteristic information corresponding to each calibration point in each first infrared image and the corresponding characteristic information in each second infrared image, wherein the characteristic information indicates the position of each calibration point in the first infrared image or the second infrared image;
using the characteristic information corresponding to each calibration point in the first infrared images and the corresponding characteristic information in the second infrared images to determine the three-dimensional motion trajectory information of each calibration point, and performing virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
An embodiment of the present invention further provides a virtual reality interaction device, the device comprising a first infrared camera unit, a second infrared camera unit, an extraction unit, a determination unit, and an interaction unit, wherein:
First infrared photography unit, for by least two first of the first infrared camera collection demarcation thing Infrared image, described demarcation thing comprises at least one fixed point, and described fixed point is used for providing infrared light;
the second infrared camera unit is configured to collect at least two second infrared images of the calibration object with a second infrared camera;
the extraction unit is configured to extract the characteristic information corresponding to each calibration point in each first infrared image and the corresponding characteristic information in each second infrared image, wherein the characteristic information indicates the position of each calibration point in the first infrared image or the second infrared image;
the determination unit is configured to use the characteristic information corresponding to each calibration point in the first infrared images and the corresponding characteristic information in the second infrared images to determine the three-dimensional motion trajectory information of each calibration point;
the interaction unit is configured to perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
An embodiment of the present invention further provides a virtual reality interaction system, the system comprising a virtual reality interaction device and a calibration object, wherein:
the virtual reality interaction device includes a first infrared camera unit, a second infrared camera unit, an extraction unit, a determination unit, and an interaction unit, wherein: the first infrared camera unit is configured to collect at least two first infrared images of the calibration object with a first infrared camera; the second infrared camera unit is configured to collect at least two second infrared images of the calibration object with a second infrared camera; the extraction unit is configured to extract the characteristic information corresponding to each calibration point in each first infrared image and the corresponding characteristic information in each second infrared image, the characteristic information indicating the position of each calibration point in the first infrared image or the second infrared image; the determination unit is configured to use the characteristic information corresponding to each calibration point in the first infrared images and in the second infrared images to determine the three-dimensional motion trajectory information of each calibration point; and the interaction unit is configured to perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
The calibration object comprises at least one calibration point, and the calibration point is used for reflecting infrared light.
In the virtual reality interaction method, device, and system provided by the embodiments of the present invention, infrared images of a calibration object are collected by a first infrared camera and a second infrared camera, characteristic information is extracted from the collected infrared images and analyzed, and the three-dimensional motion trajectory information of each calibration point of the calibration object is determined, so that virtual reality interaction can be performed. This solves the prior-art problem that, when images are collected with a three-dimensional depth camera for virtual reality interaction, the price, technical maturity, and ease of use of the depth camera make virtual reality technology based on such a camera difficult to realize in some scenarios, and it provides a new virtual reality technology.
Brief description of the drawings
In order to describe the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of a virtual reality interaction method provided by Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of a calibration glove used in a practical application of Embodiment 1 of the present invention;
Fig. 3 is a flow chart of a virtual reality interaction method provided by Embodiment 2 of the present invention;
Fig. 4 is a schematic diagram of a virtual reality interaction device in a practical application of Embodiment 2 of the present invention;
Fig. 5 is a structural diagram of a virtual reality interaction device in Embodiment 3 of the present invention;
Fig. 6 is a structural diagram of a virtual reality interaction system in Embodiment 4 of the present invention.
Detailed description of the invention
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Embodiment 1
Embodiment 1 provides a virtual reality interaction method, in order to solve the prior-art problem that collecting images with a three-dimensional depth camera, because of factors such as the camera's price, makes virtual reality technology based on that camera difficult to realize in certain scenarios. The flow of the method is shown in Fig. 1 and comprises the following steps:
Step S11: collect at least two first infrared images of a calibration object with a first infrared camera, and collect the same number of second infrared images of the calibration object with a second infrared camera.
The first infrared camera and the second infrared camera are cameras capable of near-infrared imaging. Because an infrared camera is far cheaper than a three-dimensional depth camera, an interaction method based on infrared cameras is relatively inexpensive to implement with hardware currently on the market. In addition, infrared light has a longer wavelength and lower frequency, so it loses less energy when propagating through the atmosphere and the resulting images are less prone to distortion. It should be noted that in practice the infrared camera may simply be an ordinary camera with an infrared filter inserted between the photosensitive device and the lens, which further reduces the implementation cost of this virtual reality interaction method; in particular, to improve the imaging quality of the infrared camera, the filter may be an 850 nm infrared band-pass filter.
In practice the first infrared camera and the second infrared camera are usually mounted on the same device. This device may be a server, a mobile terminal such as a mobile phone, an iPad, or a smart helmet, or a terminal such as a smart television or a computer. The virtual reality interaction may be performed by relaying the collected infrared images to a server that carries out the computation and simulates the virtual reality environment, or the computation and simulation may be carried out by the mobile phone, iPad, smart helmet, smart television, computer, or other terminal itself; the embodiments of the present application place no restriction on this.
The calibration object comprises at least one calibration point, and the calibration point is used for providing infrared light. The calibration object is the object photographed simultaneously by the first infrared camera and the second infrared camera; in practice it may be a person or a thing, and its outer surface must have at least one area that provides infrared light. Such an area is called a calibration point, and the calibration object must have at least one. In practice a calibration point can provide infrared light in several ways, including reflecting infrared light and emitting infrared light itself; a common way is to attach reflective material to the outer surface of each calibration point of the calibration object, which reflects the infrared light emitted toward the calibration object by other equipment.
Collecting at least two first infrared images of the calibration object with the first infrared camera and at least two second infrared images with the second infrared camera means that the first and second infrared cameras collect N and M infrared images of the same calibration object respectively, where N and M are both greater than or equal to 2. In practice the two cameras usually collect the same number of images, that is, N equals M. At least two infrared images are needed to compute the three-dimensional motion trajectory of the calibration object, and the more images collected within a period of time, the more accurately the motion trajectory of the calibration object during that period can be described; on the other hand, collecting too many infrared images makes the amount of computation burdensome, so in general collecting three infrared images of the calibration object within a period of time describes its motion trajectory during that period reasonably well.
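As a rough sketch of this dual-camera acquisition step (OpenCV-based, with assumed device indices 0 and 1 and an assumed frame count of three; real synchronization would depend on the hardware used), the following grabs a fixed number of near-simultaneous frame pairs:

    import cv2

    def capture_infrared_pairs(n_frames=3, left_index=0, right_index=1):
        """Grab n_frames near-simultaneous frame pairs from two infrared cameras.

        left_index / right_index are assumed OpenCV device indices for the first
        and second infrared cameras; the patent does not specify them.
        """
        cam_left = cv2.VideoCapture(left_index)
        cam_right = cv2.VideoCapture(right_index)
        pairs = []
        try:
            for _ in range(n_frames):
                ok_left, frame_left = cam_left.read()
                ok_right, frame_right = cam_right.read()
                if not (ok_left and ok_right):
                    break
                # Keep single-channel intensity images; a camera behind an 850 nm
                # band-pass filter yields an essentially monochrome picture anyway.
                pairs.append((cv2.cvtColor(frame_left, cv2.COLOR_BGR2GRAY),
                              cv2.cvtColor(frame_right, cv2.COLOR_BGR2GRAY)))
        finally:
            cam_left.release()
            cam_right.release()
        return pairs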
Step S12: extract the characteristic information corresponding to each calibration point in each first infrared image, and extract the characteristic information corresponding to each calibration point in each second infrared image.
The characteristic information indicates the position of each calibration point in each first infrared image or each second infrared image.
Since the first infrared camera collects at least two infrared images of the calibration object, and the calibration object may carry many calibration points, the characteristic information corresponding to each calibration point in each first infrared image can be extracted in either of two ways. One is to perform the following operations for each calibration point: first determine the region corresponding to the calibration point in each first infrared image, and then use a clustering algorithm to extract the characteristic information of the calibration point within that region. The other is, for each first infrared image, to first compute the characteristic information of every calibration point in that image, and then determine which characteristic information corresponds to each calibration point across the first infrared images.
In practice a trajectory prediction algorithm such as Kalman prediction can first be used to determine the region corresponding to the same calibration point in each first infrared image, and then a clustering algorithm such as k-means or k-medoids can be used to extract the characteristic information of that calibration point from all of its corresponding regions. Alternatively, a clustering algorithm such as k-means or k-medoids can first be used to compute the characteristic information of every calibration point in each first infrared image, and then a trajectory prediction algorithm such as Kalman prediction can be used to determine which characteristic information corresponds to the calibration point in each first infrared image.
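As an illustration of the clustering half of this step only (the Kalman-prediction association is omitted), the sketch below thresholds a grayscale infrared frame so that only the bright reflective markers remain and runs k-means on the bright pixels, one cluster per calibration point; the threshold value and the assumption of five calibration points, as on the five-finger glove of Fig. 2, are illustrative choices rather than values taken from the patent:

    import numpy as np
    from sklearn.cluster import KMeans

    def extract_marker_centroids(ir_frame, n_points=5, threshold=200):
        """Return one (x, y) centroid per calibration point in a grayscale IR frame.

        n_points and threshold are illustrative assumptions: the reflective
        calibration points are taken to be the brightest regions of the image.
        """
        ys, xs = np.nonzero(ir_frame >= threshold)           # bright (marker) pixels
        points = np.column_stack([xs, ys]).astype(float)
        if len(points) < n_points:
            return np.empty((0, 2))                          # markers not visible
        km = KMeans(n_clusters=n_points, n_init=10).fit(points)
        return km.cluster_centers_                           # characteristic information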
The characteristic information corresponding to each calibration point in each second infrared image can be extracted with the same method used for the first infrared images.
Step S13: use the characteristic information corresponding to each calibration point in each first infrared image and in each second infrared image to determine the three-dimensional motion trajectory information of each calibration point, and perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
The three-dimensional motion trajectory information of each calibration point describes, when the calibration object moves in three-dimensional space, the trajectory followed in that space by each calibration point on the calibration object. Take, for example, the calibration glove used in practice and shown in Fig. 2, whose calibration points are on its five fingers: when the glove moves in three-dimensional space, the three-dimensional motion trajectory information of each calibration point is the information describing the trajectory of each of the five finger calibration points in that space.
The three-dimensional motion trajectory information of each calibration point is determined from the characteristic information corresponding to that calibration point in each first infrared image and in each second infrared image. Taking the calibration glove above as an example, the three-dimensional motion trajectory information of each calibration point is determined from the characteristic information corresponding to that point in each first infrared image together with the characteristic information corresponding to the same point in each second infrared image.
Virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point can be performed in several ways. One is to determine the three-dimensional motion trajectory information of the calibration object from the three-dimensional motion trajectory information of its calibration points, compare the trajectory information of the calibration object with the information in a database, obtain the interaction instruction in the database that corresponds to the trajectory information of the calibration object, and perform virtual reality interaction with that interaction instruction. Another is to compare the three-dimensional motion trajectory information of each calibration point separately with the information in the database, obtain the interaction instructions in the database corresponding to the trajectory information of each calibration point, and perform virtual reality interaction with those interaction instructions.
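A minimal sketch of the first of these two ways, under the assumptions that each database entry is a template trajectory of 3-D points and that the comparison means choosing the nearest template by mean Euclidean distance after resampling (the patent does not specify the metric), might look like this; the instruction names in the usage comment are hypothetical placeholders:

    import numpy as np

    def resample(track, n=32):
        """Resample a (T, 3) trajectory to n points by linear interpolation."""
        track = np.asarray(track, dtype=float)
        t_old = np.linspace(0.0, 1.0, len(track))
        t_new = np.linspace(0.0, 1.0, n)
        return np.column_stack([np.interp(t_new, t_old, track[:, k]) for k in range(3)])

    def match_interaction_instruction(track, gesture_db):
        """gesture_db maps an interaction instruction to a template trajectory.

        Returns the instruction whose template is closest to the observed track.
        """
        observed = resample(track)
        def distance(name):
            return float(np.mean(np.linalg.norm(observed - resample(gesture_db[name]), axis=1)))
        return min(gesture_db, key=distance)

    # Hypothetical usage; the instruction names and templates are placeholders:
    # db = {"grab": grab_template, "swipe_left": swipe_left_template}
    # instruction = match_interaction_instruction(observed_track, db)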
With the virtual reality interaction method provided by Embodiment 1, infrared images of the calibration object are collected by the first infrared camera and the second infrared camera, characteristic information is extracted from the collected infrared images and analyzed, and the three-dimensional motion trajectory information of each calibration point of the calibration object is determined, so that virtual reality interaction can be performed. This solves the prior-art problem that collecting images with a three-dimensional depth camera for virtual reality interaction, because of the price, technical maturity, and ease of use of such cameras, makes virtual reality technology based on that camera difficult to realize in some scenarios, creating an urgent need for another kind of virtual reality technology.
Embodiment 2
Step S13 of Embodiment 1 determines the three-dimensional motion trajectory information of each calibration point from the characteristic information corresponding to that point in each first infrared image and in each second infrared image. In fact there are several ways of doing this. When the lenses of the first infrared camera and the second infrared camera lie in the same plane, the three-dimensional motion trajectory information of the calibration object can be determined as follows: first determine, for each first infrared image at the moment of collection, the distance from each calibration point in that image to the line connecting the optical centers of the first infrared camera and the second infrared camera, and then determine the three-dimensional motion trajectory information of each calibration point from at least two such distance values. This constitutes Embodiment 2 of the present invention, shown in Fig. 3.
Step S21: collect at least two first infrared images of the calibration object with the first infrared camera and, at the same time, collect the same number of second infrared images of the calibration object with the second infrared camera, the lenses of the first infrared camera and the second infrared camera lying in the same plane.
Generally, when describing the motion trajectory of the calibration object, in order to make full use of the collected infrared images, the first infrared camera can collect at least two first infrared images of the calibration object while the second infrared camera simultaneously collects the same number of second infrared images; that is, the first and second infrared cameras each collect R infrared images of the same calibration object at the same moments, with R greater than or equal to 2.
As shown in Fig. 4, in practice the first infrared camera and the second infrared camera are usually fixed on the same device with their lenses in the same plane; the orientation of the device is adjusted to collect infrared images of the calibration object, and an infrared light emitter can also be mounted on the device to emit infrared light toward the calibration object, which is reflected by the calibration points on the calibration object.
Step S22: extract the characteristic information corresponding to each calibration point in each first infrared image, and extract the characteristic information corresponding to each calibration point in each second infrared image.
Step S22 is identical to step S12 in Embodiment 1 and is not described again here.
Step S23: perform the following operations for each calibration point:
Step S231: determine the second infrared image collected at the same time as each first infrared image;
Step S232: use the characteristic information corresponding to the calibration point in the first infrared image and the characteristic information corresponding to the calibration point in the second infrared image to determine, at the moment the first infrared image was collected, the perpendicular distance from the calibration point to the line connecting the optical centers of the first infrared camera and the second infrared camera;
The perpendicular distance from the calibration point to the line connecting the optical centers of the first infrared camera and the second infrared camera is the length of the perpendicular segment from the calibration point to the line joining the two optical centers.
In practice the distance between the first infrared camera and the second infrared camera is usually known; when the two cameras also have the same known focal length, the characteristic information corresponding to the calibration point in the first infrared image and in the second infrared image can be used, via a similar-triangle computation, to obtain the perpendicular distance from the calibration point to the line connecting the optical centers of the first and second infrared cameras. Alternatively, the characteristic information in the first infrared image, the characteristic information in the second infrared image, and the focal lengths of the two cameras can be used to determine the disparity of the two cameras with respect to the calibration point, and the perpendicular distance from the calibration point to the line connecting the two optical centers can then be determined from the disparity and the distance between the two cameras.
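The similar-triangle computation described here can be sketched as follows for a rectified image pair, assuming a known baseline b between the optical centers, a common focal length f in pixels, and a shared principal point (cx, cy); the parameter names are illustrative, and the depth z returned by the sketch is the quantity playing the role of the perpendicular-distance value discussed in the text:

    def point_from_disparity(u_left, v_left, u_right, f, b, cx, cy):
        """Recover the 3-D position of one calibration point from one synchronized pair.

        (u_left, v_left) and u_right are the pixel coordinates of the point's
        characteristic information in the first and second infrared images;
        f is the common focal length in pixels, b the distance between the two
        optical centers, and (cx, cy) an assumed shared principal point.
        """
        disparity = u_left - u_right
        if disparity == 0:
            raise ValueError("zero disparity: point at infinity or mismatched features")
        z = f * b / disparity                  # depth from similar triangles
        x = (u_left - cx) * z / f              # back-project the image coordinates
        y = (v_left - cy) * z / f
        return x, y, z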
Step S233: determine the three-dimensional motion trajectory information of the calibration point from at least two of the perpendicular-distance values corresponding to the calibration point.
Step S234: perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
With the virtual reality interaction method provided by Embodiment 2, the first infrared camera and the second infrared camera collect the same number of infrared images of the calibration object at the same time, and because the lenses of the two cameras are arranged in the same plane, the three-dimensional motion trajectory information of each calibration point of the calibration object can be determined from the perpendicular distance of each calibration point to the line connecting the optical centers of the first and second infrared cameras, which makes this virtual reality method easier to implement.
Embodiment 3
Embodiment 3 provides a virtual reality interaction device, in order to solve the prior-art problem that collecting images with a three-dimensional depth camera, because of factors such as the camera's price, makes virtual reality technology based on that camera difficult to realize in certain scenarios. The structure of the device 500 is shown in Fig. 5 and includes the following units: a first infrared camera unit 501, a second infrared camera unit 502, an extraction unit 503, a determination unit 504, and an interaction unit 505, wherein:
First infrared photography unit 501, for by least two of the first infrared camera collection demarcation thing First infrared image, described demarcation thing comprises at least one fixed point, and described fixed point is used for providing infrared light;
the second infrared camera unit 502 is configured to collect at least two second infrared images of the calibration object with a second infrared camera;
Extraction unit 503, for extracting the spy that each described fixed point is corresponding in each described first infrared image Reference ceases and characteristic of correspondence information in each described second infrared image, and described characteristic information is used for showing Each described fixed point position in described first infrared image or the second infrared image;
the determination unit 504 is configured to use the characteristic information corresponding to each calibration point in the first infrared images and in the second infrared images to determine the three-dimensional motion trajectory information of each calibration point;
the interaction unit 505 is configured to perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
In practice, the extraction unit 503 may further include a first extraction subunit 5031 and a second extraction subunit 5032, wherein:
the first extraction subunit 5031 is configured to determine, for each calibration point, the region corresponding to the calibration point in each first infrared image or each second infrared image;
the second extraction subunit 5032 is configured to use a clustering algorithm to extract the characteristic information corresponding to the calibration point within that corresponding region.
In particular, the interaction unit 505 may further include a first interaction unit 5051, a second interaction unit 5052, and a third interaction unit 5053, wherein:
the first interaction unit 5051 is configured to determine the three-dimensional motion trajectory information of the calibration object from the three-dimensional motion trajectory information of each calibration point;
the second interaction unit 5052 is configured to compare the three-dimensional motion trajectory information of the calibration object with the information in a database and obtain the interaction instruction in the database that corresponds to the three-dimensional motion trajectory information of the calibration object;
the third interaction unit 5053 is configured to perform virtual reality interaction with the interaction instruction.
With the virtual reality interaction device provided by Embodiment 3, the first infrared camera unit and the second infrared camera unit collect at least two infrared images of the same calibration object with infrared cameras, the extraction unit extracts the characteristic information corresponding to each calibration point in each infrared image, the determination unit uses the characteristic information corresponding to each calibration point to determine the three-dimensional motion trajectory information of each calibration point, and the interaction unit performs virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point. This solves the prior-art problem that collecting images with a three-dimensional depth camera for virtual reality interaction, because of factors such as the camera's price, makes virtual reality technology based on that camera difficult to realize in certain scenarios.
In addition, it should be noted that in the embodiments of the present invention the above functional modules are implemented by a hardware processor.
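Purely as an illustration of how the five units could be composed (the embodiment above realizes them with a hardware processor), a hypothetical software pipeline might look like the sketch below; every name and signature here is a stand-in rather than anything defined by the patent, and the per-frame pairing of centroids by index is a simplification that a real system would replace with proper data association:

    class VirtualRealityInteractionDevice:
        """Hypothetical software composition of the units of device 500."""

        def __init__(self, capture_pairs, extract_centroids, triangulate, match):
            self.capture_pairs = capture_pairs          # first/second infrared camera units
            self.extract_centroids = extract_centroids  # extraction unit
            self.triangulate = triangulate              # determination unit
            self.match = match                          # interaction unit

        def run_once(self, gesture_db):
            tracks = []
            for left, right in self.capture_pairs():
                left_pts = self.extract_centroids(left)
                right_pts = self.extract_centroids(right)
                # Pair centroids by index for simplicity; see the comment above.
                tracks.append([self.triangulate(l, r) for l, r in zip(left_pts, right_pts)])
            return self.match(tracks, gesture_db)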
Embodiment 4
Embodiment 4 provides a virtual reality interaction system, in order to solve the prior-art problem that collecting images with a three-dimensional depth camera, because of factors such as the camera's price, makes virtual reality technology based on that camera difficult to realize in certain scenarios. The structure of this virtual reality interaction system 600 is shown in Fig. 6 and includes a virtual reality interaction device 601 and a calibration object 602, wherein:
the virtual reality interaction device 601 includes a first infrared camera unit, a second infrared camera unit, an extraction unit, a determination unit, and an interaction unit, wherein: the first infrared camera unit is configured to collect at least two first infrared images of the calibration object with a first infrared camera, the calibration object comprising at least one calibration point used for providing infrared light; the second infrared camera unit is configured to collect at least two second infrared images of the calibration object with a second infrared camera; the extraction unit is configured to extract the characteristic information corresponding to each calibration point in each first infrared image and the corresponding characteristic information in each second infrared image, the characteristic information indicating the position of each calibration point in the first infrared image or the second infrared image; the determination unit is configured to use the characteristic information corresponding to each calibration point in the first infrared images and in the second infrared images to determine the three-dimensional motion trajectory information of each calibration point; and the interaction unit is configured to perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
The calibration object 602 comprises at least one calibration point, and the calibration point is used for reflecting infrared light.
In a practical application, a virtual reality interaction system includes a virtual reality interaction helmet and a calibration glove. The helmet carries a pair of infrared cameras for collecting infrared images of the glove, and an infrared light emitter can usually also be mounted on the helmet. The five fingers of the glove carry material that reflects infrared light. After the pair of infrared cameras on the helmet collects the infrared images, the collected images can be transmitted to a remote server for processing, or a processing device mounted on the helmet can process them.
With the virtual reality interaction system provided by Embodiment 4, the virtual reality interaction device collects infrared images of the calibration object with the infrared cameras of the first and second infrared camera units and performs a series of processing operations on the collected infrared images to carry out virtual reality interaction. This solves the prior-art problem that collecting images with a three-dimensional depth camera for virtual reality interaction, because of factors such as the camera's price, makes virtual reality technology based on that camera difficult to realize in certain scenarios.
In addition, it should be noted that in the embodiments of the present invention the above functional modules are implemented by a hardware processor.
The device embodiments described above are only illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
Through the above description of the embodiments, those skilled in the art can clearly understand that each implementation can be realized by software plus a necessary general hardware platform, and of course also by hardware. Based on this understanding, the above technical solution, or the part of it that contributes to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, or an optical disc, and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or in parts of the embodiments.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some of the technical features, and that such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A virtual reality interaction method, characterized by comprising:
collecting at least two first infrared images of a calibration object with a first infrared camera, and collecting at least two second infrared images of the calibration object with a second infrared camera, wherein the calibration object comprises at least one calibration point and the calibration point is used for providing infrared light;
extracting the characteristic information corresponding to each calibration point in each first infrared image and the corresponding characteristic information in each second infrared image, wherein the characteristic information indicates the position of each calibration point in the first infrared image or the second infrared image;
using the characteristic information corresponding to each calibration point in the first infrared images and the corresponding characteristic information in the second infrared images to determine the three-dimensional motion trajectory information of each calibration point, and performing virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
2. The method according to claim 1, characterized in that extracting the characteristic information corresponding to each calibration point in each first infrared image and the corresponding characteristic information in each second infrared image specifically comprises:
performing the following operations for each calibration point:
determining the region corresponding to the calibration point in each first infrared image or each second infrared image;
using a clustering algorithm to extract the characteristic information corresponding to the calibration point within the corresponding region.
3. The method according to claim 1, characterized in that collecting at least two first infrared images of the calibration object with a first infrared camera and collecting at least two second infrared images of the calibration object with a second infrared camera specifically comprises: collecting at least two first infrared images of the calibration object with the first infrared camera while collecting the same number of second infrared images of the calibration object with the second infrared camera, with the lenses of the first infrared camera and the second infrared camera lying in the same plane;
and in that case, using the characteristic information corresponding to each calibration point in each first infrared image and in each second infrared image to determine the three-dimensional motion trajectory information of the calibration object specifically comprises:
performing the following operations for each calibration point:
determining the second infrared image collected at the same time as each first infrared image;
using the characteristic information corresponding to the calibration point in the first infrared image and the characteristic information corresponding to the calibration point in the second infrared image to determine, at the moment the first infrared image was collected, the perpendicular distance from the calibration point to the line connecting the optical centers of the first infrared camera and the second infrared camera;
determining the three-dimensional motion trajectory information of the calibration point from at least two of the perpendicular-distance values corresponding to the calibration point.
4. The method according to claim 1, characterized in that performing virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point specifically comprises:
determining the three-dimensional motion trajectory information of the calibration object from the three-dimensional motion trajectory information of each calibration point;
comparing the three-dimensional motion trajectory information of the calibration object with the information in a database, and obtaining the interaction instruction in the database that corresponds to the three-dimensional motion trajectory information of the calibration object;
performing virtual reality interaction with the interaction instruction.
5. The method according to claim 1, characterized in that the first infrared camera and the second infrared camera are cameras with an infrared filter between the photosensitive device and the lens.
6. The method according to claim 1, characterized in that the method further comprises: emitting infrared light toward the calibration object, the infrared light being reflected by the calibration points on the calibration object.
7. A virtual reality interaction device, characterized in that the device comprises:
a first infrared camera unit, a second infrared camera unit, an extraction unit, a determination unit, and an interaction unit, wherein:
First infrared photography unit, for by least two first of the first infrared camera collection demarcation thing Infrared image, described demarcation thing comprises at least one fixed point, and described fixed point is used for providing infrared light;
the second infrared camera unit is configured to collect at least two second infrared images of the calibration object with a second infrared camera;
the extraction unit is configured to extract the characteristic information corresponding to each calibration point in each first infrared image and the corresponding characteristic information in each second infrared image, wherein the characteristic information indicates the position of each calibration point in the first infrared image or the second infrared image;
the determination unit is configured to use the characteristic information corresponding to each calibration point in the first infrared images and the corresponding characteristic information in the second infrared images to determine the three-dimensional motion trajectory information of each calibration point;
the interaction unit is configured to perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
8. The device according to claim 7, characterized in that the extraction unit includes a first extraction subunit and a second extraction subunit, wherein:
the first extraction subunit is configured to determine, for each calibration point, the region corresponding to the calibration point in each first infrared image or each second infrared image;
the second extraction subunit is configured to use a clustering algorithm to extract the characteristic information corresponding to the calibration point within the corresponding region.
9. The device according to claim 7, characterized in that the interaction unit includes a first interaction unit, a second interaction unit, and a third interaction unit, wherein:
the first interaction unit is configured to determine the three-dimensional motion trajectory information of the calibration object from the three-dimensional motion trajectory information of each calibration point;
the second interaction unit is configured to compare the three-dimensional motion trajectory information of the calibration object with the information in a database and obtain the interaction instruction in the database that corresponds to the three-dimensional motion trajectory information of the calibration object;
the third interaction unit is configured to perform virtual reality interaction with the interaction instruction.
10. A virtual reality interaction system, characterized in that the system comprises a virtual reality interaction device and a calibration object, wherein:
the virtual reality interaction device includes a first infrared camera unit, a second infrared camera unit, an extraction unit, a determination unit, and an interaction unit, wherein: the first infrared camera unit is configured to collect at least two first infrared images of the calibration object with a first infrared camera; the second infrared camera unit is configured to collect at least two second infrared images of the calibration object with a second infrared camera; the extraction unit is configured to extract the characteristic information corresponding to each calibration point in each first infrared image and the corresponding characteristic information in each second infrared image, the characteristic information indicating the position of each calibration point in the first infrared image or the second infrared image; the determination unit is configured to use the characteristic information corresponding to each calibration point in the first infrared images and in the second infrared images to determine the three-dimensional motion trajectory information of each calibration point; and the interaction unit is configured to perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point;
and the calibration object comprises at least one calibration point, the calibration point being used for reflecting infrared light.
CN201510870209.8A 2015-12-01 2015-12-01 Virtual reality interaction method, device and system Pending CN105892638A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510870209.8A CN105892638A (en) 2015-12-01 2015-12-01 Virtual reality interaction method, device and system
PCT/CN2016/096983 WO2017092432A1 (en) 2015-12-01 2016-08-26 Method, device, and system for virtual reality interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510870209.8A CN105892638A (en) 2015-12-01 2015-12-01 Virtual reality interaction method, device and system

Publications (1)

Publication Number Publication Date
CN105892638A true CN105892638A (en) 2016-08-24

Family

ID=57002403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510870209.8A Pending CN105892638A (en) 2015-12-01 2015-12-01 Virtual reality interaction method, device and system

Country Status (2)

Country Link
CN (1) CN105892638A (en)
WO (1) WO2017092432A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017092432A1 (en) * 2015-12-01 2017-06-08 乐视控股(北京)有限公司 Method, device, and system for virtual reality interaction
CN109313483A (en) * 2017-01-22 2019-02-05 广东虚拟现实科技有限公司 A kind of device interacted with reality environment
CN110442235A (en) * 2019-07-16 2019-11-12 广东虚拟现实科技有限公司 Positioning and tracing method, device, terminal device and computer-readable storage medium
CN110999268A (en) * 2017-08-07 2020-04-10 苹果公司 Electronic device with vision system components held by self-aligning bracket assembly
CN111736708A (en) * 2020-08-25 2020-10-02 歌尔光学科技有限公司 Head-mounted device, picture display system and method thereof, detection system and method thereof

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509173A (en) * 2018-06-07 2018-09-07 北京德火科技有限责任公司 Image shows system and method, storage medium, processor

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013061937A (en) * 2011-09-12 2013-04-04 Palo Alto Research Center Inc Combined stereo camera and stereo display interaction
CN103135754A (en) * 2011-12-02 2013-06-05 深圳泰山在线科技有限公司 Interactive device and method for interaction achievement with interactive device
US20140098018A1 (en) * 2012-10-04 2014-04-10 Microsoft Corporation Wearable sensor for tracking articulated body-parts
CN104298345A (en) * 2014-07-28 2015-01-21 浙江工业大学 Control method for man-machine interaction system
US20150078617A1 (en) * 2013-09-13 2015-03-19 Research & Business Foundation Sungkyunkwan University Mobile terminal and method for generating control command using marker attached to finger

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9679414B2 (en) * 2013-03-01 2017-06-13 Apple Inc. Federated mobile device positioning
CN105068649A (en) * 2015-08-12 2015-11-18 深圳市埃微信息技术有限公司 Binocular gesture recognition device and method based on virtual reality helmet
CN105892633A (en) * 2015-11-18 2016-08-24 乐视致新电子科技(天津)有限公司 Gesture identification method and virtual reality display output device
CN105892638A (en) * 2015-12-01 2016-08-24 乐视致新电子科技(天津)有限公司 Virtual reality interaction method, device and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013061937A (en) * 2011-09-12 2013-04-04 Palo Alto Research Center Inc Combined stereo camera and stereo display interaction
CN103135754A (en) * 2011-12-02 2013-06-05 深圳泰山在线科技有限公司 Interactive device and method for interaction achievement with interactive device
US20140098018A1 (en) * 2012-10-04 2014-04-10 Microsoft Corporation Wearable sensor for tracking articulated body-parts
US20150078617A1 (en) * 2013-09-13 2015-03-19 Research & Business Foundation Sungkyunkwan University Mobile terminal and method for generating control command using marker attached to finger
CN104298345A (en) * 2014-07-28 2015-01-21 浙江工业大学 Control method for man-machine interaction system

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017092432A1 (en) * 2015-12-01 2017-06-08 乐视控股(北京)有限公司 Method, device, and system for virtual reality interaction
CN109313483A (en) * 2017-01-22 2019-02-05 广东虚拟现实科技有限公司 A kind of device interacted with reality environment
CN110999268A (en) * 2017-08-07 2020-04-10 苹果公司 Electronic device with vision system components held by self-aligning bracket assembly
CN110999268B (en) * 2017-08-07 2021-12-28 苹果公司 Electronic device with vision system components held by self-aligning bracket assembly
CN110442235A (en) * 2019-07-16 2019-11-12 广东虚拟现实科技有限公司 Positioning and tracing method, device, terminal device and computer-readable storage medium
CN110442235B (en) * 2019-07-16 2023-05-23 广东虚拟现实科技有限公司 Positioning tracking method, device, terminal equipment and computer readable storage medium
CN111736708A (en) * 2020-08-25 2020-10-02 歌尔光学科技有限公司 Head-mounted device, picture display system and method thereof, detection system and method thereof
CN111736708B (en) * 2020-08-25 2020-11-20 歌尔光学科技有限公司 Head-mounted device, picture display system and method thereof, detection system and method thereof

Also Published As

Publication number Publication date
WO2017092432A1 (en) 2017-06-08


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160824

WD01 Invention patent application deemed withdrawn after publication