CN106713884A - Immersive interactive projection system


Info

Publication number
CN106713884A
CN106713884A
Authority
CN
China
Prior art keywords
projection
module
interactive
source input
lens group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710072511.8A
Other languages
Chinese (zh)
Inventor
熊智宇
凌云
邹立新
罗勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang Sentinel Technology Co Ltd
Original Assignee
Nanchang Sentinel Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang Sentinel Technology Co Ltd filed Critical Nanchang Sentinel Technology Co Ltd
Priority to CN201710072511.8A priority Critical patent/CN106713884A/en
Publication of CN106713884A publication Critical patent/CN106713884A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof
    • H04N9/3173Constructional details thereof wherein the projection device is specially adapted for enhanced portability

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an immersive interactive projection system comprising a cuboid projector body, with projection lens groups arranged on the four end faces and the top surface of the projector body. Each projection lens group is connected to a master controller; the master controller is connected to a projection processing unit and a data communication unit; the data communication unit is connected to a projection source input; and the projection processing unit comprises a projection output driving module, a projection output amplification module, a projection segmentation module and an interactive motion capture module. The system can divide the projection image input by a projection source client into multiple blocks, deliver the segmented image or video projection onto the multiple surrounding surfaces to form a wrap-around projection, and capture the viewers' gestures, so that projection delivery and interaction with people are convenient; it can be used in interactive games.

Description

An immersive interactive projection system
Technical field
The present invention relates to an immersive interactive projection system.
Background art
At present, existing projection techniques generally use projector equipment to cast the projection directly onto a single plane. Such a projection structure and method can only deliver onto one plane; it cannot split the projected image and then deliver it as a multi-surface projection, and therefore cannot realize a wrap-around projection. As people's demands on projection effects increase, a single-plane projection can no longer meet their needs.
Summary of the invention
In order to overcome the above deficiency of the prior art, the present invention proposes an immersive interactive projection system.
In order to solve the above technical problem, the present invention adopts the following technical scheme. An immersive interactive projection system comprises a cuboid projector body; projection lens groups are arranged on the four end faces and the top surface of the projector body, each projection lens group comprising a main lens and a secondary lens and being connected to a master controller. The master controller is connected with a projection processing unit and a data communication unit; the data communication unit comprises a data transfer serial port, a GPRS communication module and a wireless routing module, and is connected to a projection source input. The projection processing unit comprises a projection output driving module, a projection output amplification module, a projection segmentation module and an interactive motion capture module. The projection segmentation module divides the projected image from the projection source input into multiple projection surfaces, which are then delivered by the projection lens groups on the end faces, and the interactive motion capture module captures the viewer's actions to control the projected image output by the projection source input.
Preferably, the projection source input includes a mobile phone application (APP) and a computer application.
The projection segmentation method used for the projection segmentation comprises the following steps:
1) Start the video acquisition system of the projection segmentation module and acquire the projection video sequence;
2) After acquisition is complete, take the average of the first N frames as the initial background;
3) Compute the structure similarity map between the current frame image and the background image;
4) Perform background modeling based on the structure-similarity background;
5) Finally, perform foreground segmentation by multi-modal feature fusion, thereby completing the projection segmentation (illustrated by the sketch below).
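For illustration only (not part of the original disclosure), the following is a minimal Python/NumPy sketch of how steps 1) to 5) could be chained on grayscale frames. The similarity measure, the fuzzy feedback factor and the final threshold used here are simplified stand-ins; the structure similarity map, background update and multi-modal fusion are elaborated in the sections below.

```python
import numpy as np

def simple_projection_segmentation(frames, n_init=30, alpha=0.01):
    """Simplified sketch of steps 1)-5) on grayscale frames (values in [0, 255]):
    initial background from the first N frames, a per-frame similarity measure,
    a fuzzy-feedback background update, and a difference-based foreground mask."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    bg = np.mean(frames[:n_init], axis=0)                    # step 2) initial background
    d = np.zeros_like(bg)                                     # moving-region feedback factor
    masks = []
    for frame in frames[n_init:]:
        diff = np.abs(frame - bg) / 255.0
        ssm = 1.0 - np.clip(diff, 0.0, 1.0)                   # stand-in for the structure similarity map, step 3)
        d = (1 - alpha) * d + alpha * (1 - ssm)               # feedback factor, step 4)
        beta = 1.0 - (1.0 - ssm) * (1.0 - d)                  # assumed fuzzy feedback factor
        bg = (1 - beta * alpha) * bg + beta * alpha * frame   # fuzzy-feedback background update
        masks.append(diff > 0.1)                              # step 5) simplified foreground segmentation
    return masks
```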
The above moving-target detection method based on structure-similarity background modeling is highly robust to illumination changes of object structures in the field of view; the background model is updated with dynamic feedback from the structure similarity map and an environment-change coefficient, and on this basis the moving object is detected from the background using a multi-modal feature fusion strategy.
The concrete operation steps for computing the structure similarity map in step 3) are as follows:
(1) Compute the luminance distortion Sm(Y1, Y2) and the contrast distortion Sv(Y1, Y2) of grayscale image Y2 with respect to grayscale image Y1:
Sm(Y1, Y2) = (2·μ1·μ2 + c1) / (μ1² + μ2² + c1)
Sv(Y1, Y2) = (2·σ12 + c2) / (σ1² + σ2² + c2)
where μ1 and μ2 are the local means of Y1 and Y2, σ1 and σ2 are the local deviations of Y1 and Y2, σ12 is the local covariance of Y1 and Y2, and c1 and c2 are constants. These quantities can be obtained as follows:
μ1 = gaus ⊗ Y1, μ2 = gaus ⊗ Y2
σ1² = gaus ⊗ Y1² - μ1², σ2² = gaus ⊗ Y2² - μ2²
σ12 = gaus ⊗ (Y1·Y2) - μ1·μ2
where gaus denotes the Gaussian filter with window size w and ⊗ denotes the convolution operation.
(2) Obtain the structure similarity map SSIM(Y1, Y2) of Y2 and Y1:
SSIM(Y1,Y2)=Sm(Y1,Y2)·Sv(Y1,Y2)
(3) To improve the system's ability to identify structural distortion, using the structure similarity map obtained in step 3), the structure similarity measure ssmt(x, y) between the current frame Yt and the current background YBt is defined as
ssmt(x, y) = (SSIM(Yt, YBt))^γ
where γ is a constant.
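A sketch of the structure similarity map computation described above, assuming the standard SSIM luminance and contrast terms for Sm and Sv and a Gaussian local window; the constants c1 and c2, the window parameter sigma and the default gamma = 2 (the value used in the embodiment below) are illustrative choices, not values fixed by the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_similarity_map(y1, y2, sigma=1.5, c1=6.5, c2=58.5, gamma=2.0):
    """Structure similarity map ssm_t between two grayscale images."""
    y1 = y1.astype(np.float64)
    y2 = y2.astype(np.float64)
    mu1 = gaussian_filter(y1, sigma)                      # local means (Gaussian window)
    mu2 = gaussian_filter(y2, sigma)
    var1 = gaussian_filter(y1 * y1, sigma) - mu1 ** 2     # local variances
    var2 = gaussian_filter(y2 * y2, sigma) - mu2 ** 2
    cov12 = gaussian_filter(y1 * y2, sigma) - mu1 * mu2   # local covariance
    sm = (2 * mu1 * mu2 + c1) / (mu1 ** 2 + mu2 ** 2 + c1)  # luminance distortion Sm
    sv = (2 * cov12 + c2) / (var1 + var2 + c2)              # contrast distortion Sv
    ssim_map = sm * sv                                       # SSIM = Sm * Sv
    return np.clip(ssim_map, 0.0, 1.0) ** gamma              # ssm_t = SSIM^gamma, kept in [0, 1]
```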
The concrete operation steps of the structure-similarity background modeling in step 4) are as follows:
(1) Moving-region feedback factor:
dt = (1 - α)·dt-1 + α·(1 - ssmt)
where dt is the current moving-region feedback factor, dt-1 is the moving-region feedback factor at the previous moment, and α is the learning rate.
(2) The current fuzzy feedback factor βt is computed from the moving-region feedback factor dt.
(3) According to the fuzzy background update principle, the structure-similarity background is modelled as:
Bt(x, y) = (1 - βt·α)·Bt-1(x, y) + βt·α·It(x, y)
where It(x, y) is the current frame, Bt(x, y) is the current background, and Bt-1(x, y) is the background at the previous moment.
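A sketch of the fuzzy-feedback background update in steps (1)-(3). The patent's exact expression for the fuzzy feedback factor βt is not reproduced in this text, so the sketch assumes a fuzzy-OR combination of the similarity map and the moving-region feedback factor, which matches the described behaviour (either high similarity or a persistent change accelerates the update):

```python
import numpy as np

def update_background(frame, background, ssm, d_prev, alpha=0.01):
    """Dual-learning-rate (fuzzy feedback) background update sketch.
    beta_t is an ASSUMED fuzzy-OR of ssm and d_t, not the patent's exact formula."""
    d_t = (1 - alpha) * d_prev + alpha * (1 - ssm)           # moving-region feedback factor d_t
    beta_t = 1.0 - (1.0 - ssm) * (1.0 - d_t)                 # ASSUMED fuzzy feedback factor
    b = beta_t[..., None] if frame.ndim == 3 else beta_t     # broadcast over colour channels
    new_bg = (1 - b * alpha) * background + b * alpha * frame.astype(np.float64)
    return new_bg, d_t
```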
The concrete operation steps of the multi-modal feature-fusion foreground segmentation in step 5) are as follows:
(1) Transform the current frame It(x, y) and the current background Bt(x, y) from the RGB colour space into the YUV colour space, and compute the change of the current-frame luminance Yt(x, y) relative to the current background luminance YBt(x, y):
lft(x, y) = |Yt(x, y) - YBt(x, y)|
Compute the change of the current-frame chrominance Ut, Vt relative to the current background chrominance UBt, VBt:
cft(x, y) = |Ut(x, y) - UBt(x, y)| + |Vt(x, y) - VBt(x, y)|
Compute the structural change of the current frame relative to the current background:
tft(x, y) = ssmt(x, y)
(2) The luminance-change foreground template mlt(x, y) is obtained by thresholding lft(x, y), where Tlft(x, y) is the current estimate of the luminance-change threshold, the threshold adjustment parameter lcs is a constant, and αT is the threshold update rate.
The chrominance-change foreground template mct(x, y) is solved in the same way as the luminance-change foreground template.
(3) The structure-change foreground template mtt(x, y) is obtained by thresholding tft(x, y) against μtft(x, y), the average value of tft(x, y).
(4) multi-modal foreground template fusion
cmaskt(x, y)=mtt(x,y)&(mlt(x,y)|mct(x,y))
(5) Foreground template post-processing: for each pixel within a window of size w around a pixel where cmaskt is logic 1, if both its luminance change and its colour change exceed their thresholds, the pixel is reclassified as a foreground pixel, finally yielding the foreground template maskt(x, y).
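A simplified sketch of the multi-modal feature-fusion foreground segmentation in steps (1)-(4). Fixed thresholds t_lum and t_chr stand in for the adaptive thresholds Tlft(x, y) and its chrominance counterpart, and the structure template is thresholded against the mean of tft, which is an assumption rather than the patent's exact rule:

```python
import numpy as np

def segment_foreground(frame_yuv, bg_yuv, ssm, t_lum=15.0, t_chr=20.0):
    """Multi-modal feature-fusion foreground segmentation (simplified sketch).
    frame_yuv and bg_yuv are (H, W, 3) YUV images; ssm is the structure similarity map."""
    lf = np.abs(frame_yuv[..., 0] - bg_yuv[..., 0])               # luminance change lf_t
    cf = (np.abs(frame_yuv[..., 1] - bg_yuv[..., 1]) +
          np.abs(frame_yuv[..., 2] - bg_yuv[..., 2]))             # chrominance change cf_t
    tf = ssm                                                       # structural change tf_t
    ml = lf > t_lum               # luminance-change foreground template ml_t
    mc = cf > t_chr               # chrominance-change foreground template mc_t
    mt = tf < tf.mean()           # structure-change template mt_t (low similarity = candidate foreground)
    cmask = mt & (ml | mc)        # multi-modal foreground template fusion
    return cmask
```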
The operating principle is as follows:
The structure similarity map is robust to illumination variation and effectively reduces the holes that pure texture differences would otherwise create under changing illumination. With the structure similarity map, a stable difference measure between the current frame image and the background model is obtained; its values vary continuously within the interval [0, 1], which makes fuzzy-feedback background modeling based on the structure similarity map feasible. According to this principle, the background update model is designed as:
Bt(x, y)=(1- βt·α)Bt-1(x,y)+βt·α·It(x,y)
where It(x, y) is the current frame, Bt(x, y) is the current background, Bt-1(x, y) is the background at the previous moment, α denotes the first-level learning rate and βt denotes the second-level learning rate. α is determined by factors such as the video frame rate and the speed of moving objects in the field of view, and is usually set to a fixed empirical value. βt is expressed in terms of the moving-region feedback factor
dt(x, y) = (1 - α)·dt-1(x, y) + α·(1 - ssmt(x, y))
where ssmt(x, y) is the structure similarity map of the current frame and the current background, whose value falls in the interval [0, 1]. The larger ssmt(x, y) is, the more similar the current frame and the background model are at position (x, y), i.e. the lower the probability that the position is foreground, and the faster the background there should be updated. dt(x, y) is the current moving-region feedback factor, whose value also falls in [0, 1]; the larger dt(x, y) is, the more likely it is that position (x, y) contains fluctuation or a new background object, and the background update speed at that position should then be accelerated. dt-1(x, y) is the moving-region feedback factor at the previous moment.
By introducing the fuzzy-feedback background modeling method, the above projection segmentation method uses the structure similarity measure between the current frame and the background model to construct a dual learning rate for updating the background model, which effectively suppresses the influence of the foreground on the background model and solves the problem that existing background modeling methods rapidly absorb foreground features during the background update. Luminance, colour and texture features are fused to segment the foreground, which effectively solves the problems found in other methods of shadows being mistaken for foreground, camouflaged foreground being misjudged as background, and holes in the foreground. The method of the present invention is simple, flexible and easy to implement.
Compared with the prior art, the present invention can divide the projected image input by the projection source client into multiple blocks and deliver the segmented image or video projection onto the multiple surrounding surfaces, forming a wrap-around projection; it can also capture the viewer's gesture actions, which facilitates projection delivery and interaction with people, and it can be applied to interactive games.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the invention.
Specific embodiment
The invention is described in detail below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, the immersive interactive projection system proposed by the present invention comprises a cuboid projector body; projection lens groups are arranged on the four end faces and the top surface of the projector body, each projection lens group comprising a main lens and a secondary lens and being connected to a master controller. The master controller is connected with a projection processing unit and a data communication unit; the data communication unit comprises a data transfer serial port, a GPRS communication module and a wireless routing module, and is connected to a projection source input. The projection processing unit comprises a projection output driving module, a projection output amplification module, a projection segmentation module and an interactive motion capture module. The projection segmentation module divides the projected image from the projection source input into multiple projection surfaces, which are then delivered by the projection lens groups on the end faces, and the interactive motion capture module captures the viewer's actions to control the projected image output by the projection source input.
Preferably, the projection source input includes a mobile phone application (APP) and a computer application.
The projection segmentation method used for the projection segmentation comprises the following steps:
1) Start the video acquisition system of the projection segmentation module and acquire the projection video sequence;
2) After acquisition is complete, take the average of the first N frames as the initial background;
3) Compute the structure similarity map between the current frame image and the background image;
4) Perform background modeling based on the structure-similarity background;
5) Finally, perform foreground segmentation by multi-modal feature fusion, thereby completing the projection segmentation.
In actual operation:
(1) Compute the luminance distortion Sm(Y1, Y2) and the contrast distortion Sv(Y1, Y2) of grayscale image Y2 with respect to grayscale image Y1, where μ1 and μ2 are the local means of Y1 and Y2, σ1 and σ2 are the local deviations of Y1 and Y2, and σ12 is the local covariance of Y1 and Y2; these are computed by the formulas given above.
(2) Obtain the structure similarity map SSIM(Y1, Y2) of Y2 and Y1:
SSIM(Y1,Y2)=Sm(Y1,Y2)·Sv(Y1,Y2)
(3) To improve the system's ability to identify structural distortion, using the structure similarity map obtained in step 3), the structure similarity measure ssmt(x, y) between the current frame Yt and the current background YBt is defined as
ssmt(x, y) = (SSIM(Yt, YBt))^2
and the structure map is thus obtained.
The concrete operation steps of the structure-similarity background modeling in step 4) are as follows:
(1) Moving-region feedback factor:
dt = 0.99 × dt-1 + 0.01 × (1 - ssmt)
where dt is the current moving-region feedback factor and dt-1 is the moving-region feedback factor at the previous moment.
(2) The current fuzzy feedback factor βt is computed from the moving-region feedback factor dt.
(3) According to the fuzzy background update principle, the structure-similarity background is modelled as:
Bt(x, y) = (1 - 0.01 × βt)·Bt-1(x, y) + 0.01 × βt·It(x, y)
where It(x, y) is the current frame, Bt(x, y) is the current background, and Bt-1(x, y) is the background at the previous moment.
The background model is thus obtained according to the fuzzy-feedback background update method of the invention.
The concrete operation steps of the multi-modal feature-fusion foreground segmentation in step 5) are as follows:
The current frame It(x, y) and the current background Bt(x, y) are transformed from the RGB colour space into the YUV colour space. The formula for converting from the RGB colour space to the YUV colour space is:
Y=0.299R+0.587G+0.114B
U=-0.147R-0.289G+0.436B
V=0.615R-0.515G-0.100B
where R, G and B represent the red, green and blue channels of the RGB colour space respectively, Y represents the luminance component of the YUV colour space, and U and V represent the chrominance components of the YUV colour space.
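The colour-space conversion above can be written compactly as a matrix product; a small sketch using the coefficients given above:

```python
import numpy as np

def rgb_to_yuv(rgb):
    """RGB -> YUV conversion with the coefficients given above (rgb values in [0, 255])."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return rgb.astype(np.float64) @ m.T
```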
Compute the change of the current-frame luminance Yt(x, y) relative to the current background luminance YBt(x, y):
lft(x, y) = |Yt(x, y) - YBt(x, y)|
Compute the change of the current-frame chrominance Ut, Vt relative to the current background chrominance UBt, VBt:
cft(x, y) = |Ut(x, y) - UBt(x, y)| + |Vt(x, y) - VBt(x, y)|
Compute the structural change of the current frame relative to the current background:
tft(x, y) = ssmt(x, y)
The above three change features are thus obtained.
(2) The luminance-change foreground template mlt(x, y) is obtained by thresholding lft(x, y), where Tlft(x, y) is the current estimate of the luminance-change threshold. The chrominance-change foreground template mct(x, y) is solved in a similar way to the luminance-change foreground template.
(3) The structure-change foreground template mtt(x, y) is obtained by thresholding tft(x, y) against μtft, the average value of tft(x, y).
(4) Multi-modal foreground template fusion:
cmaskt(x, y) = mtt(x, y) & (mlt(x, y) | mct(x, y))
where & denotes the logical AND operation and | denotes the logical OR operation.
(5) Foreground template post-processing: when cmaskt(x, y) is 1, each pixel within a distance of 11 of it whose luminance change and colour change both exceed their thresholds is reclassified as a foreground pixel, finally yielding the foreground template maskt(x, y).
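A sketch of this post-processing step, interpreting "within a distance of 11" as an 11x11 neighbourhood (an assumption); pixels near a fused-mask pixel whose luminance and colour changes both exceed their thresholds are promoted to foreground:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def refine_foreground(cmask, lf, cf, t_lum, t_chr, window=11):
    """Foreground template post-processing sketch: promote pixels near cmask==1
    whose luminance and colour changes both exceed their thresholds."""
    near_seed = maximum_filter(cmask.astype(np.uint8), size=window) > 0   # within the 11x11 window of a cmask pixel
    promoted = near_seed & (lf > t_lum) & (cf > t_chr)
    return cmask | promoted   # final foreground template mask_t
```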
From the resulting foreground, it can be seen that the present invention handles shadows and camouflage very well.
The basic idea of the skin-colour-based pointing-hand segmentation method is to first extract the moving foreground target of the interacting object, and then segment the pointing hand according to the good clustering properties of human skin colour in the relevant colour spaces.
When a pointing gesture is used for human-computer interaction, the pointing arm stretches naturally, is raised from bottom to top at a certain speed, then pauses briefly in the air, pointing at the target of interest. This process can be divided into three phases: in the first phase, before the pointing gesture occurs, the user's arm hangs naturally and the fingertip point need not be detected; in the second phase, the user's finger moves towards a target and the pointing hand can be regarded as a moving object; in the third phase, the user is pointing at a target and the pointing hand can be approximately considered to be at rest.
Based on the pointing-hand region segmented in the previous section, the fingertip point of the gesture is extracted from geometric feature information. Image moments, as a highly condensed image feature, are not affected by image size, position or orientation [63]; therefore, the centroid coordinates of the pointing-hand region are first determined using the image moment method:
Mpq = Σi Σj i^p · j^q · f(i, j)
where Mpq is called the (p+q)-order moment, the parameter (p+q) is the order of the moment, and f(i, j) is the segmented pointing-hand image region. When p = 1, q = 0 and p = 0, q = 1,
x̄ = M10/M00, ȳ = M01/M00
are called the centroid coordinates of the image.
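A small sketch of the centroid computation from the raw image moments defined above, with the segmented pointing-hand mask as f(i, j):

```python
import numpy as np

def hand_centroid(mask):
    """Centroid (M10/M00, M01/M00) of the segmented pointing-hand region."""
    f = mask.astype(np.float64)      # binary segmentation f(i, j)
    i, j = np.indices(f.shape)
    m00 = f.sum()                    # zeroth-order moment
    m10 = (i * f).sum()              # first-order moments
    m01 = (j * f).sum()
    return m10 / m00, m01 / m00      # centre-of-mass coordinates
```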
Using the above centroid coordinates, the contour point with the largest Euclidean distance from the centroid is found, and the fingertip point coordinates are computed based on the region contour feature information, where c is a constant. When σ0 < θ < σ1, the point is regarded as the fingertip point of the pointing hand.
While the user performs the pointing gesture, the fingertip point is in motion. To prevent the fingertip point detection from being lost or producing false detections, the computed fingertip point is tracked and predicted with a Kalman filter [64-66].
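A sketch of fingertip tracking with a constant-velocity Kalman filter using OpenCV; the state model, the noise covariances and the 2-D (rather than 3-D) state are assumptions, since the text only states that a Kalman filter is used:

```python
import numpy as np
import cv2

def make_fingertip_tracker():
    """Constant-velocity Kalman filter for the 2-D fingertip point (sketch)."""
    kf = cv2.KalmanFilter(4, 2)  # state: (x, y, vx, vy), measurement: (x, y)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

def track_fingertip(kf, measured_xy):
    """Predict the fingertip position, then correct with the detection if available."""
    prediction = kf.predict()
    if measured_xy is not None:
        kf.correct(np.array(measured_xy, np.float32).reshape(2, 1))
    return float(prediction[0, 0]), float(prediction[1, 0])
```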
The key to pointing-gesture recognition is determining the interaction stage. According to the motion characteristics of the pointing gesture, the accumulated inter-frame mean speed Vfinger of the fingertip point within a certain direction range is detected in real time. When the user points at the target of interest, the pointing hand pauses; it can then be determined that the user is in the third interaction phase, and the pointing interaction behaviour is judged, where Tt is the average time per frame and Ptfinger = [Xfinger, Yfinger, Zfinger] are the three-dimensional coordinates of the fingertip point.
When the movement rate of the fingertip point satisfies the following condition, it is judged that the user is pointing at a target, where Vp is the threshold for discriminating the pointing interaction and Np is the number of frames over which the mean speed is computed.
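A sketch of the pointing-state test: the mean inter-frame speed of the fingertip over the last Np frames is compared with the rest threshold Vp. The concrete values of v_p and n_p below are illustrative assumptions, not values given in the text:

```python
import numpy as np

def is_pointing(fingertip_history, frame_time, v_p=0.05, n_p=10):
    """Return True when the mean fingertip speed over the last n_p frames
    falls below the rest threshold v_p (i.e. the user is pointing)."""
    pts = np.asarray(fingertip_history[-(n_p + 1):], dtype=np.float64)
    if len(pts) < n_p + 1:
        return False                                        # not enough history yet
    step = np.linalg.norm(np.diff(pts, axis=0), axis=1)     # inter-frame displacement
    v_finger = step.mean() / frame_time                     # mean inter-frame speed Vfinger
    return v_finger < v_p                                   # at rest => pointing phase
```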
For a pointing interaction satisfying the above condition (2-34), indicating that the user is pointing at a target, the pointing-arm direction vector is computed based on the fingertip feature point and the shoulder feature point of the pointing arm, combined with the main axis of the pointing hand, where Tp = [Xp, Yp, 0] is the spatial target interaction point and c1 is a constant.
Through foreground acquisition, the person's limbs are obtained using image segmentation and adaptive skin-colour detection techniques, so that the person's body language can be judged.
The above embodiments express only several embodiments of the invention, and their description is relatively specific and detailed, but they should not therefore be understood as limiting the scope of the claims of the invention. It should be pointed out that, for a person of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the invention, and these all fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be defined by the appended claims.

Claims (2)

1. An immersive interactive projection system, characterised in that it comprises a cuboid projector body; projection lens groups are arranged on the four end faces and the top surface of the projector body; each projection lens group comprises a main lens and a secondary lens and is connected to a master controller; the master controller is connected with a projection processing unit and a data communication unit; the data communication unit comprises a data transfer serial port, a GPRS communication module and a wireless routing module, and is connected to a projection source input; the projection processing unit comprises a projection output driving module, a projection output amplification module, a projection segmentation module and an interactive motion capture module; the projection segmentation module divides the projected image from the projection source input into multiple projection surfaces, which are then delivered by the projection lens groups on the end faces; and the interactive motion capture module captures the viewer's actions to control the projected image output by the projection source input.
2. The immersive interactive projection system according to claim 1, characterised in that the projection source input includes a mobile phone application (APP) and a computer application.
CN201710072511.8A 2017-02-10 2017-02-10 Immersive interactive projection system Pending CN106713884A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710072511.8A CN106713884A (en) 2017-02-10 2017-02-10 Immersive interactive projection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710072511.8A CN106713884A (en) 2017-02-10 2017-02-10 Immersive interactive projection system

Publications (1)

Publication Number Publication Date
CN106713884A true CN106713884A (en) 2017-05-24

Family

ID=58911066

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710072511.8A Pending CN106713884A (en) 2017-02-10 2017-02-10 Immersive interactive projection system

Country Status (1)

Country Link
CN (1) CN106713884A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101583003A (en) * 2009-04-08 2009-11-18 广东威创视讯科技股份有限公司 Projection display device capable of multi-surface image display
CN103543595A (en) * 2012-07-12 2014-01-29 希杰希界维株式会社 Multi-projection system
CN104202547A (en) * 2014-08-27 2014-12-10 广东威创视讯科技股份有限公司 Method for extracting target object in projection picture, projection interaction method and system thereof
CN105872508A (en) * 2016-06-21 2016-08-17 北京印刷学院 Projector based on intelligent cell phone and method for presenting multimedia


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170524