CN103677274A - Interactive projection method and system based on active vision - Google Patents

Interactive projection method and system based on active vision

Info

Publication number
CN103677274A
CN103677274A (application CN201310724304.8A; granted publication CN103677274B)
Authority
CN
China
Prior art keywords
skin
color
prior probability
colour
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310724304.8A
Other languages
Chinese (zh)
Other versions
CN103677274B (en)
Inventor
沈三明 (Shen Sanming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vtron Group Co Ltd
Original Assignee
Vtron Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vtron Technologies Ltd
Priority to CN201310724304.8A
Publication of CN103677274A
Application granted
Publication of CN103677274B
Legal status: Active (current)
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an interactive projection method and system based on active vision. In the method, a projector projects the picture requiring interaction onto any plane; a shooting device captures the user's operations on the projected interactive picture and transmits the captured information to a processing unit for analysis, so that the corresponding user operation is recognized and human-computer interaction is achieved. The processing unit is trained off-line, which improves detection efficiency, and a new training mechanism reduces the amount of manual work required. During detection, a model updated in real time is used: the current skin-color model depends only on the skin colors of the most recent frames, which effectively removes the influence of illumination on skin color. The algorithm is efficient and fully meets real-time requirements.

Description

Interactive projection method and system based on active vision
Technical field
The present invention relates to the technical field of computer vision, and more specifically to an interactive projection method and system based on active vision, in which the picture requiring interaction is projected by a projection device into the viewing field for the user to watch, and the corresponding user operation is carried out on it.
Background technology
Although multimodal human-computer interaction technologies that integrate vision, hearing, touch, smell, taste and so on are increasingly applied, the hands, as the key action-and-perception model in virtual reality systems, still play an irreplaceable role. Touch systems are currently the newest class of computer input device and the simplest, most convenient and most natural mode of human-computer interaction; they have given multimedia a brand-new look and are an extremely attractive new class of interactive multimedia equipment. With scientific and technological progress, projectors have become ever easier to use: a projector can turn any plane into a display screen, and projectors are widely used in training conferences, classroom instruction, cinemas and the like.
At present, cameras and projectors have gradually entered ordinary people's lives, and automatic gesture recognition using a projector and a camera has become a research hotspot: automatic recognition of gestures enables better human-computer interaction and makes projection more convenient to use. Existing vision-based interactive projection systems are essentially all based on either auxiliary-light localization or fingertip localization, but most existing schemes suffer from weak real-time performance and low robustness.
Patent 200910190517.0 discloses a finger-based interactive projection method. That invention extracts the contour of the finger from video using information such as the color and shape of the hand, records the movement trajectory of the finger, and compares the trajectory against the instructions in a predefined instruction database to judge which operating instruction the trajectory belongs to, thereby achieving human-computer interaction. Patent 200910197516.9 discloses a finger identification method for an interactive demonstration system, which determines the user's operation behavior through finger recognition in a camera-projector demonstration system. That invention is based on image-processing techniques: according to the finger gesture features during a drawing operation, with the geometric spatial position as the main identifying information, the images captured by the camera are analyzed and processed to identify the finger. In an interactive projection system, the hand region must first be segmented from the picture before gestures can be recognized, and segmentation of the hand region has always been a difficulty. In a projection interactive system, the projector's light may cause a person's arm to take on different colors, and the projected picture itself may contain human hands; both make arm segmentation difficult.
Summary of the invention
To overcome at least one defect (deficiency) of the prior art described above, the present invention first provides an interactive projection method based on active vision with high recognition efficiency and accuracy.
Another object of the present invention is to propose an interactive projection system based on active vision.
To achieve these goals, the technical scheme of the present invention is as follows:
An interactive projection method based on active vision, in which a projection device projects the picture requiring interaction onto any plane, a shooting device collects information on the user's operations on the projected interactive picture, and the captured information is transmitted to a processing device for analysis, so as to obtain the corresponding user operation and realize human-computer interaction.
The process by which the processing device analyzes the information is: select a color space insensitive to illumination, extract the hand region using Bayesian estimation, track the hand region and locate the fingertip position, then judge by distance measurement whether the finger touches the projection screen, so as to determine whether a human-computer interaction takes place;
the hand region is extracted by Bayesian estimation in the following concrete manner:
obtain the probabilities P(s), P(c), P(c|s) and P(s|c) by off-line training, where P(s) is the prior probability of skin color in the training process, P(c) is the prior probability of each color in the training process, P(c|s) is the prior probability that a skin-color pixel takes the value c, and P(s|c) is the probability, after training, that a pixel with value c is skin;
P(s|c)=P(c|s)P(s)/P(c) (1)
a hysteresis threshold T_max on P(s|c) is obtained in the off-line training process from the probability distribution graph, i.e. from the probability distribution interval;
skin-color detection then extracts the hand region adaptively: the currently detected skin-color points are judged according to the skin-color regions recognized in the most recent w frames, whose prior probabilities are P_w(s), P_w(c) and P_w(c|s), where P_w(s) is the prior probability of skin color in the most recent w frames, P_w(c) is the prior probability of each color in the most recent w frames, and P_w(c|s) is the prior probability that a skin-color pixel in the most recent w frames takes the value c; the probability P'(s|c) that a detected pixel belongs to a skin-color region is:
P'(s|c)=γP(s|c)+(1-γ)P_w(s|c) (2)
where P_w(s|c) is obtained through formula (1), i.e. P_w(s|c)=P_w(c|s)P_w(s)/P_w(c), and γ is a control coefficient related to the training set of the training stage;
when P'(s|c) > T_max, the current pixel is detected as skin color and the hand region is thereby extracted; otherwise it does not belong to the skin-color region.
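For instance (illustrative numbers, not from the patent): with γ = 0.7, an off-line posterior P(s|c) = 0.6 and a recent-window posterior P_w(s|c) = 0.9, formula (2) gives P'(s|c) = 0.7 × 0.6 + 0.3 × 0.9 = 0.69; the pixel is then accepted as skin exactly when T_max < 0.69.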
Many hand-region extraction methods exist at present; the most common are based on hand color, on hand shape, or on color together with physical calibration. The color spaces currently used for skin-color detection mainly include RGB, normalized RGB, HSV, YCrCb, YUV, etc. In this method, a color space insensitive to illumination is adopted for skin-color detection, which makes detection more robust. Given a color space, the simplest way to judge which colors constitute skin is to impose a restrictive condition on the selected space, namely the hysteresis threshold T_max. This hysteresis threshold is derived empirically: given a series of skin-color region images, the distribution of the skin-color region can be obtained.
When extracting the hand region this method uses off-line Bayesian learning, and subsequently uses a non-Bayesian framework for tracking. Compared with existing algorithms, this method has the following advantages: (1) the skin-color distribution is obtained by off-line learning, which greatly increases detection speed; (2) a skin-color model that changes in real time is used, related only to the preceding several frames, so that even without a complicated skin-color model the skin-color region can be identified robustly and effectively under changing light conditions; (3) the algorithm is highly efficient and achieves real-time processing.
A system applying the described interactive projection method based on active vision, comprising a projection device for projecting the picture requiring interaction onto any plane, a shooting device for collecting information on the user's operations on the projected interactive picture, and a processing device for information analysis.
The process by which the processing device analyzes the information is: select a color space insensitive to illumination, extract the hand region using Bayesian estimation, track the hand region and locate the fingertip position, then judge by distance measurement whether the finger touches the projection screen, so as to determine whether a human-computer interaction takes place;
the hand region is extracted by Bayesian estimation in the following concrete manner:
obtain the probabilities P(s), P(c), P(c|s) and P(s|c) by off-line training, where P(s) is the prior probability of skin color in the training process, P(c) is the prior probability of each color in the training process, P(c|s) is the prior probability that a skin-color pixel takes the value c, and P(s|c) is the probability, after training, that a pixel with value c is skin;
P(s|c)=P(c|s)P(s)/P(c) (1)
a hysteresis threshold T_max on P(s|c) is obtained in the off-line training process;
skin-color detection then extracts the hand region adaptively: the currently detected skin-color points are judged according to the skin-color regions recognized in the most recent w frames, whose prior probabilities are P_w(s), P_w(c) and P_w(c|s), where P_w(s) is the prior probability of skin color in the most recent w frames, P_w(c) is the prior probability of each color in the most recent w frames, and P_w(c|s) is the prior probability that a skin-color pixel in the most recent w frames takes the value c; the probability P'(s|c) that a detected pixel belongs to a skin-color region is:
P'(s|c)=γP(s|c)+(1-γ)P_w(s|c) (2)
where P_w(s|c) is obtained through formula (1), i.e. P_w(s|c)=P_w(c|s)P_w(s)/P_w(c), and γ is a control coefficient related to the training set of the training stage;
when P'(s|c) > T_max, the current pixel is detected as skin color and the hand region is thereby extracted; otherwise it does not belong to the skin-color region.
Compared with the prior art, the beneficial effects of the technical solution of the present invention are: the present invention adopts off-line training, which improves detection efficiency; a new training mechanism is proposed for the training data, reducing a large amount of manual operation; during detection a model updated in real time is used, the current skin-color model being related only to the preceding several frames, which effectively removes the influence of illumination on skin color; and the algorithm proposed in this patent is highly efficient and fully meets real-time requirements.
Accompanying drawing explanation
Fig. 1 is a schematic diagram of the interactive projection system of the present invention.
Fig. 2 is the flow chart of the information analysis performed by the processing device of the present invention.
Fig. 3 illustrates the principle of triangulation.
Embodiment
The accompanying drawings are for exemplary illustration only and shall not be interpreted as limiting this patent;
for better explanation of the present embodiment, some parts of the drawings are omitted, enlarged or reduced, and do not represent the size of the actual product;
it will be understood by those skilled in the art that some known structures in the drawings, and their explanations, may be omitted.
Fig. 1 shows the schematic diagram of the projection system of the present invention, which comprises a projector, two cameras and a computing device. The main role of the projector is to project the picture onto an arbitrary plane. The cameras capture the projected picture and transmit it to the computing device. The computing device mainly analyzes the data collected by the cameras; the flow chart of its analysis is shown in Fig. 2.
First, the human hand is extracted from the complex background, i.e. the hand portion is separated out of the whole image. This involves two problems: image segmentation and judging the hand region. Image segmentation generally belongs to low-level feature extraction and mainly uses the geometric information, color information and motion information of the hand: geometric information includes the shape and contour of the hand, while motion information refers to its movement trajectory. Extracting the hand region lays the foundation for accurately locating the fingertip later, and can usually be realized by methods such as gray thresholding, edge-detection operators, or frame differencing. In the present invention, in order to remove the influence of the projector's light, a method based on Bayesian estimation is adopted for arm extraction. After the position of the hand is found, the hand region is tracked continuously while the fingertip is located accurately. Many fingertip localization methods exist, such as special markers, contour analysis, and the Hough circle transform.
Finally, the distance from the camera to the finger is computed to judge whether the finger touches the projection screen. This system mainly adopts triangulation: the distance is calculated by three-dimensional reconstruction based on binocular vision.
In the present embodiment, skin color is extracted by the method of Bayesian estimation:
Many hand-region detection methods exist at present; the most common are based on hand color, on hand shape, or on color and physical calibration. The biggest problem of current algorithms is sensitivity to light; the second is algorithmic efficiency. The color spaces currently used for skin-color detection mainly include RGB, normalized RGB, HSV, YCrCb, YUV, etc. When performing skin-color detection, priority can be given to a color space insensitive to illumination, which makes detection more robust. Given a color space, the simplest way to judge which colors constitute skin is to impose a restrictive condition on the selected space, namely a hysteresis threshold, derived empirically: given a series of skin-color region images, the distribution of the skin-color region can be obtained. The present embodiment detects the skin-color region by off-line Bayesian learning and subsequently tracks it with a non-Bayesian framework. Compared with existing algorithms, the present invention has the following advantages: (1) the skin-color distribution is obtained by off-line learning, which greatly increases detection speed; (2) a skin-color model that changes in real time is used, related only to the preceding several frames, so that even without a complicated model the skin-color region can be identified robustly and effectively under changing light; (3) the algorithm is highly efficient and achieves real-time processing.
1.1 Skin-color detection
Skin-color detection mainly comprises the following parts: (a) estimating the probability that a given pixel belongs to skin;
(b) obtaining the hysteresis threshold T_max from the probability distribution graph. Skin-color detection adopts the Bayesian method, and mainly comprises an off-line training process and an iterative adaptive detection process.
A. Training and testing mechanism
First a series of pictures containing skin-color regions is given, and the skin-color parts of the pictures are selected manually; the color space adopted is YUV 4:2:2. However, in the concrete training and recognition of the present embodiment the Y component is not used, for two main reasons: (1) the Y component is related to pixel brightness, so removing it effectively reduces the influence of illumination on detection; (2) with the Y component removed, the dimensionality of the image is reduced relative to YUV, which greatly improves the efficiency of the whole process.
Suppose a pixel is I(x, y) with pixel value c = c(x, y). The training process mainly calculates the following values: (1) the prior probability P(s) of skin color; (2) the prior probability P(c) with which each color occurs in the training data; (3) the prior probability P(c|s) that a skin-color pixel takes the value c. After training, the probability P(s|c) that a pixel with value c is skin is obtained by the Bayesian rule:
P(s|c)=P(c|s)P(s)/P(c) (1)
The hysteresis threshold T_max can then be determined; in the concrete implementation, when the probability that a pixel belongs to skin is greater than the hysteresis threshold T_max, that pixel is taken as skin.
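Purely as an illustration of this training stage (not text from the patent), the histogram counting can be sketched in Python with OpenCV; the bin count BINS, the helper names and the mask format are assumptions:

```python
import cv2
import numpy as np

BINS = 64  # assumed number of histogram bins per U/V channel

def train_skin_prior(images, masks, bins=BINS):
    """Accumulate P(s), P(c), P(c|s) over manually labeled training images.

    images: list of BGR frames; masks: binary arrays marking skin pixels.
    The Y component is discarded; colors are quantized (U, V) pairs."""
    hist_c = np.zeros((bins, bins), np.float64)   # counts of every color c
    hist_cs = np.zeros((bins, bins), np.float64)  # counts of c inside skin
    n_total = n_skin = 0
    for img, mask in zip(images, masks):
        yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
        uv = (yuv[..., 1:3].astype(np.int32) * bins) // 256  # drop Y, quantize
        u, v = uv[..., 0].ravel(), uv[..., 1].ravel()
        skin = mask.ravel().astype(bool)
        np.add.at(hist_c, (u, v), 1)
        np.add.at(hist_cs, (u[skin], v[skin]), 1)
        n_total += u.size
        n_skin += int(skin.sum())
    P_s = n_skin / n_total                   # prior probability of skin
    P_c = hist_c / n_total                   # prior probability of each color
    P_c_given_s = hist_cs / max(n_skin, 1)   # color distribution within skin
    return P_s, P_c, P_c_given_s

def posterior_skin(P_s, P_c, P_c_given_s):
    """Bayesian rule (1): P(s|c) = P(c|s) P(s) / P(c), per (U, V) bin."""
    with np.errstate(divide="ignore", invalid="ignore"):
        post = P_c_given_s * P_s / P_c
    return np.nan_to_num(post)  # unseen colors get probability 0
```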
B. Simplified off-line training
Training is completed off-line, so it does not affect on-line detection efficiency. However, obtaining sufficient training data is very time-consuming. To address this, the present embodiment adopts an adaptive training process: first train with a very small data set, then recognize skin-color regions in a large amount of data using the hysteresis threshold, updating the prior probabilities P(s), P(c) and P(c|s) in real time, and continually separate the skin-color and non-skin-color regions of the pictures with the updated threshold. If the classifier produces a wrong result, manual intervention is needed to correct the mistake, but the method still completes most of the required work. For a more accurate result more training data can be input; once the training result meets the demand, training stops immediately. A self-training loop of this kind is sketched below.
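A minimal sketch of this self-training loop, reusing the helpers above; the round count and the omission of the manual-correction step are assumptions made for brevity:

```python
def bootstrap_training(seed_imgs, seed_masks, unlabeled_imgs, t_max, rounds=3):
    """Train on a tiny labeled set, then let the current classifier label a
    large unlabeled set and fold those labels back into the priors."""
    P_s, P_c, P_cs = train_skin_prior(seed_imgs, seed_masks)
    for _ in range(rounds):
        post = posterior_skin(P_s, P_c, P_cs)
        machine_masks = []
        for img in unlabeled_imgs:
            yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
            uv = (yuv[..., 1:3].astype(np.int32) * BINS) // 256
            machine_masks.append(post[uv[..., 0], uv[..., 1]] > t_max)
        # retrain on seed plus machine-labeled data (manual fixes omitted here)
        P_s, P_c, P_cs = train_skin_prior(seed_imgs + unlabeled_imgs,
                                          seed_masks + machine_masks)
    return P_s, P_c, P_cs
```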
C. Adaptive skin-color detection
Even with the UV model, some wrong recognition results are still obtained when illumination keeps changing. To address this, the current skin-color points must be judged according to the skin-color regions recognized in the preceding several frames. This patent therefore adopts two groups of prior probabilities: the off-line training priors P(s), P(c) and P(c|s), and the priors P_w(s), P_w(c) and P_w(c|s) of the most recent w frames. The priors of the most recent w frames reflect the current skin-color state and adapt better to the current lighting. Skin color is defined by the following formula:
P'(s|c)=γP(s|c)+(1-γ)P_w(s|c) (2)
where P(s|c) and P_w(s|c) are both obtained from formula (1), being the posterior of the whole training set and that of the most recent w frames respectively, and γ is a control coefficient related to the training set of the detection stage. When P'(s|c) > T_max, the current pixel is detected as skin color and the hand region is thereby extracted; otherwise it does not belong to the skin-color region.
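Formula (2) can be sketched as follows, again reusing the helpers above; the window length w, the default γ and the deque bookkeeping are illustrative assumptions:

```python
from collections import deque

class AdaptiveSkinDetector:
    """Blend the off-line posterior with a posterior built from the most
    recent w frames, per formula (2): P'(s|c) = γP(s|c) + (1-γ)P_w(s|c)."""

    def __init__(self, offline_post, gamma=0.7, w=5):
        self.offline_post = offline_post  # P(s|c) table from off-line training
        self.gamma = gamma
        self.recent = deque(maxlen=w)     # per-frame (hist_c, hist_cs, n, n_skin)

    def detect(self, frame_bgr, t_max):
        yuv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV)
        uv = (yuv[..., 1:3].astype(np.int32) * BINS) // 256
        u, v = uv[..., 0], uv[..., 1]
        post = self.offline_post
        if self.recent:  # fold in the recent-window posterior P_w(s|c)
            hc = sum(r[0] for r in self.recent)
            hcs = sum(r[1] for r in self.recent)
            n = sum(r[2] for r in self.recent)
            ns = sum(r[3] for r in self.recent)
            pw = posterior_skin(ns / n, hc / n, hcs / max(ns, 1))
            post = self.gamma * self.offline_post + (1 - self.gamma) * pw
        skin_mask = post[u, v] > t_max  # threshold against T_max
        self._remember(u, v, skin_mask)
        return skin_mask

    def _remember(self, u, v, skin_mask):
        hist_c = np.zeros((BINS, BINS))
        hist_cs = np.zeros((BINS, BINS))
        np.add.at(hist_c, (u.ravel(), v.ravel()), 1)
        s = skin_mask.ravel()
        np.add.at(hist_cs, (u.ravel()[s], v.ravel()[s]), 1)
        self.recent.append((hist_c, hist_cs, u.size, int(s.sum())))
```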
1.2 Skin-color tracking
Since only the hand region is tracked, this is single-target tracking, and the present embodiment adopts the Camshift algorithm, which is by now very mature and widely applied in motion tracking and image segmentation. It runs the meanShift operation on every frame of the video; meanShift is a variable-step gradient-ascent algorithm, and the size and center of the search window obtained for the previous frame serve as the initial values of the meanShift search window for the next frame. Iterating in this way realizes tracking of the target. The algorithm proceeds as follows:
(1) initialize the search window; (2) compute the color probability distribution of the search window; (3) run the meanShift algorithm to obtain the size and position of the search window; (4) in the next video frame, reinitialize the size and position of the search window with the result of step (3), then jump to step (2) and continue.
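For illustration, OpenCV exposes this loop directly through cv2.CamShift; using the skin posterior as the back-projection image and the termination criteria below are assumptions:

```python
def track_hand(video, init_window, skin_post):
    """Single-target Camshift loop: the window found in one frame seeds the
    meanShift/CamShift search in the next frame."""
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    window = init_window  # (x, y, w, h) around the detected hand region
    while True:
        ok, frame = video.read()
        if not ok:
            break
        yuv = cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)
        uv = (yuv[..., 1:3].astype(np.int32) * BINS) // 256
        # color probability distribution of the frame: the skin posterior as
        # a back-projection image, scaled to 8-bit for cv2.CamShift
        backproj = (np.clip(skin_post[uv[..., 0], uv[..., 1]], 0, 1)
                    * 255).astype(np.uint8)
        rot_box, window = cv2.CamShift(backproj, window, term)
        yield rot_box, window
```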
1.3 Fingertip localization
After the arm is extracted, the next task is to locate the fingertip position in the image. There are many ways to find the fingertip, for example computing the approximate K-curvature of the contour and taking the fingertip location at the extremum of the K-curvature. The present invention finds the fingertip through the following steps:
1) find the largest contour and fill it, obtaining an arm foreground image free of noise;
2) compute the convex hull of the contour;
3) compute the centroid of the arm and find several candidate points of maximum curvature on the convex hull;
4) take the candidate point farthest from the centroid as the fingertip.
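A sketch of steps 1)-4) with OpenCV contour utilities; taking the convex-hull vertices themselves as the maximum-curvature candidates is a simplifying assumption:

```python
def find_fingertip(skin_mask):
    """Steps 1)-4): largest filled contour -> convex hull -> centroid ->
    the hull point farthest from the centroid is taken as the fingertip."""
    mask = skin_mask.astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    arm = max(contours, key=cv2.contourArea)             # 1) largest contour
    clean = np.zeros_like(mask)
    cv2.drawContours(clean, [arm], -1, 255, cv2.FILLED)  # noise-free foreground
    hull = cv2.convexHull(arm)                           # 2) convex hull
    m = cv2.moments(arm)                                 # 3) arm centroid
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    pts = hull.reshape(-1, 2).astype(np.float64)
    d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    return tuple(pts[int(np.argmax(d))])                 # 4) farthest point
```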
1.4 Distance measurement
After the fingertip position is found, the distance from the fingertip to the camera must be obtained. According to the principles of stereo geometry, the first task is to calibrate the projector and the cameras: an accurate and simple calibration process is the key to three-dimensional measurement with a projector-camera active-vision system. Camera calibration methods are very mature; Zhang Zhengyou's calibration method is generally adopted at present, and calibrating a camera requires only a planar chessboard. Once the correspondence between the three-dimensional coordinate points and the two-dimensional points of the projected image is found, the intrinsic and extrinsic parameters of the projector can be solved.
In the present embodiment the projector is calibrated through the following steps (a sketch of the camera stage follows the list):
S1. calibrate the camera;
S2. prepare a whiteboard with a paper chessboard pasted on it;
S3. control the projector to project a chessboard onto the whiteboard;
S4. extract the corner points of the two chessboards respectively;
S5. compute the plane of the whiteboard from the paper chessboard corners;
S6. use the calibrated camera to compute the three-dimensional coordinates of the projected chessboard corners;
S7. combine the three-dimensional corner coordinates with the original image projected by the projector to compute the intrinsic and extrinsic parameters of the projector.
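A compressed sketch of the camera stage (S1/S4) using OpenCV's chessboard tools, which implement Zhang Zhengyou's method; the board size, square size and function names are assumptions, and the projector stage S5-S7 is omitted:

```python
def calibrate_camera(gray_views, board=(9, 6), square=25.0):
    """S1/S4 sketch: detect chessboard corners in several grayscale views and
    solve the camera intrinsics via OpenCV's implementation of Zhang's method."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for gray in gray_views:
        ok, corners = cv2.findChessboardCorners(gray, board)
        if ok:  # keep only views where the full board was found
            obj_pts.append(objp)
            img_pts.append(corners)
    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, gray_views[0].shape[::-1], None, None)
    return K, dist  # intrinsic matrix and distortion coefficients
```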
The intrinsic and extrinsic parameters of the projector are obtained through the above steps. The distance from the fingertip to the camera can then be calculated by triangulation, as shown in Fig. 3, where P is the observation point, O_l and O_r are the two cameras, T is the distance between the two cameras, Z is the vertical distance from the cameras to the observation point, f is the focal length of the cameras, and x_l and x_r are the horizontal coordinates of the observation point in the two images (measured relative to the respective image centers).
The value of Z is easily derived from similar triangles, as shown in Fig. 3:
(T - (x_l - x_r)) / (Z - f) = T / Z  ⟹  Z = fT / (x_l - x_r)   (4)
Using the same triangulation principle, the distance from the cameras to the projection screen can also be computed easily, from which the distance of the finger from the screen is deduced; if that distance is less than a specific threshold, a click event is considered to have occurred. From the position of the fingertip on the screen, together with the earlier geometric calibration, the mouse cursor can be positioned at the fingertip to simulate a mouse click event. Human-computer interaction is thus realized, achieving the aim of turning any projection plane into a touch screen.
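Formula (4) and the click test reduce to a few lines; the threshold value and its unit are assumptions:

```python
def fingertip_depth(x_l, x_r, f, T):
    """Formula (4): Z = f*T / (x_l - x_r), the disparity being taken between
    the fingertip's horizontal image coordinates in the two cameras."""
    disparity = x_l - x_r
    if disparity <= 0:
        return float("inf")  # degenerate or mismatched points: treat as far
    return f * T / disparity

def is_click(finger_z, screen_z, threshold_mm=10.0):
    """Declare a click when the fingertip lies within `threshold_mm` of the
    projection screen (units follow whatever Z was computed in)."""
    return abs(screen_z - finger_z) < threshold_mm
```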
The present invention separates the arm by the method of Bayesian estimation, adds a motion-tracking system to the interactive projection system, and improves the efficiency of the algorithm. It further proposes a new data-training mechanism, and updates the skin-color model in real time during skin-color detection, all of which greatly improve the robustness and efficiency of the algorithm. The invention has also been verified by many tests; the results show that the system is fast, reliable and stable.
Obviously, the above embodiment of the present invention is only an example for clearly describing the present invention and is not a limitation on embodiments of the present invention. Those of ordinary skill in the art can make other changes of different forms on the basis of the above description; it is unnecessary and impossible to exhaustively list all embodiments here. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the claims of the present invention.

Claims (10)

1. An interactive projection method based on active vision, wherein a projection device projects the picture requiring interaction onto any plane, a shooting device collects information on the user's operations on the projected interactive picture, and the captured information is transmitted to a processing device for analysis, so as to obtain the corresponding user operation and realize human-computer interaction,
characterized in that the process by which the processing device analyzes the information is: select a color space insensitive to illumination, extract the hand region using Bayesian estimation, track the hand region and locate the fingertip position, then judge by distance measurement whether the finger touches the projection screen, so as to determine whether a human-computer interaction takes place;
the hand region is extracted by Bayesian estimation in the following concrete manner:
obtain the probabilities P(s), P(c), P(c|s) and P(s|c) by off-line training, where P(s) is the prior probability of skin color in the training process, P(c) is the prior probability of each color in the training process, P(c|s) is the prior probability that a skin-color pixel takes the value c, and P(s|c) is the probability, after training, that a pixel with value c is skin;
P(s|c)=P(c|s)P(s)/P(c) (1)
a hysteresis threshold T_max on P(s|c) is obtained in the off-line training process;
skin-color detection then extracts the hand region adaptively: the currently detected skin-color points are judged according to the skin-color regions recognized in the most recent w frames, whose prior probabilities are P_w(s), P_w(c) and P_w(c|s), where P_w(s) is the prior probability of skin color in the most recent w frames, P_w(c) is the prior probability of each color in the most recent w frames, and P_w(c|s) is the prior probability that a skin-color pixel in the most recent w frames takes the value c; the probability P'(s|c) that a detected pixel belongs to a skin-color region is:
P'(s|c)=γP(s|c)+(1-γ)P_w(s|c) (2)
where P_w(s|c) is obtained through formula (1), i.e. P_w(s|c)=P_w(c|s)P_w(s)/P_w(c), and γ is a control coefficient related to the training set of the training stage;
when P'(s|c) > T_max, the current pixel is detected as skin color and the hand region is thereby extracted; otherwise it does not belong to the skin-color region.
2. The interactive projection method based on active vision according to claim 1, wherein the color space adopted is YUV 4:2:2.
3. The interactive projection method based on active vision according to claim 2, wherein the Y component of the YUV 4:2:2 color space is not used.
4. The interactive projection method based on active vision according to claim 1, wherein the detailed process of the off-line training is: first train with a small data set, then recognize skin-color regions in a large amount of data using the hysteresis threshold, updating the prior probabilities P(s), P(c) and P(c|s) in real time, while continually separating the skin-color and non-skin-color regions of the pictures with the updated hysteresis threshold.
5. The interactive projection method based on active vision according to claim 1, wherein the hand region is tracked by the Camshift algorithm: the size and center of the search window resulting from the previous frame are taken as the initial values of the meanShift search window in the next frame, and iterating in this way realizes tracking of the target.
6. The interactive projection method based on active vision according to claim 5, wherein the concrete implementation of tracking the hand region is:
1) initialize the search window;
2) compute the color probability distribution of the search window;
3) run the meanShift algorithm to obtain the size and position of the search window;
4) in the next video frame, reinitialize the size and position of the search window with the result of step 3), then jump to step 2) and continue.
7. The interactive projection method based on active vision according to claim 1, wherein the fingertip position is located as follows:
a) find the largest contour and fill it, obtaining an arm foreground image free of noise;
b) compute the convex hull of the contour;
c) compute the centroid of the arm and find several candidate points of maximum curvature on the convex hull;
d) take the candidate point farthest from the centroid as the fingertip.
8. The interactive projection method based on active vision according to claim 1, wherein the distance measurement measures the distance from the fingertip to the camera, in the following concrete manner: calibrate the projector and the camera, the projector being calibrated as follows:
S1. calibrate the camera;
S2. prepare a whiteboard with a paper chessboard pasted on it;
S3. control the projector to project a chessboard onto the whiteboard;
S4. extract the corner points of the two chessboards respectively;
S5. compute the plane of the whiteboard from the paper chessboard corners;
S6. use the calibrated camera to compute the three-dimensional coordinates of the projected chessboard corners;
S7. combine the three-dimensional corner coordinates with the original image projected by the projector to compute the intrinsic and extrinsic parameters of the projector;
S8. calculate the distance Z from the fingertip to the camera by triangulation; when the distance Z is less than a specific threshold, a click event is considered to have occurred.
9. A system applying the interactive projection method based on active vision of any one of claims 1 to 8, comprising a projection device for projecting the picture requiring interaction onto any plane, a shooting device for collecting information on the user's operations on the projected interactive picture, and a processing device for information analysis,
characterized in that the process by which the processing device analyzes the information is: select a color space insensitive to illumination, extract the hand region using Bayesian estimation, track the hand region and locate the fingertip position, then judge by distance measurement whether the finger touches the projection screen, so as to determine whether a human-computer interaction takes place;
the hand region is extracted by Bayesian estimation in the following concrete manner:
obtain the probabilities P(s), P(c), P(c|s) and P(s|c) by off-line training, where P(s) is the prior probability of skin color in the training process, P(c) is the prior probability of each color in the training process, P(c|s) is the prior probability that a skin-color pixel takes the value c, and P(s|c) is the probability, after training, that a pixel with value c is skin;
P(s|c)=P(c|s)P(s)/P(c) (1)
a hysteresis threshold T_max on P(s|c) is obtained in the off-line training process;
skin-color detection then extracts the hand region adaptively: the currently detected skin-color points are judged according to the skin-color regions recognized in the most recent w frames, whose prior probabilities are P_w(s), P_w(c) and P_w(c|s), where P_w(s) is the prior probability of skin color in the most recent w frames, P_w(c) is the prior probability of each color in the most recent w frames, and P_w(c|s) is the prior probability that a skin-color pixel in the most recent w frames takes the value c; the probability P'(s|c) that a detected pixel belongs to a skin-color region is:
P'(s|c)=γP(s|c)+(1-γ)P_w(s|c) (2)
where P_w(s|c) is obtained through formula (1), i.e. P_w(s|c)=P_w(c|s)P_w(s)/P_w(c), and γ is a control coefficient related to the training set of the training stage;
when P'(s|c) > T_max, the current pixel is detected as skin color and the hand region is thereby extracted; otherwise it does not belong to the skin-color region.
10. The system according to claim 9, characterized in that the shooting device comprises two cameras.
CN201310724304.8A 2013-12-24 2013-12-24 Interactive projection method and system based on active vision Active CN103677274B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310724304.8A CN103677274B (en) 2013-12-24 2013-12-24 Interactive projection method and system based on active vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310724304.8A CN103677274B (en) 2013-12-24 2013-12-24 Interactive projection method and system based on active vision

Publications (2)

Publication Number Publication Date
CN103677274A true CN103677274A (en) 2014-03-26
CN103677274B CN103677274B (en) 2016-08-24

Family

ID=50315082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310724304.8A Active CN103677274B (en) 2013-12-24 2013-12-24 Interactive projection method and system based on active vision

Country Status (1)

Country Link
CN (1) CN103677274B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104202547A (en) * 2014-08-27 2014-12-10 广东威创视讯科技股份有限公司 Method for extracting target object in projection picture, projection interaction method and system thereof
CN104331158A (en) * 2014-10-29 2015-02-04 山东大学 Gesture-controlled human-computer interaction method and device
CN104778460A (en) * 2015-04-23 2015-07-15 福州大学 Monocular gesture recognition method under complex background and illumination
WO2015149712A1 (en) * 2014-04-03 2015-10-08 华为技术有限公司 Pointing interaction method, device and system
CN105371784A (en) * 2015-12-24 2016-03-02 吉林大学 Machine vision based holographic man-machine interaction system for automotive inspection
CN105869166A (en) * 2016-03-29 2016-08-17 北方工业大学 Human body action identification method and system based on binocular vision
CN106204604A (en) * 2016-04-29 2016-12-07 北京仁光科技有限公司 Projection touch control display apparatus and exchange method thereof
CN106991417A (en) * 2017-04-25 2017-07-28 华南理工大学 A kind of visual projection's interactive system and exchange method based on pattern-recognition
CN107343184A (en) * 2016-05-03 2017-11-10 中兴通讯股份有限公司 Projecting apparatus processing method, device and terminal
CN109190357A (en) * 2018-08-30 2019-01-11 袁精侠 A kind of gesture identifying code implementation method carrying out man-machine verifying merely with cache resources
CN109410274A (en) * 2018-10-08 2019-03-01 武汉工程大学 Typical noncooperative target key point real-time location method under the conditions of a kind of high frame frequency
CN109683719A (en) * 2019-01-30 2019-04-26 华南理工大学 A kind of visual projection's exchange method based on YOLOv3
CN109934105A (en) * 2019-01-30 2019-06-25 华南理工大学 A kind of virtual elevator interactive system and method based on deep learning
CN111886567A (en) * 2018-03-07 2020-11-03 日本电气方案创新株式会社 Operation input device, operation input method, and computer-readable recording medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101430760A (en) * 2008-11-18 2009-05-13 北方工业大学 Human face super-resolution processing method based on linear and Bayesian probability mixed model
CN102298694A (en) * 2011-06-21 2011-12-28 广东爱科数字科技有限公司 Man-machine interaction identification system applied to remote information service

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHAI D. et al.: "A Bayesian skin/non-skin color classifier using non-parametric density estimation", Proceedings of the 2003 International Symposium on […] *
LIU Yujin et al.: "A deformable gesture tracking method under skin-color interference", Computer Engineering and Applications *
WANG Xingjie: "Research on moving-target detection technology based on machine vision", China Master's Theses Full-text Database *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104978012B (en) * 2014-04-03 2018-03-16 华为技术有限公司 One kind points to exchange method, apparatus and system
WO2015149712A1 (en) * 2014-04-03 2015-10-08 华为技术有限公司 Pointing interaction method, device and system
CN104978012A (en) * 2014-04-03 2015-10-14 华为技术有限公司 Pointing interactive method, device and system
US10466797B2 (en) 2014-04-03 2019-11-05 Huawei Technologies Co., Ltd. Pointing interaction method, apparatus, and system
CN104202547A (en) * 2014-08-27 2014-12-10 广东威创视讯科技股份有限公司 Method for extracting target object in projection picture, projection interaction method and system thereof
CN104202547B (en) * 2014-08-27 2017-10-10 广东威创视讯科技股份有限公司 Method, projection interactive approach and its system of target object are extracted in projected picture
CN104331158A (en) * 2014-10-29 2015-02-04 山东大学 Gesture-controlled human-computer interaction method and device
CN104331158B (en) * 2014-10-29 2018-05-25 山东大学 The man-machine interaction method and device of a kind of gesture control
CN104778460A (en) * 2015-04-23 2015-07-15 福州大学 Monocular gesture recognition method under complex background and illumination
CN104778460B (en) * 2015-04-23 2018-05-04 福州大学 A kind of monocular gesture identification method under complex background and illumination
CN105371784A (en) * 2015-12-24 2016-03-02 吉林大学 Machine vision based holographic man-machine interaction system for automotive inspection
CN105869166A (en) * 2016-03-29 2016-08-17 北方工业大学 Human body action identification method and system based on binocular vision
CN105869166B (en) * 2016-03-29 2018-07-10 北方工业大学 A kind of human motion recognition method and system based on binocular vision
CN106204604A (en) * 2016-04-29 2016-12-07 北京仁光科技有限公司 Projection touch control display apparatus and exchange method thereof
CN106204604B (en) * 2016-04-29 2019-04-02 北京仁光科技有限公司 Project touch control display apparatus and its exchange method
CN107343184A (en) * 2016-05-03 2017-11-10 中兴通讯股份有限公司 Projecting apparatus processing method, device and terminal
CN106991417A (en) * 2017-04-25 2017-07-28 华南理工大学 A kind of visual projection's interactive system and exchange method based on pattern-recognition
WO2018196370A1 (en) * 2017-04-25 2018-11-01 华南理工大学 Pattern recognition-based visual projection interaction system and interaction method
CN111886567A (en) * 2018-03-07 2020-11-03 日本电气方案创新株式会社 Operation input device, operation input method, and computer-readable recording medium
CN111886567B (en) * 2018-03-07 2023-10-20 日本电气方案创新株式会社 Operation input device, operation input method, and computer-readable recording medium
CN109190357A (en) * 2018-08-30 2019-01-11 袁精侠 A kind of gesture identifying code implementation method carrying out man-machine verifying merely with cache resources
CN109190357B (en) * 2018-08-30 2021-08-06 袁精侠 Gesture verification code implementation method for man-machine verification by only utilizing cache resources
CN109410274A (en) * 2018-10-08 2019-03-01 武汉工程大学 Typical noncooperative target key point real-time location method under the conditions of a kind of high frame frequency
CN109410274B (en) * 2018-10-08 2022-03-15 武汉工程大学 Method for positioning typical non-cooperative target key points in real time under high frame frequency condition
CN109683719A (en) * 2019-01-30 2019-04-26 华南理工大学 A kind of visual projection's exchange method based on YOLOv3
CN109934105A (en) * 2019-01-30 2019-06-25 华南理工大学 A kind of virtual elevator interactive system and method based on deep learning

Also Published As

Publication number Publication date
CN103677274B (en) 2016-08-24

Similar Documents

Publication Publication Date Title
CN103677274B (en) Interactive projection method and system based on active vision
CN110232311B (en) Method and device for segmenting hand image and computer equipment
CN103941866B (en) Three-dimensional gesture recognizing method based on Kinect depth image
CN102402680B (en) Hand and indication point positioning method and gesture confirming method in man-machine interactive system
CN108446585A (en) Method for tracking target, device, computer equipment and storage medium
CN104317391A (en) Stereoscopic vision-based three-dimensional palm posture recognition interactive method and system
EP3035235B1 (en) Method for setting a tridimensional shape detection classifier and method for tridimensional shape detection using said shape detection classifier
CN104992171A (en) Method and system for gesture recognition and man-machine interaction based on 2D video sequence
CN102831382A (en) Face tracking apparatus and method
CN102831439A (en) Gesture tracking method and gesture tracking system
CN103383731A (en) Projection interactive method and system based on fingertip positioning and computing device
CN103105924B (en) Man-machine interaction method and device
CN112330730B (en) Image processing method, device, equipment and storage medium
Tan et al. Dynamic hand gesture recognition using motion trajectories and key frames
CN103598870A (en) Optometry method based on depth-image gesture recognition
CN104821010A (en) Binocular-vision-based real-time extraction method and system for three-dimensional hand information
CN105046206A (en) Pedestrian detection method and apparatus based on moving associated prior information in videos
AU2021255130B2 (en) Artificial intelligence and computer vision powered driving-performance assessment
CN105912126A (en) Method for adaptively adjusting gain, mapped to interface, of gesture movement
CN116630394A (en) Multi-mode target object attitude estimation method and system based on three-dimensional modeling constraint
CN111400423B (en) Smart city CIM three-dimensional vehicle pose modeling system based on multi-view geometry
CN108053425B (en) A kind of high speed correlation filtering method for tracking target based on multi-channel feature
CN103761011A (en) Method, system and computing device of virtual touch screen
CN116659518B (en) Autonomous navigation method, device, terminal and medium for intelligent wheelchair
KR101313879B1 (en) Detecting and Tracing System of Human Using Gradient Histogram and Method of The Same

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP03 Change of name, title or address

Address after: No. 233 Kezhu Road, High-tech Industrial Development Zone, Guangzhou, Guangdong Province, 510670

Patentee after: Vtron Group Co., Ltd.

Address before: No. 6, Color Road, High-tech Industrial Development Zone, Guangzhou, Guangdong, China, 510663

Patentee before: Vtron Technologies Ltd. (Guangdong Weichuangshixun Science and Technology Co., Ltd.)

CP03 Change of name, title or address