CN102081918B - Video image display control method and video image display device - Google Patents

Video image display control method and video image display device

Info

Publication number
CN102081918B
CN102081918B (application CN201010612804A)
Authority
CN
China
Prior art keywords
palm
image
hand shape
gesture
hand
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201010612804
Other languages
Chinese (zh)
Other versions
CN102081918A (en)
Inventor
方伟
赵勇
袁誉乐
罗卫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Rui Technology Co., Ltd.
Original Assignee
Peking University Shenzhen Graduate School
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University Shenzhen Graduate School filed Critical Peking University Shenzhen Graduate School
Priority to CN 201010612804 priority Critical patent/CN102081918B/en
Publication of CN102081918A publication Critical patent/CN102081918A/en
Application granted granted Critical
Publication of CN102081918B publication Critical patent/CN102081918B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video image display control method and a video image display device. The method comprises the following steps: capturing the scene in front of the display device in real time; extracting a human body region image from the captured real-time scene image; performing gesture detection on the human body region image and, according to the detection result, determining the control command in a gesture database that corresponds to the detected gesture; and finally outputting the control command, with which the video image display device controls the video image shown on the display device. The user thereby interacts actively with the video image and can select the information of interest, which improves the efficiency of interaction between the user and the advertising content while bringing the user a new experience.

Description

Video image display control method and video image display device
Technical field
The present invention relates to the fields of image processing and human-computer interaction, and in particular to a video image display control method and a video image display device.
Background technology
Competition among advertising media has been fierce in recent years, and the digital billboard has emerged at the forefront as a brand-new advertising medium. As a product of the digitization of advertising media, the digital billboard is a digital media system that publishes various kinds of advertising information through terminal display equipment. It can deliver advertising content dynamically, provide personalized and differentiated services on demand, and play targeted advertisements to specific audiences at specific places and times, and has therefore achieved good display results. Its market application potential in supermarkets, hotels, medical facilities, cinemas and other public places with large flows of people is very large, and its market prospects are broad.
Current digital billboards all play advertising pictures or video/animation clips automatically according to a predefined playback schedule. A passing pedestrian can only see the content the billboard happens to be showing and cannot choose to see the advertising content he or she is interested in. To see advertisements that are not currently displayed, the viewer has to stop and wait for a long time. This is a passive mode of reception in which the advertising content cannot be anticipated, so people usually cannot easily obtain the advertising information they actually want, and the effectiveness of the advertisement is greatly reduced.
Summary of the invention
The main technical problem to be solved by the present invention is to provide a video image display control method and a video image display device. The invention realizes active interaction between the user and the video image, allowing the user to conveniently select the information of interest and thereby improving the efficiency of information interaction.
To solve the above technical problem, the technical solution adopted by the present invention is as follows:
A video image display control method comprises the steps of:
A. capturing a real-time scene image in front of the display device;
B. performing human body detection on the real-time scene image and obtaining a human body region image;
C. detecting a gesture in the human body region image;
D. determining the control command corresponding to the gesture;
E. controlling, according to the control command, the display of the video image on the display device.
Step B comprises: comparing the currently captured real-time scene image frame with a reference image obtained from a background model, thereby detecting the human body region image.
Further, the step of obtaining the human body region image comprises:
performing a pixel-level subtraction between the currently captured real-time scene image frame and the reference image obtained from the background model to obtain a difference image;
binarizing the difference image to obtain a binarized difference image;
applying morphological processing to the binarized difference image;
performing connectivity processing on the parts of the binarized difference image that satisfy a predetermined connectivity rule to obtain connected regions;
judging whether each connected region is a noise region and, if so, deleting it;
taking the area composed of all remaining connected regions as the human body region image and outputting it.
Further, the above method also comprises the step of judging whether each pixel in the currently captured real-time scene image frame belongs to the detected human body region; if so, the background model remains unchanged, otherwise the background model is updated.
In one case, the gesture comprises the hand shape of a palm, and step C comprises:
performing palm target detection on the human body region image and obtaining a palm target region image;
extracting hand-shape features from the palm target region image;
performing hand-shape recognition with the extracted hand-shape features of the palm and a pre-established hand-shape classifier, and judging whether the hand shape of the palm is a valid hand shape;
in step D, when the hand shape of the palm is judged to be a valid hand shape, determining the control command corresponding to this valid hand shape according to a pre-established gesture database; or
the gesture comprises the hand shape of a palm and the motion trajectory of the palm, and step C comprises:
performing palm target detection on the human body region image and obtaining a palm target region image;
extracting hand-shape features from the palm target region image;
performing hand-shape recognition with the extracted hand-shape features and the pre-established hand-shape classifier, judging whether the hand shape of the palm is a valid hand shape and, when it is, marking this palm as the currently activated palm;
detecting the motion trajectory of the currently activated palm and determining its motion type;
in step D, determining the corresponding control command in the pre-established gesture database according to the valid hand shape and the motion type of the currently activated palm;
in step E, switching to the corresponding video image or operating on the currently displayed video image according to the control command.
Further, the step of performing palm target detection on the human body region image comprises:
performing skin color detection on the human body region image to obtain the image containing face, arm or palm regions;
obtaining the arm and/or palm region image according to a pre-established skin color detection model;
detecting the palm in the arm and/or palm region image.
Further, the step of detecting the palm in the arm and/or palm region image comprises:
judging whether the aspect ratio of the arm and/or palm region image is greater than 2; if so, the region is judged to be an arm-and-palm region image, otherwise it is a palm region image;
when the region is judged to be an arm-and-palm region image, performing edge detection on it to obtain edge information and the region contour;
fitting a minimum enclosing ellipse to the region contour and obtaining the parameters of the enclosing ellipse;
obtaining the orientation of the region contour from the parameters of the enclosing ellipse, and thereby the direction in which the arm and palm point;
rectifying the arm and palm region image according to the obtained direction so that the arm and palm point straight up;
performing palm localization on the rectified arm-and-palm region image to obtain the palm target region image.
Corresponding to the above method, the present invention also provides a video image display device, comprising:
a camera device for capturing a real-time scene image in front of the display device;
a human body detection device for performing human body detection on the real-time scene image and obtaining a human body region image;
a gesture detection device for detecting a gesture in the human body region image;
a control command determination device for determining the control command corresponding to the gesture;
an image display control device for controlling, according to the control command, the display of the video image on the display device.
Further, the human body detection device is used to compare the currently captured real-time scene image frame with the reference image obtained from the background model, thereby detecting the human body region image.
The above video image display device also comprises a background update device, which is used to judge whether each pixel in the currently captured real-time scene image frame belongs to the detected human body region; if so, the background model remains unchanged, otherwise the background model is updated.
In one case, the gesture comprises the hand shape of a palm, and the gesture detection device comprises:
a palm detection unit for performing palm target detection on the human body region image and obtaining a palm target region image;
a hand-shape feature extraction unit for extracting hand-shape features from the palm target region image;
a hand-shape recognition unit for performing hand-shape recognition with the extracted hand-shape features and a pre-established hand-shape classifier and judging whether the hand shape of the palm is a valid hand shape;
the control command determination device, when the hand shape of the palm is judged to be a valid hand shape, determines the control command corresponding to this valid hand shape according to a pre-established gesture database; or
the gesture comprises the hand shape of a palm and the motion trajectory of the palm, and the gesture detection device comprises:
a palm detection unit for performing palm target detection on the human body region image and obtaining a palm target region image;
a hand-shape feature extraction unit for extracting hand-shape features from the palm target region image;
a hand-shape recognition unit for performing hand-shape recognition with the extracted hand-shape features and the pre-established hand-shape classifier, judging whether the hand shape of the palm is a valid hand shape and, when it is, marking this palm as the currently activated palm;
a palm tracking unit for detecting the motion trajectory of the currently activated palm and determining its motion type;
the control command determination device determines the corresponding control command in the pre-established gesture database according to the valid hand shape and the motion type of the currently activated palm;
the image display control device switches to the corresponding video image or operates on the currently displayed video image according to the control command.
The beneficial effects of the invention are as follows:
With the video image display control method and video image display device of the present invention, the scene in front of the video image display device is captured, the human body region image is extracted from it, the user's gesture is then extracted from the human body region image, and the corresponding control command is determined from the gesture; the video image display device then controls the display of the corresponding video image according to this control command, thereby completing active interaction between the user and the video image. With the method and device of the present invention, the user can actively and selectively view the content he or she is interested in. The technical solution of the present invention realizes active interaction between the user and the device, improves the efficiency of interaction between the video image and the user, and thereby improves the publicity effect of the video image itself while bringing the user a completely new experience.
Description of drawings
Fig. 1 is a block diagram of an embodiment of the video image display device of the present invention;
Fig. 2 is a block diagram of another embodiment of the video image display device of the present invention;
Fig. 3a is a block diagram of an embodiment of the gesture detection device in Fig. 1;
Fig. 3b is a block diagram of another embodiment of the gesture detection device in Fig. 1;
Fig. 4 is a schematic diagram of an embodiment of the palm detection unit in Fig. 1;
Fig. 5 is a flow chart of an embodiment of the video image display control method of the present invention;
Fig. 6 is a flow chart of obtaining the human body region image in Fig. 5;
Fig. 7 is a flow chart of obtaining the difference image in Fig. 6;
Fig. 8 is a flow chart of the region connectivity analysis in Fig. 6;
Fig. 9 is a flow chart of updating the background model in Fig. 7;
Fig. 10 is a flow chart of gesture detection in Fig. 5;
Fig. 11 is a flow chart of obtaining the palm target region in Fig. 10;
Fig. 12 is a flow chart of palm localization and acquisition in Fig. 11;
Fig. 13 is a flow chart of determining the palm motion type in Fig. 11;
Figs. 14a, 14b, 14c, 14d, 14e and 14f are schematic diagrams of an embodiment of the palm localization and acquisition of Fig. 12;
Figs. 15a, 15b, 15c, 15d, 15e, 15f, 15g, 15h and 15i are schematic diagrams of an embodiment of the motion type classification of the activated palm in Fig. 13;
Fig. 16 is a schematic diagram of an embodiment of determining the control command in Fig. 6.
Embodiments
The present invention is described in further detail below through specific embodiments with reference to the accompanying drawings.
In recent years, computer vision technology has matured and been widely applied in many fields. Against this background, it has become possible to recognize the hand shapes and gestures of the human body by computer vision, so as to understand and interpret human actions and thereby complete interaction between humans and machines. The present invention provides a video image display control method and a video image display device based on this computer vision technology.
Referring to Fig. 1, an embodiment of the video image display device of the present invention comprises: a camera device 1, a human body detection device 2, a gesture detection device 3, a control command determination device 4 and an image display control device 5. The camera device 1 is connected to the human body detection device 2, the human body detection device 2 is connected to the gesture detection device 3, the gesture detection device 3 is connected to the control command determination device 4, and the control command determination device 4 is connected to the image display control device 5. The camera device 1 captures the real-time scene image in front of the display device and sends it to the human body detection device 2; the human body detection device 2 performs human body detection on the received real-time scene image, obtains the human body region image and sends it to the gesture detection device 3; the gesture detection device 3 performs gesture detection on the received human body region image and sends the detected gesture to the control command determination device 4; the control command determination device 4 determines the corresponding control command from the received gesture and sends it to the image display control device 5; the image display control device 5 controls the display of the video image on the display device according to this control command.
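The data flow of Fig. 1 can be pictured as a short capture-detect-decide-display pipeline. The Python sketch below is only an illustration of that wiring; all class and method names (VideoDisplayController, detect, execute and so on) are hypothetical and are not part of the patent.

```python
# Minimal pipeline sketch of the device of Fig. 1 (hypothetical names, not the patent's code).
import cv2

class VideoDisplayController:
    def __init__(self, human_detector, gesture_detector, command_table, player):
        self.human_detector = human_detector      # device 2: background-model based detection
        self.gesture_detector = gesture_detector  # device 3: palm detection + hand-shape recognition
        self.command_table = command_table        # device 4: gesture database lookup
        self.player = player                      # device 5: controls what is shown

    def process_frame(self, frame):
        body_mask = self.human_detector.detect(frame)           # human body region image
        gesture = self.gesture_detector.detect(frame, body_mask)
        if gesture is None:
            return
        command = self.command_table.get(gesture)               # e.g. "next_video", "zoom_in"
        if command is not None:
            self.player.execute(command)

def run(controller, camera_index=0):
    cap = cv2.VideoCapture(camera_index)          # camera device 1
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        controller.process_frame(frame)
```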
Referring to Fig. 2, in another embodiment of the present invention the video image display device further comprises a background update device 6 connected to the human body detection device 2, which is used to judge whether each pixel in the currently captured real-time scene image frame belongs to the detected human body region image; if so, the background model remains unchanged, otherwise the background model is updated.
Referring to Fig. 3a, in one embodiment of the present invention, when the gesture detected by the gesture detection device 3 comprises the hand shape of a palm, the gesture detection device 3 comprises a palm detection unit 31, a hand-shape feature extraction unit 32 and a hand-shape recognition unit 33. The palm detection unit 31 is connected to the hand-shape feature extraction unit 32; it performs palm target detection on the human body region image obtained by the human body detection device 2, obtains the palm target image and sends it to the hand-shape feature extraction unit 32. The hand-shape feature extraction unit 32 is connected to the hand-shape recognition unit 33; it extracts hand-shape features from the received palm target image and sends them to the hand-shape recognition unit 33. The hand-shape recognition unit 33 is connected to the control command determination device 4; it performs hand-shape recognition with the received hand-shape features of the palm and a pre-established hand-shape classifier and judges whether the hand shape of the palm is a valid hand shape. If it is valid, the control command determination device 4 determines the control command corresponding to this valid hand shape according to the pre-established gesture database. The image display control device 5 then switches to the corresponding video image or operates on the currently displayed video image according to this control command; the currently displayed video image may be one that has not been switched by the user, or one that has just been switched according to the user's gesture.
Referring to Fig. 3b, in another embodiment of the present invention, when the gesture detected by the gesture detection device 3 comprises the hand shape of a palm and the motion trajectory of the palm, the gesture detection device 3 comprises a palm detection unit 31, a hand-shape feature extraction unit 32, a hand-shape recognition unit 33 and a palm tracking unit 34 connected to the hand-shape recognition unit 33. When the hand-shape recognition unit 33 judges that the hand shape of the palm is a valid hand shape, it marks this palm as the activated palm and sends it to the palm tracking unit 34 and the control command determination device 4. The palm tracking unit 34 is connected to the control command determination device 4; it detects the motion trajectory of the received activated palm and determines the motion type of the currently activated palm. The control command determination device 4 determines the corresponding control command in the pre-established gesture database according to the motion type of the currently activated palm and the valid hand shape. The image display control device 5 then switches to the corresponding video image or operates on the currently displayed video image according to this control command.
Referring to Fig. 4, in one embodiment of the present invention, the palm detection unit 31 comprises a skin color detection module 311, a face detection module 312 and a palm target acquisition module 313. The skin color detection module 311 is connected to the face detection module 312; it detects, according to human skin color features, the face, palm and/or arm regions in the obtained human body region image and extracts them. The face detection module 312 is connected to the palm target acquisition module 313; it detects the face region among the extracted regions and sends the detection result to the palm target acquisition module 313, which deletes the face region according to the detection result and obtains the palm target region image.
Still referring to Fig. 4, when the gesture detected by the gesture detection device 3 comprises the hand shape of a palm and the motion trajectory of the palm, the palm target acquisition module 313 comprises a palm region identification sub-module 3131 and a palm acquisition sub-module 3132 connected to it. The palm region identification sub-module 3131 judges, for the palm and/or arm regions remaining after the face region has been deleted, whether a region contains only the palm; if so, it identifies the region as the palm target region image, otherwise it identifies it as a palm-and-arm region image and sends it to the palm acquisition sub-module 3132, which obtains the palm target region image from the palm-and-arm region image.
In another embodiment of the present invention, the palm detection unit 31 further comprises a palm target correction module 314 connected to the palm target acquisition module 313, which performs region connectivity analysis on the palm target region image obtained by the palm target acquisition module 313, thereby obtaining a complete palm target region image.
Based on the above video image display device, the present invention proposes a video image display control method, which is described in detail below with reference to the drawings and specific embodiments.
Referring to Fig. 5, a video image display control method comprises the steps of:
S1. capturing a real-time scene image in front of the display device;
S2. performing human body detection on the real-time scene image and obtaining a human body region image;
S3. detecting a gesture in the human body region image;
S4. determining the control command corresponding to the gesture;
S5. controlling, according to the control command, the display of the video image on the display device.
In an embodiment of the present invention, each captured frame is also buffered, so the present embodiment further comprises, after capturing the real-time scene image, step S6: buffering the captured real-time scene image in a frame data buffer.
In order to manage the image data well and guarantee smooth data acquisition and processing, the frame data buffer in the present embodiment adopts a double-buffer queue for the video stream, so that writing frame data into the buffer and reading data out of the buffer use separate buffers.
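As an illustration of this double-buffer arrangement, the following Python sketch (an assumed implementation; the patent does not specify one) lets a capture thread append frames to one buffer while the processing thread swaps it out and drains the other.

```python
import threading

class DoubleBufferQueue:
    """Two swappable buffers: the producer writes into one while the consumer reads the other."""
    def __init__(self):
        self._write_buf = []            # filled by the capture thread
        self._read_buf = []             # drained by the processing thread
        self._lock = threading.Lock()

    def put(self, frame):
        with self._lock:
            self._write_buf.append(frame)

    def swap_and_read(self):
        # Swap the buffers: the filled write buffer becomes the read buffer.
        with self._lock:
            self._write_buf, self._read_buf = [], self._write_buf
        return self._read_buf
```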
In an embodiment of the present invention, in order to obtain a more accurate image, the captured real-time scene image needs to be pre-processed, which comprises the following steps:
S7. converting the color space of the captured real-time scene image from RGB to HSV.
This conversion facilitates the human body detection in step S2. The skin color is quite concentrated in its distribution in color space, but it is strongly affected by illumination and ethnicity. To reduce the influence of illumination intensity on the skin color, in the present embodiment the real-time scene image is converted to a color space in which luminance and chrominance are separated, and the luminance component is then discarded.
The HSV space represents a color by its three elements hue (H), saturation (S) and value (V), and is a non-linear color representation system. The HSV representation is consistent with human color perception, and in the HSV space human perception of color is more uniform, so the HSV space fits the characteristics of human vision well. After RGB is converted to HSV, the information structure is more compact, the components are more independent, and little color information is lost. The HSV color space is therefore adopted in the present embodiment.
Of course, the color space model in the present embodiment may also be another color space, such as YCbCr.
The conversion from RGB to HSV is as follows, with R, G and B normalized to [0, 1]:
V = max(R, G, B)
S = (V - min(R, G, B)) / V, if V ≠ 0; otherwise S = 0
H = 60·(G - B) / (V - min(R, G, B)), if V = R;
H = 120 + 60·(B - R) / (V - min(R, G, B)), if V = G;
H = 240 + 60·(R - G) / (V - min(R, G, B)), if V = B;
with 360 added to H if it is negative.
S8. de-noising the image after the color space conversion; in the present embodiment median filtering is used to de-noise the image.
Since the real-time scene image captured in step S1 contains noise, this de-noising is needed to obtain a better image.
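As a concrete illustration of steps S7 and S8, the sketch below (using OpenCV, one possible implementation rather than the patent's own code) converts a captured BGR frame to HSV, discards the value channel and applies a median filter to the remaining channels.

```python
import cv2

def preprocess(frame_bgr, median_kernel=5):
    """Step S7/S8 sketch: RGB->HSV conversion plus median-filter de-noising."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)    # OpenCV frames are BGR by default
    h, s, v = cv2.split(hsv)
    h = cv2.medianBlur(h, median_kernel)                # de-noise the chrominance channels
    s = cv2.medianBlur(s, median_kernel)
    return h, s                                         # luminance (v) is discarded
```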
Referring to Fig. 6, in an embodiment of the present invention the human body detection of step S2 and the acquisition of the human body region image comprise the steps of:
S21. performing a pixel-level subtraction between the currently captured real-time scene image frame and the reference image obtained from the background model, giving a difference image;
S22. binarizing the difference image, giving a binarized difference image;
S23. applying morphological processing to the binarized difference image.
In some cases, for example when the shooting direction of the camera is roughly the same as the direction of human motion, the preliminary binarized difference image contains black holes and noise points, so the preliminary binarized difference image needs morphological processing.
In an embodiment of the present invention, the morphological processing of step S23 comprises: using an erosion operation to remove isolated noise points in the binarized difference image, and using a dilation operation to fill hollow parts in the binarized difference image. The structuring element for both the erosion and the dilation is a 3x3 cross-shaped element.
S24. performing connectivity processing on the parts of the binarized difference image that satisfy a predetermined connectivity rule, thereby obtaining connected regions.
Since the binarized image still contains scattered regions or pixels, region connectivity analysis is needed to connect the pixels that satisfy the predetermined rule. In the present embodiment the predetermined connectivity rule in step S24 is 8-connectivity; of course other connectivity rules, such as 4-connectivity, may also be used.
S25. judging whether the total number of pixels in the area of each connected region is smaller than a set threshold; if so, the connected region is regarded as a noise region and deleted. The area composed of all remaining connected regions is the human body region image, which is output. The threshold may be set empirically.
When the gesture is detected directly, noise often exists in the extracted palm region image, and this noise is very close to the palm and affects the judgment of the gesture. To obtain a more accurate gesture, the present invention first performs human body detection and then detects the gesture, so that the noise is removed during the human body detection and the detected gesture is more accurate.
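Steps S21 to S25 can be sketched with standard image-processing primitives. The OpenCV code below is an assumed illustration; the threshold k·σ, the minimum region area and the use of connectedComponentsWithStats are illustrative choices, not requirements of the patent.

```python
import cv2
import numpy as np

def extract_human_region(frame_gray, background_ref, k=3.0, sigma=10.0, min_area=500):
    """Sketch of steps S21-S25: subtraction, binarization, morphology, small-region removal.
    frame_gray and background_ref are uint8 images of the same size."""
    diff = cv2.absdiff(frame_gray, background_ref)                    # S21: pixel-level subtraction
    _, mask = cv2.threshold(diff, k * sigma, 255, cv2.THRESH_BINARY)  # S22: threshold T = k*sigma
    cross = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
    mask = cv2.erode(mask, cross)                                     # S23: remove isolated noise points
    mask = cv2.dilate(mask, cross)                                    #      fill hollow parts
    # S24/S25: keep only connected regions larger than the area threshold
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    human = np.zeros_like(mask)
    for i in range(1, num):                                           # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            human[labels == i] = 255
    return human
```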
Since the human body may keep moving, the background of each captured scene image also keeps changing. To obtain a more accurate background image, the background model needs to be updated.
Therefore, in another embodiment of the invention, step S2 further comprises:
S26. judging whether each pixel in the currently captured real-time scene image belongs to the detected human body region; if so, the background model remains unchanged, otherwise the background model is updated.
Referring to Fig. 7, in one embodiment of the present invention step S21 comprises the steps of:
S211. obtaining the pre-processed image;
S212. judging whether the current background model has been established; if so, executing step S213, otherwise executing step S214;
S213. subtracting, pixel by pixel, the pixel values of the background reference image b_k(x, y) obtained from the background model from those of the currently obtained pre-processed frame f_k(x, y), giving the difference image D_k(x, y) = | f_k(x, y) - b_k(x, y) |;
S214. establishing for each pixel a model represented by a single Gaussian distribution, B = [μ, δ²], where μ is the mean and δ² the variance;
S215. outputting the difference image.
In an embodiment of the present invention, the binarization of the difference image in step S22 is performed as follows:
S221. an image segmentation threshold T = kδ is set in advance, and the pixel values of the difference image are compared with this threshold; the threshold may be set empirically or computed with an existing adaptive algorithm. In the present embodiment T is set to 3 times the standard deviation of the pixel value of the current pixel.
S222. the pixel value of each pixel in the difference image is compared with the segmentation threshold T, and the difference image is segmented according to the comparison result, giving the binarized difference image
M_k(x, y) = 1 (foreground), if D_k(x, y) > T; M_k(x, y) = 0 (background), otherwise.
In the present embodiment, if the pixel value of the current pixel is greater than the threshold T, its value is set to 1; if it is less than or equal to T, its value is set to 0. The difference image is thus binarized, giving the binarized difference image.
Of course, setting the pixels greater than the threshold to 0 and the pixels less than or equal to the threshold to 1 would also work.
Referring to Fig. 8, in one embodiment of the present invention the connected-region analysis of the binarized difference image in step S24 comprises the steps of:
S241. scanning the current binarized difference image from top to bottom and from left to right;
S242. judging whether the current pixel is a foreground point; if so, labeling it with a new ID, otherwise continuing with step S241.
Here a foreground point is a pixel whose value has changed because of the motion of the human body in the current real scene.
S243. judging whether each pixel in the 8-connected neighborhood of this foreground point is a foreground point; if so, labeling it with the same ID and pushing it onto a stack;
S244. after the above 8 pixels have been examined, checking whether the stack is empty; if it is not empty, popping the top element; if it is empty, finishing the scan and executing step S246;
S245. performing the same 8-connectivity judgment on the popped pixel and repeating the above process until the stack is empty, at which point a foreground region with a single ID has been obtained;
S246. when the whole image has been scanned, all connected regions have been obtained, each with a unique ID.
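A direct transcription of steps S241-S246 is a stack-based region-growing pass over the binarized image. The sketch below is an illustrative Python implementation with 8-connectivity, not the patent's own code:

```python
import numpy as np

def label_connected_regions(binary):
    """Sketch of steps S241-S246: stack-based labeling of 8-connected foreground regions.
    `binary` is a 2-D array with foreground pixels set to 1."""
    labels = np.zeros(binary.shape, dtype=np.int32)
    next_id = 0
    rows, cols = binary.shape
    for r in range(rows):                                    # S241: top-to-bottom, left-to-right scan
        for c in range(cols):
            if binary[r, c] == 1 and labels[r, c] == 0:      # S242: unlabeled foreground point
                next_id += 1
                labels[r, c] = next_id
                stack = [(r, c)]
                while stack:                                 # S243-S245: grow the region via the stack
                    y, x = stack.pop()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < rows and 0 <= nx < cols \
                               and binary[ny, nx] == 1 and labels[ny, nx] == 0:
                                labels[ny, nx] = next_id
                                stack.append((ny, nx))
    return labels, next_id                                   # S246: each region carries a unique ID
```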
Referring to Fig. 9, the updating of the background model in step S26 of the present embodiment comprises the steps of:
S261. obtaining the foreground mask, i.e. the pixels whose value is 1;
S262. judging whether the pixel belongs to the human body region obtained in step S26; if so, executing step S263, otherwise executing step S264;
S263. keeping the parameters of the statistical model of the background pixel unchanged. Let the current frame be I_i, α the learning rate, μ the mean and δ the standard deviation; the background update formulas are then
μ_{i+1} = μ_i,
δ²_{i+1} = δ²_i;
S264. updating the parameters of the statistical model of the background pixel with the background update formulas
μ_{i+1} = (1 - α)·μ_i + α·I_i,
δ²_{i+1} = (1 - α)·δ²_i + α·(I_i - μ_i)²,
where the learning rate α may be set to 0.002 in the present embodiment, or of course to other values.
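Steps S261-S264 amount to a selective running update of the per-pixel mean and variance. The following sketch assumes the mean, variance, frame and human-region mask are NumPy arrays of the same shape; the function and argument names are illustrative:

```python
import numpy as np

def update_background(mean, var, frame, human_mask, alpha=0.002):
    """Sketch of steps S261-S264: selective update of the per-pixel single-Gaussian model.
    Pixels inside the detected human region keep their parameters; the rest are updated
    with learning rate alpha (0.002 in the described embodiment)."""
    frame = frame.astype(np.float32)
    bg = (human_mask == 0)                                   # only background pixels are updated
    new_mean = mean.copy()
    new_var = var.copy()
    new_mean[bg] = (1.0 - alpha) * mean[bg] + alpha * frame[bg]
    new_var[bg] = (1.0 - alpha) * var[bg] + alpha * (frame[bg] - mean[bg]) ** 2
    return new_mean, new_var
```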
Referring to Fig. 10, in an embodiment of the present invention, when the gesture in step S3 comprises the hand shape of a palm, step S3 comprises:
S31. performing palm target detection on the obtained human body region and obtaining a palm target region image;
S32. extracting hand-shape features from the palm target region image;
S33. performing hand-shape recognition with the extracted hand-shape features and the pre-established hand-shape classifier, judging whether the hand shape of the palm is a valid hand shape, and then executing step S4.
Referring to Fig. 11, in one embodiment of the present invention the palm target detection and the acquisition of the palm target region image in step S31 comprise the steps of:
S311. performing skin color detection on the obtained human body region image and obtaining the image containing the face, palm or arm regions.
Since the hue of human skin is distributed within a certain range, the face, arm and palm parts can be extracted from the human body region by their skin color features.
The skin color is quite concentrated in its distribution in color space, but it is strongly affected by illumination and ethnicity. To reduce the influence of illumination intensity, the present embodiment has already converted the scene image to HSV in step S7, separating luminance from chrominance. At the same time, to avoid the influence of brightness changes within the same shot and of other causes of brightness variation, the luminance component is discarded during the skin color detection in step S311 and only the H component of the image is used as the detection basis.
Skin pixels are then segmented according to the clustering of the skin color on the H component: a threshold in the HSV space is determined by statistical analysis, and the skin color regions are segmented with this threshold, so that the face, palm and/or arm regions are separated out.
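As an illustration of this H-component segmentation, the sketch below thresholds the hue channel with an assumed skin interval; the actual interval used in the patent is obtained by statistical analysis and is not specified numerically.

```python
import cv2

def skin_mask_from_hue(h_channel, h_low=0, h_high=20):
    """Sketch of step S311: segment skin pixels on the H component only.
    The hue interval [h_low, h_high] is an illustrative, empirically chosen range
    (OpenCV hue runs from 0 to 179)."""
    mask = cv2.inRange(h_channel, h_low, h_high)              # 255 where the hue falls in the skin range
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # small clean-up of the skin mask
    return mask
```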
S312. choosing one region from the above region image.
S313. performing face detection on this region with the pre-established face model; if a face is detected, the region is discarded and step S314 is executed, otherwise the palm and/or arm region image is output and step S315 is executed.
S314. judging whether there are still regions to be examined; if so, executing step S313, otherwise ending the operation.
S315. if the aspect ratio of the region image is judged to be not greater than 2, the region image is judged to be a palm target region image and step S317 is executed; otherwise the region is judged to be a palm-and-arm region image and step S316 is executed.
S316. localizing the palm in the palm-and-arm region with a palm localization algorithm and obtaining the palm region.
In order to obtain a complete palm region image, one embodiment of the present invention adds the following to step S315:
when the region is judged to be a palm region image, step S318 is executed first: region connectivity analysis is performed on the palm region image to obtain a complete palm region image, and then step S317 is executed;
when the region is judged to be a palm-and-arm region image, step S318 is executed before step S316, i.e. region connectivity analysis is performed on the palm-and-arm region image to obtain a complete palm-and-arm region image.
In the present embodiment this connectivity analysis uses the 8-connectivity rule: it is judged whether the H component values of the pixel at the seed coordinates in the original frame image and of its 8 neighboring pixels are below a set threshold; if so, they are regarded as belonging to the same class of pixels and are added to the connected region, which yields the complete palm and/or arm region image.
In this example, face detection is used to delete the face region. Two kinds of face detection methods can be used:
The first is knowledge-based face detection: the positions of different facial features are detected and the face is then located with a set of knowledge rules. Because the distribution of the local features of a face always follows certain rules, for example the two eyes are always distributed symmetrically in the upper half of the face, a set of rules describing the distribution of the local facial features can be used for face detection, with either a bottom-up or a top-down detection strategy.
The second is appearance-based face detection: because faces share a unified structural pattern, a classifier can be realized with different strategies, for example with a neural network or with traditional statistical methods. First, on the basis of a large set of training samples, a classifier that can correctly distinguish face samples from non-face samples is learned; the image to be examined is then scanned exhaustively, and the classifier decides whether each scanned image window contains a face; if it does, the position of the face is reported.
In an embodiment of the present invention, the appearance-based method is adopted for face detection, comprising: S313a. collecting a large number of face image samples offline; S313b. extracting multi-dimensional feature vectors of the faces and reducing their dimensionality with PCA (Principal Component Analysis); S313c. training a neural network with the extracted feature vectors to obtain a face classifier; S313d. performing face detection on the human body region image with the face classifier using the above feature vectors; S313e. if a face is detected, deleting the face region, thereby obtaining the palm and/or arm region image.
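The appearance-based classifier of steps S313a-S313c can be sketched with off-the-shelf tools. The code below uses scikit-learn purely as an illustrative assumption (the patent does not prescribe a library): it reduces flattened face/non-face patches with PCA and trains a small neural network on them.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def train_face_classifier(patches, labels, n_components=50):
    """Sketch of S313a-S313c: PCA dimensionality reduction + neural-network face classifier.
    `patches` is an (N, H*W) array of flattened gray-scale windows; `labels` is 1 for face, 0 otherwise."""
    pca = PCA(n_components=n_components).fit(patches)
    features = pca.transform(patches)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(features, labels)
    return pca, clf

def is_face(window, pca, clf):
    """Sketch of S313d: classify one scanned window."""
    feat = pca.transform(window.reshape(1, -1))
    return bool(clf.predict(feat)[0])
```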
Referring to Fig. 12, in one embodiment of the present invention the palm localization and acquisition in step S316 comprise the steps of:
S316a. performing edge detection on the palm-and-arm region image with the Canny operator to obtain edge information and the region contour, as shown in Fig. 14a;
S316b. fitting a minimum enclosing ellipse to the region contour and obtaining the parameters of the enclosing ellipse, including the major axis, the minor axis and the angle with the horizontal axis, as shown in Fig. 14b;
S316c. obtaining the orientation of the region contour from the angle between the major axis of the enclosing ellipse and the horizontal axis, and thereby the direction in which the arm and palm point, as shown in Fig. 14c;
S316d. rectifying the oriented region by a geometric coordinate transformation of the image so that the arm and palm point straight up, as shown in Fig. 14d;
S316e. performing palm localization on the rectified arm and palm region and obtaining the palm target region image.
As shown in Figs. 14e and 14f, the palm localization algorithm used in the present embodiment is as follows: the edge pixels of the palm-and-arm region are projected in the vertical direction to find the end at which the palm lies; all pixels of the palm-and-arm region are then projected in the vertical direction, and the peak point on the projection axis is sought starting from the palm end; the valley point that appears after this peak point is taken as the split point between the arm and the palm; the palm-and-arm region is then cut in the vertical direction at this split point, the arm is removed and the palm part is kept, giving the palm target region image.
S317. outputting the palm target region image and executing step S32, i.e. extracting the hand-shape features from the palm target region image.
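Steps S316a-S316e can be sketched as follows with OpenCV. The Canny thresholds, the rotation handling and the projection-based split are illustrative simplifications of the described procedure, not the patent's exact algorithm; the contour is assumed to contain at least five points so that an ellipse can be fitted.

```python
import cv2
import numpy as np

def locate_palm(region_mask):
    """Sketch of steps S316a-S316e: orient the arm+palm upright, then split off the palm
    with a vertical projection. `region_mask` is a binary (0/255) uint8 arm+palm mask."""
    edges = cv2.Canny(region_mask, 50, 150)                        # S316a: Canny edges
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.fitEllipse(contour)              # S316b: enclosing ellipse
    # S316c/S316d: rotate by the ellipse angle so the major axis (arm direction) is roughly
    # vertical (approximate; the exact correction depends on the ellipse-angle convention).
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    upright = cv2.warpAffine(region_mask, rot, region_mask.shape[::-1])
    # S316e: vertical projection; the palm end is assumed to be at the top, and the first
    # valley after the projection peak separates the palm from the arm.
    rows = (upright > 0).sum(axis=1)
    peak = int(np.argmax(rows))
    split = peak + int(np.argmin(rows[peak:])) if peak < len(rows) - 1 else len(rows)
    return upright[:split, :]                                      # palm target region image
```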
Referring again to Fig. 10, in an embodiment of the present invention, if the gesture in step S3 comprises the hand shape of a palm, then after the hand-shape features have been extracted, step S33 comprises: performing hand-shape recognition with the extracted hand-shape features and the pre-established hand-shape classifier, and judging whether the hand shape of the palm is valid; if it is valid, step S4 is executed, i.e. the control command corresponding to this valid hand shape is determined according to the pre-established gesture database; otherwise the hand shape is discarded.
Referring to Fig. 13, in another embodiment of the present invention, if the gesture in step S3 comprises the hand shape of a palm and the motion trajectory of the palm, then after the hand-shape features have been extracted, step S33 further comprises: marking this palm as the activated palm and tracking the motion trajectory of the currently activated palm to determine its motion type.
When the hand shape of the current palm is judged to be valid, step S4 is executed: the corresponding control command is determined in the pre-established gesture database according to the motion type of the currently activated palm.
Finally step S5 is executed: the corresponding video image is switched to, or the current video image is operated on, according to the determined control command.
In one embodiment of the present invention, gestures are divided into static and moving gestures. For a static gesture, the corresponding control command is obtained directly from the valid hand shape; for a moving gesture, its motion type must first be determined, and the corresponding control command is then obtained from the motion type of the palm and/or the valid hand shape. The motion types include moving up, down, left and right, among others.
Valid hand shapes: N1, left five-finger palm and right five-finger palm, as shown in Fig. 15c; N2, left five-finger palm and right fist, as shown in Fig. 15d; N3, left five-finger palm and right one-finger palm, as shown in Fig. 15e; N4, left one-finger palm and right five-finger palm, as shown in Fig. 15f.
Motion types: moving left comprises M1, a single five-finger palm moving to the left, as shown in Fig. 15b; moving right comprises M2, a single five-finger palm moving to the right, as shown in Fig. 15a; moving apart and moving together comprise M3, the left five-finger palm moving left while the right five-finger palm moves right, as shown in Fig. 15g, and M4, the left five-finger palm moving right while the right five-finger palm moves left, as shown in Fig. 15h; combined stillness and motion: NAM, the left five-finger palm remains still while a right one-finger palm moves, as shown in Fig. 15i.
Of course, other motion types are also possible in the present embodiment.
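One simple way to realize the motion-type decision of the palm tracking unit, assuming the tracker supplies per-frame palm centroids (an assumption made here for illustration; the patent does not fix a particular tracking algorithm), is to threshold the horizontal displacement of each tracked palm:

```python
def classify_motion(track_left, track_right, min_shift=40):
    """Sketch: decide M1-M4/NAM from the horizontal displacement (in pixels) of the tracked
    palm centroids. Each track is a list of (x, y) points, or None when that palm is absent;
    min_shift is an illustrative threshold."""
    def dx(track):
        return track[-1][0] - track[0][0] if track and len(track) > 1 else 0

    dl, dr = dx(track_left), dx(track_right)
    if bool(track_left) != bool(track_right):                  # only one palm tracked
        d = dl if track_left else dr
        if d <= -min_shift: return "M1"                        # single palm moving left
        if d >= min_shift:  return "M2"                        # single palm moving right
    elif track_left and track_right:
        if dl <= -min_shift and dr >= min_shift: return "M3"   # palms moving apart
        if dl >= min_shift and dr <= -min_shift: return "M4"   # palms moving together
        if abs(dl) < min_shift and abs(dr) >= min_shift: return "NAM"  # left still, right moves
    return None
```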
As shown in Fig. 13, in one embodiment of the present invention the establishment and training of the hand-shape classifier comprise: collecting a large set of hand-shape image samples offline; extracting the hand-shape features from them; and training a neural network with the obtained hand-shape features to obtain the hand-shape classifier.
In the present embodiment, each of the above sample sets is an image template representing a different hand shape. The hand-shape features comprise: hand-shape contour, hand-shape curvature, hand-shape perimeter, hand-shape area, hand-shape convexity, vertical projection of the hand-shape edge and horizontal projection of the hand-shape edge. Of course, other features may also be used. The neural network in the present embodiment is a three-layer neural network model, although other neural network models may also be used.
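The listed hand-shape features can be approximated with standard contour measurements. The sketch below computes illustrative versions of the area, perimeter, convexity, a curvature-like measure and down-sampled edge projections from a binary palm mask; the exact feature definitions are not fixed by the patent.

```python
import cv2
import numpy as np

def hand_shape_features(palm_mask):
    """Sketch: contour-based hand-shape features from a binary (0/255) uint8 palm mask."""
    contours, _ = cv2.findContours(palm_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(contour)                          # hand-shape area
    perimeter = cv2.arcLength(contour, closed=True)          # hand-shape perimeter
    hull = cv2.convexHull(contour)
    solidity = area / max(cv2.contourArea(hull), 1e-6)       # convexity (solidity)
    curvature = perimeter ** 2 / max(area, 1e-6)             # curvature proxy (circularity-like)
    v_proj = (palm_mask > 0).sum(axis=0).astype(np.float32)  # vertical projection
    h_proj = (palm_mask > 0).sum(axis=1).astype(np.float32)  # horizontal projection
    return np.concatenate(([area, perimeter, solidity, curvature],
                           cv2.resize(v_proj.reshape(1, -1), (16, 1)).ravel(),
                           cv2.resize(h_proj.reshape(1, -1), (16, 1)).ravel()))
```

The resulting fixed-length vectors could then be used to train the three-layer neural network mentioned above, for example in the same way as the face-classifier sketch given earlier.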
Referring to Fig. 16, in one embodiment of the present invention the determination of the control command in step S4 comprises:
S41. obtaining the motion type of the activated palm marked in step S3;
S42. looking up the corresponding gesture in the pre-established hand-shape and gesture database according to the motion type of the activated palm; if the corresponding gesture is found in the database, the command corresponding to this gesture is obtained, otherwise no action is taken. The command comprises the operation to be performed by the gesture and the object of the operation;
S43. judging whether the object of the operation is a video/animation file or a picture file; if it is a video/animation file, step S44 is executed; if it is a picture file, step S45 is executed.
S44. the gesture is understood and interpreted, and the corresponding control command is output, for example:
if the gesture of the currently activated palm is M1, it corresponds in the gesture database to switching to and playing the previous video/animation file, and the corresponding control command is output;
if the gesture of the currently activated palm is M2, it is interpreted as playing the next video/animation file, and the corresponding control command is output;
if the gesture of the currently activated palm is N1, it is interpreted as playing the current video/animation file, and the corresponding control command is output;
if the gesture of the currently activated palm is N2, it is interpreted as pausing the current video/animation file, and the corresponding control command is output;
if the gesture of the currently activated palm is N3, it is interpreted as fast-forwarding the current video/animation file, and the corresponding control command is output;
if the gesture of the currently activated palm is N4, it is interpreted as rewinding the current video image, and the corresponding control command is output.
S45. the gesture is understood and interpreted, and the corresponding control command is output, for example:
if the gesture of the currently activated palm is M1, it is interpreted as displaying the previous picture, and the corresponding control signal is output;
if the gesture of the currently activated palm is M2, it is interpreted as displaying the next picture, and the corresponding control command is output;
if the gesture of the currently activated palm is M3, it is interpreted as enlarging the picture, and the corresponding control command is output;
if the gesture of the currently activated palm is M4, it is interpreted as shrinking the picture, and the corresponding control command is output;
if the gesture of the currently activated palm is NAM, it is interpreted as moving the picture, and the corresponding control command is output.
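The gesture database of steps S42-S45 can be pictured as a simple lookup table keyed by the display target and the gesture code; the command names below are hypothetical placeholders for whatever the image display control device actually executes.

```python
# Sketch of the gesture database as a plain lookup table (hypothetical command names).
GESTURE_DB = {
    ("video", "M1"): "play_previous_file",
    ("video", "M2"): "play_next_file",
    ("video", "N1"): "play_current_file",
    ("video", "N2"): "pause_current_file",
    ("video", "N3"): "fast_forward",
    ("video", "N4"): "rewind",
    ("picture", "M1"): "show_previous_picture",
    ("picture", "M2"): "show_next_picture",
    ("picture", "M3"): "zoom_in",
    ("picture", "M4"): "zoom_out",
    ("picture", "NAM"): "move_picture",
}

def determine_command(target_type, gesture_code):
    """S42/S43 sketch: look the gesture up for the current display target; None means 'do nothing'."""
    return GESTURE_DB.get((target_type, gesture_code))
```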
With the video image display control method of the present invention, the user only needs to make the corresponding gesture, static or moving, to display the desired video image or to operate on the currently displayed video image, so that active interaction between the user and the video image display device is achieved and the efficiency of interaction between the video image and the user is improved.
The above video image display control method can be used for displaying video advertisement pictures or animations, and also for displaying other pictures or animations.
The above is a further detailed description of the present invention in combination with specific embodiments, but the concrete implementation of the present invention is not limited to these descriptions. For ordinary technicians in the technical field of the present invention, a number of simple deductions or substitutions can be made without departing from the concept of the invention, and all of these shall be regarded as falling within the scope of protection of the present invention.

Claims (7)

1. A video image display control method, characterized by comprising the steps of:
A. capturing a real-time scene image in front of the display device;
B. comparing the currently captured real-time scene image frame with a reference image obtained from a background model, thereby detecting the human body region image;
C. detecting a gesture in the human body region image, the gesture comprising the hand shape of a palm and the motion trajectory of the palm; performing palm target detection on the human body region image and obtaining a palm target region image; extracting hand-shape features from the palm target region image; performing hand-shape recognition with the extracted hand-shape features of the palm and a pre-established hand-shape classifier, judging whether the hand shape of the palm is a valid hand shape and, when it is, marking this palm as the currently activated palm; detecting the motion trajectory of the currently activated palm and determining its motion type;
D. determining the corresponding control command in a pre-established gesture database according to the valid hand shape and the motion type of the currently activated palm;
E. switching to the corresponding video image or operating on the currently displayed video image according to the control command.
2. The method of claim 1, characterized in that the step of obtaining the human body region image comprises:
performing a pixel-level subtraction between the currently captured real-time scene image frame and the reference image obtained from the background model to obtain a difference image;
binarizing the difference image to obtain a binarized difference image;
applying morphological processing to the binarized difference image;
performing connectivity processing on the parts of the binarized difference image that satisfy a predetermined connectivity rule to obtain connected regions;
judging whether each connected region is a noise region and, if so, deleting it;
taking the area composed of all remaining connected regions as the human body region image and outputting it.
3. The method of claim 2, characterized by further comprising the step of judging whether each pixel in the currently captured real-time scene image frame belongs to the detected human body region; if so, the background model remains unchanged, otherwise the background model is updated.
4. The method of claim 1, characterized in that the step of performing palm target detection on the human body region image comprises:
performing skin color detection on the human body region image to obtain the image containing face, arm or palm regions;
obtaining the arm-and-palm or palm region image according to a pre-established skin color detection model;
detecting the palm in the arm-and-palm or palm region image.
5. The method of claim 4, characterized in that the step of detecting the palm in the arm-and-palm or palm region image comprises:
judging whether the aspect ratio of the arm-and-palm or palm region image is greater than 2; if so, the region is judged to be an arm-and-palm region image, otherwise it is a palm region image;
when the region is judged to be an arm-and-palm region image, performing edge detection on it to obtain edge information and the region contour;
fitting a minimum enclosing ellipse to the region contour and obtaining the parameters of the enclosing ellipse;
obtaining the orientation of the region contour from the parameters of the enclosing ellipse, and thereby the direction in which the arm and palm point;
rectifying the arm and palm region image according to the obtained direction so that the arm and palm point straight up;
performing palm localization on the rectified arm-and-palm region image to obtain the palm target region image.
6. A video image display device, characterized by comprising:
a camera device for capturing the real-time scene image in front of the display device;
a human body detection device for comparing the currently acquired real-time scene image frame with a reference image obtained from a background model, thereby detecting the human region image;
a gesture detection device for detecting a gesture in the human region image, the gesture comprising the hand shape of a palm and the motion trajectory of the palm, the gesture detection device comprising:
a palm detection unit for performing palm target detection on the human region image and obtaining a palm target area image;
a hand-shape feature extraction unit for extracting hand-shape features from the palm target area image;
a hand-shape recognition unit for performing hand-shape recognition according to the extracted hand-shape features of the palm and a hand-shape classifier established in advance, judging whether the hand shape of the palm is a valid hand shape, and designating the palm as the currently activated palm when its hand shape is judged to be valid;
a palm tracking unit for detecting the motion trajectory of the currently activated palm and determining the motion type of the currently activated palm;
a control command determination device for determining, in a gesture database established in advance, the control command corresponding to the valid hand shape and the motion type of the currently activated palm; and
an image display control device for switching to the corresponding video image, or operating on the currently displayed video image, according to the control command (a structural sketch of these components follows this claim).
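A structural sketch of the apparatus of claim 6, with each claimed device mapped onto a small Python class. All class, method and command names below are invented for illustration, and the palm detector, feature extractor, hand-shape classifier and tracker are left as stubs, since the claim only names these units without specifying their implementations.

```python
from dataclasses import dataclass, field

@dataclass
class GestureDatabase:
    """Maps (hand shape, palm motion type) pairs to control commands (example entries)."""
    commands: dict = field(default_factory=lambda: {
        ("open_palm", "swipe_left"): "next_page",
        ("open_palm", "swipe_right"): "previous_page",
        ("fist", "hold"): "pause_video",
    })

    def lookup(self, hand_shape, motion_type):
        return self.commands.get((hand_shape, motion_type))

class GestureDetector:
    """Bundles the palm detection, hand-shape feature extraction, hand-shape
    recognition and palm tracking units enumerated in claim 6."""

    def detect(self, human_region_image):
        palm_roi = self.detect_palm(human_region_image)    # palm detection unit
        if palm_roi is None:
            return None
        features = self.extract_hand_features(palm_roi)     # feature extraction unit
        hand_shape = self.classify_hand_shape(features)     # hand-shape recognition unit
        if hand_shape is None:                               # not a valid hand shape
            return None
        motion_type = self.track_palm(palm_roi)              # palm tracking unit
        return hand_shape, motion_type

    # Placeholder units standing in for the patented components.
    def detect_palm(self, image): return None
    def extract_hand_features(self, roi): return None
    def classify_hand_shape(self, features): return None
    def track_palm(self, roi): return None

def control_step(frame, detector, database, display):
    """One pass of the claimed pipeline: detect a gesture, look up its command,
    and hand the command to the image display control device."""
    result = detector.detect(frame)
    if result is not None:
        command = database.lookup(*result)
        if command is not None:
            display.execute(command)
```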
7. The video image display device as claimed in claim 6, characterized by further comprising a background update device for judging whether each pixel in the currently acquired real-time scene image frame belongs to the detected human region; if so, the background model remains unchanged, otherwise the background model is updated.
CN 201010612804 2010-09-28 2010-12-29 Video image display control method and video image display device Expired - Fee Related CN102081918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010612804 CN102081918B (en) 2010-09-28 2010-12-29 Video image display control method and video image display device

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201010295067 2010-09-28
CN201010295067.4 2010-09-28
CN 201010612804 CN102081918B (en) 2010-09-28 2010-12-29 Video image display control method and video image display device

Publications (2)

Publication Number Publication Date
CN102081918A CN102081918A (en) 2011-06-01
CN102081918B true CN102081918B (en) 2013-02-20

Family

ID=44087844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010612804 Expired - Fee Related CN102081918B (en) 2010-09-28 2010-12-29 Video image display control method and video image display device

Country Status (1)

Country Link
CN (1) CN102081918B (en)

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5845002B2 (en) 2011-06-07 2016-01-20 ソニー株式会社 Image processing apparatus and method, and program
CN102436301B (en) * 2011-08-20 2015-04-15 Tcl集团股份有限公司 Human-machine interaction method and system based on reference region and time domain information
CN103034322A (en) * 2011-09-30 2013-04-10 德信互动科技(北京)有限公司 Man-machine interaction system and man-machine interaction method
CN102426480A (en) * 2011-11-03 2012-04-25 康佳集团股份有限公司 Man-machine interactive system and real-time gesture tracking processing method for same
CN102509079A (en) * 2011-11-04 2012-06-20 康佳集团股份有限公司 Real-time gesture tracking method and tracking system
CN103092332A (en) * 2011-11-08 2013-05-08 苏州中茵泰格科技有限公司 Digital image interactive method and system of television
CN102509088B (en) * 2011-11-28 2014-01-08 Tcl集团股份有限公司 Hand motion detecting method, hand motion detecting device and human-computer interaction system
CN103576990A (en) * 2012-07-20 2014-02-12 中国航天科工集团第三研究院第八三五八研究所 Optical touch method based on single Gaussian model
CN102831407B (en) * 2012-08-22 2014-10-29 中科宇博(北京)文化有限公司 Method for realizing vision identification system of biomimetic mechanical dinosaur
CN102930270A (en) * 2012-09-19 2013-02-13 东莞中山大学研究院 Method and system for identifying hands based on complexion detection and background elimination
KR101490908B1 (en) * 2012-12-05 2015-02-06 현대자동차 주식회사 System and method for providing a user interface using hand shape trace recognition in a vehicle
US8761448B1 (en) 2012-12-13 2014-06-24 Intel Corporation Gesture pre-processing of video stream using a markered region
CN103049084B (en) * 2012-12-18 2016-01-27 深圳国微技术有限公司 A kind of electronic equipment and method thereof that can adjust display direction according to face direction
CN103176667A (en) * 2013-02-27 2013-06-26 广东工业大学 Projection screen touch terminal device based on Android system
US9292103B2 (en) * 2013-03-13 2016-03-22 Intel Corporation Gesture pre-processing of video stream using skintone detection
CN103246347A (en) * 2013-04-02 2013-08-14 百度在线网络技术(北京)有限公司 Control method, device and terminal
CN103428551A (en) * 2013-08-24 2013-12-04 渭南高新区金石为开咨询有限公司 Gesture remote control system
CN103442177A (en) * 2013-08-30 2013-12-11 程治永 PTZ video camera control system and method based on gesture identification
CN103474010A (en) * 2013-09-22 2013-12-25 广州中国科学院软件应用技术研究所 Video analysis-based intelligent playing method and device of outdoor advertisement
JP6307852B2 (en) * 2013-11-26 2018-04-11 セイコーエプソン株式会社 Image display device and method for controlling image display device
TW201543268A (en) * 2014-01-07 2015-11-16 Thomson Licensing System and method for controlling playback of media using gestures
CN103885587A (en) * 2014-02-21 2014-06-25 联想(北京)有限公司 Information processing method and electronic equipment
CN104809387B (en) * 2015-03-12 2017-08-29 山东大学 Contactless unlocking method and device based on video image gesture identification
CN105095882B (en) * 2015-08-24 2019-03-19 珠海格力电器股份有限公司 The recognition methods of gesture identification and device
CN106886275B (en) * 2015-12-15 2020-03-20 比亚迪股份有限公司 Control method and device of vehicle-mounted terminal and vehicle
CN107025420A (en) * 2016-01-29 2017-08-08 中兴通讯股份有限公司 The method and apparatus of Human bodys' response in video
CN105825193A (en) * 2016-03-25 2016-08-03 乐视控股(北京)有限公司 Method and device for position location of center of palm, gesture recognition device and intelligent terminals
CN105930811B (en) * 2016-04-26 2020-03-10 济南梦田商贸有限责任公司 Palm texture feature detection method based on image processing
CN106022211B (en) * 2016-05-04 2019-06-28 北京航空航天大学 A method of utilizing gesture control multimedia equipment
CN106197437A (en) * 2016-07-01 2016-12-07 蔡雄 A kind of Vehicular guidance system possessing Road Detection function
CN106227230A (en) * 2016-07-09 2016-12-14 东莞市华睿电子科技有限公司 A kind of unmanned aerial vehicle (UAV) control method
CN108230328B (en) * 2016-12-22 2021-10-22 新沂阿凡达智能科技有限公司 Method and device for acquiring target object and robot
CN107390573B (en) * 2017-06-28 2020-05-29 长安大学 Intelligent wheelchair system based on gesture control and control method
CN108509853B (en) * 2018-03-05 2021-11-05 西南民族大学 Gesture recognition method based on camera visual information
CN108647564A (en) * 2018-03-28 2018-10-12 安徽工程大学 A kind of gesture recognition system and method based on casement window device
CN113032605B (en) * 2019-12-25 2023-08-18 中移(成都)信息通信科技有限公司 Information display method, device, equipment and computer storage medium
CN111580652B (en) * 2020-05-06 2024-01-16 Oppo广东移动通信有限公司 Video playing control method and device, augmented reality equipment and storage medium
CN112016440B (en) * 2020-08-26 2024-02-20 杭州云栖智慧视通科技有限公司 Target pushing method based on multi-target tracking
CN114153308B (en) * 2020-09-08 2023-11-21 阿里巴巴集团控股有限公司 Gesture control method, gesture control device, electronic equipment and computer readable medium
CN113221892A (en) * 2021-05-12 2021-08-06 佛山育脉科技有限公司 Palm image determination method and device and computer readable storage medium
CN113807328B (en) * 2021-11-18 2022-03-18 济南和普威视光电技术有限公司 Target detection method, device and medium based on algorithm fusion
CN116030411B (en) * 2022-12-28 2023-08-18 宁波星巡智能科技有限公司 Human privacy shielding method, device and equipment based on gesture recognition

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1276572A (en) * 1999-06-08 2000-12-13 松下电器产业株式会社 Hand shape and gesture identifying device, identifying method and medium for recording program contg. said method
CN1860429A (en) * 2003-09-30 2006-11-08 皇家飞利浦电子股份有限公司 Gesture to define location, size, and/or content of content window on a display
CN101332362A (en) * 2008-08-05 2008-12-31 北京中星微电子有限公司 Interactive delight system based on human posture recognition and implement method thereof
CN101605399A (en) * 2008-06-13 2009-12-16 英华达(上海)电子有限公司 A kind of portable terminal and method that realizes Sign Language Recognition

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1276572A (en) * 1999-06-08 2000-12-13 松下电器产业株式会社 Hand shape and gesture identifying device, identifying method and medium for recording program contg. said method
CN1860429A (en) * 2003-09-30 2006-11-08 皇家飞利浦电子股份有限公司 Gesture to define location, size, and/or content of content window on a display
CN101605399A (en) * 2008-06-13 2009-12-16 英华达(上海)电子有限公司 A kind of portable terminal and method that realizes Sign Language Recognition
CN101332362A (en) * 2008-08-05 2008-12-31 北京中星微电子有限公司 Interactive delight system based on human posture recognition and implement method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
胡文娟 (Hu Wenjuan). 手势驱动编钟演奏技术的研究与***实现 [Research and *** implementation of gesture-driven bianzhong chime performance technology]. 2007. *

Also Published As

Publication number Publication date
CN102081918A (en) 2011-06-01

Similar Documents

Publication Publication Date Title
CN102081918B (en) Video image display control method and video image display device
CN104268583B (en) Pedestrian re-recognition method and system based on color area features
CN102270348B (en) Method for tracking deformable hand gesture based on video streaming
US5774591A (en) Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images
CN102298709A (en) Energy-saving intelligent identification digital signage fused with multiple characteristics in complicated environment
KR100612858B1 (en) Method and apparatus for tracking human using robot
CN105046206B (en) Based on the pedestrian detection method and device for moving prior information in video
CN110221699B (en) Eye movement behavior identification method of front-facing camera video source
CN108197534A (en) A kind of head part's attitude detecting method, electronic equipment and storage medium
CN102402680A (en) Hand and indication point positioning method and gesture confirming method in man-machine interactive system
CN102831439A (en) Gesture tracking method and gesture tracking system
CN111898407B (en) Human-computer interaction operating system based on human face action recognition
CN101470809A (en) Moving object detection method based on expansion mixed gauss model
CN110235169A (en) Evaluation system of making up and its method of operating
CN104281839A (en) Body posture identification method and device
CN105912126B (en) A kind of gesture motion is mapped to the adaptive adjusting gain method at interface
CN108983980A (en) A kind of mobile robot basic exercise gestural control method
CN109558855B (en) A kind of space gesture recognition methods combined based on palm contour feature with stencil matching method
US20090033622A1 (en) Smartscope/smartshelf
CN103793056A (en) Mid-air gesture roaming control method based on distance vector
CN110032932A (en) A kind of human posture recognition method based on video processing and decision tree given threshold
CN103049748A (en) Behavior-monitoring method and behavior-monitoring system
CN102073878A (en) Non-wearable finger pointing gesture visual identification method
CN103020631A (en) Human movement identification method based on star model
Manresa-Yee et al. Towards hands-free interfaces based on real-time robust facial gesture recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SHENZHEN RUIGONG TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: SHENZHEN GRADUATE SCHOOL OF PEKING UNIVERSITY

Effective date: 20150624

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20150624

Address after: 17B1, EVOC Technology Building, No. 31 Gaoxinzhong 4th Road, Nanshan District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Rui Technology Co., Ltd.

Address before: Peking University Campus, Shenzhen University Town, Xili, Nanshan District, Shenzhen, Guangdong 518055

Patentee before: Shenzhen Graduate School of Peking University

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130220

Termination date: 20171229

CF01 Termination of patent right due to non-payment of annual fee