CN103218826A - Projectile detecting, three-dimensional positioning and trajectory predicting method based on Kinect - Google Patents

Projectile detecting, three-dimensional positioning and trajectory predicting method based on Kinect

Info

Publication number
CN103218826A
CN103218826A, CN201310087780A, CN2013100877803A
Authority
CN
China
Prior art keywords
kinect
projectile
depth
pixel
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013100877803A
Other languages
Chinese (zh)
Other versions
CN103218826B (en)
Inventor
陶熠昆
王聪颖
王宏涛
周连杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZHEJIANG SUPCON RESEARCH Co Ltd
Zhejiang Guozi Robot Technology Co Ltd
Original Assignee
ZHEJIANG SUPCON RESEARCH Co Ltd
Zhejiang Guozi Robot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZHEJIANG SUPCON RESEARCH Co Ltd, Zhejiang Guozi Robot Technology Co Ltd filed Critical ZHEJIANG SUPCON RESEARCH Co Ltd
Priority to CN201310087780.3A priority Critical patent/CN103218826B/en
Publication of CN103218826A publication Critical patent/CN103218826A/en
Application granted granted Critical
Publication of CN103218826B publication Critical patent/CN103218826B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A projectile detection, three-dimensional localization and trajectory prediction method based on a Kinect includes the following steps: S1, depth background modeling, which includes: S101, quantizing and normalizing all depth values of the background pixels into the range 0-255, and S102, storing, for each pixel of the background, all depth values observed over a period of time x in a Boolean array, thereby obtaining a data volume that serves as the background model; S2, acquiring the depth information of the projectile image through the Kinect API; S3, performing foreground extraction on the depth information of the projectile image to obtain a foreground image; S4, dividing the foreground image into several connected components; S5, judging whether each connected component is a projectile, and calculating the three-dimensional coordinates of the projectile according to the calibration model of the Kinect depth camera; and S6, taking air resistance as one of the state quantities of the filter, performing filtering, and predicting the trajectory of the projectile.

Description

Projectile detection, three-dimensional localization and trajectory prediction method based on Kinect
Technical field
The present invention relates to the field of projectile detection and trajectory prediction, and in particular to a projectile detection, three-dimensional localization and trajectory prediction method based on Kinect. It can be applied to trajectory analysis of balls in ball games (table tennis, badminton, etc.), to robot systems that act on a projectile's trajectory, and to other fields that use this technique.
Background technology
It is a lot of to realize that projectile detects with the scheme of trajectory predictions, the method for present more employing stereoscopic vision.Because influenced by illumination variation, environmental change bigger for vision system, though can conform the variation with illumination of a lot of technology is arranged, but still be a very challenging problem.And binocular vision system is had relatively high expectations to the processor computing power, and algorithm is very complicated, and cost is also higher.
Kinect is a body sense equipment that MS releases, and generally is used to develop the body sense interactive game of moving on the XBox360 game host.Kinect is from the technological essence RGB-D sensor of saying so, the algorithm that Microsoft provides relevant human detection and each joint position of human body to detect simultaneously.
Summary of the invention
To address the above shortcomings of the prior art, the present invention provides a projectile detection, three-dimensional localization and trajectory prediction method based on Kinect that perceives the three-dimensional coordinates of a projectile. Compared with traditional stereoscopic vision methods, it is more reliable under changing illumination, requires neither stereo camera calibration nor a stereo matching algorithm, and the overall scheme is simple, stable and low in cost.
The present invention is achieved through the following technical solution:
A projectile detection, three-dimensional localization and trajectory prediction method based on Kinect comprises the following steps:
S1, depth background modeling, comprising:
S101, quantizing and normalizing all depth values of the background pixels into the range 0-255;
S102, for each pixel of the background, storing all depth values observed over a period of time x in a Boolean array, thereby obtaining a data volume that serves as the background model;
S2, acquiring the depth information of the image through the Kinect API;
S3, performing foreground extraction on the depth information of the projectile image to obtain a foreground image;
S4, dividing the foreground image into several independent connected components;
S5, judging whether each connected component is a projectile, and calculating the three-dimensional coordinates of the projectile through the calibration model of the Kinect depth camera;
S6, taking air resistance as one of the state quantities of the filter, performing filtering, and predicting the trajectory of the projectile.
Preferably, the size of the Boolean array is 255, and the data volume is width × height × 255.
Preferably, the foreground extraction comprises: letting a be the value to which the depth value of a pixel at coordinate (x, y) is quantized in the range 0-255, and judging whether any of the entries a-3 to a+3 of the Boolean array of the corresponding pixel (x, y) in the background model is "true"; if so, the pixel is background, otherwise it is foreground.
Preferably, step S4 divides the foreground image into several independent connected components by seed filling.
Preferably, the filtering adopts Kalman filtering.
Preferably, the depth background modeling adopts codebook background modeling.
Preferably, x is 10 minutes.
Description of drawings
Fig. 1 is a flow chart of the projectile detection, three-dimensional localization and trajectory prediction method based on Kinect according to the present invention.
Fig. 2 illustrates a preferred embodiment of the projectile detection, three-dimensional localization and trajectory prediction method based on Kinect according to the present invention.
Embodiment
The present invention is described in detail below in conjunction with an embodiment. The embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation, but the scope of protection of the present invention is not limited to the following embodiment.
Referring to Fig. 1, the flow chart of the present invention, the method is carried out according to the following steps:
1. Depth background modeling
Since the environment in which projectiles are detected may be complex yet relatively static, it is necessary to model the background; the purpose of background modeling is to better distinguish background from foreground in the input image. Because the background may not be completely invariant, the present invention uses a codebook to describe the states of interest in the background. Codebook modeling is a conventional means of background modeling and is not elaborated or limited here. The specific procedure is as follows:
All possible depth values are quantized and normalized into the range 0-255.
For each pixel in the image, depth values are collected over a period of time (typically 10 minutes), and a Boolean array of size 255 stores all depth values collected at that pixel during those 10 minutes.
The above operation is performed for every pixel in the image, yielding a data volume of (image width × image height × 255) that serves as the final background model.
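As an illustration only, the following Python sketch builds such a background model. The 4 m quantization range is an assumed working range, and a 256-entry occupancy array is used so that every quantized value 0-255 has its own slot (the patent itself states an array size of 255):

    import numpy as np

    def quantize_depth(depth_mm, max_depth_mm=4000):
        # Quantize raw depth values (assumed in millimetres) to 0-255.
        # The 4 m working range is an illustrative assumption, not a patent figure.
        d = np.clip(depth_mm, 0, max_depth_mm)
        return (d * 255 // max_depth_mm).astype(np.uint8)

    def build_background_model(depth_frames):
        # Occupancy table of shape (height, width, 256): entry [y, x, a] is True
        # if quantized depth a was ever observed at pixel (y, x) during the
        # modelling period (e.g. roughly 10 minutes of frames).
        h, w = depth_frames[0].shape
        model = np.zeros((h, w, 256), dtype=bool)
        ys, xs = np.indices((h, w))
        for frame in depth_frames:
            model[ys, xs, quantize_depth(frame)] = True
        return model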
2. Depth map acquisition
The depth information of the image can be obtained directly through the API functions officially provided for Kinect.
3. Foreground extraction
Applying the background model to the input image extracts the foreground, i.e. the background portion is removed from the image. The extraction method is as follows: let a be the value to which the depth value of the pixel at coordinate (x, y) is quantized in the range 0-255; then check whether any of the seven Boolean entries a-3 to a+3 in the Boolean array of the corresponding pixel (x, y) in the background model is "true". If so, the pixel is considered background; otherwise it is foreground.
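A minimal sketch of this per-pixel test, assuming the occupancy model from the previous sketch; the ±3 tolerance follows the text, while clipping at the ends of the 0-255 range is an added assumption:

    import numpy as np

    def extract_foreground(depth_mm, model, max_depth_mm=4000, tol=3):
        # Background if any occupancy bin within +/-tol of the pixel's quantized
        # depth was marked True during modelling; otherwise foreground.
        # `model` is the (height, width, 256) boolean array from the modelling step.
        q = (np.clip(depth_mm, 0, max_depth_mm) * 255 // max_depth_mm).astype(int)
        h, w, _ = model.shape
        ys, xs = np.indices((h, w))
        foreground = np.ones((h, w), dtype=bool)
        for offset in range(-tol, tol + 1):
            bins = np.clip(q + offset, 0, 255)   # clipping at the range ends is an assumption
            foreground &= ~model[ys, xs, bins]   # any hit in a-3..a+3 => background
        return foreground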
4. Image segmentation
Image segmentation divides the extracted foreground image into independent connected components. The present invention uses seed filling to split the foreground into mutually independent connected components.
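One possible seed-filling (flood-fill) labelling of the foreground mask is sketched below; the 4-connectivity and the minimum component size used to suppress noise are illustrative assumptions rather than requirements from the patent:

    import numpy as np
    from collections import deque

    def label_components(foreground, min_size=20):
        # Seed filling: pick an unvisited foreground pixel as a seed and grow the
        # region with a breadth-first flood fill over 4-connected neighbours.
        # min_size is an assumed noise threshold, not a value from the patent.
        h, w = foreground.shape
        visited = np.zeros((h, w), dtype=bool)
        components = []
        for sy in range(h):
            for sx in range(w):
                if not foreground[sy, sx] or visited[sy, sx]:
                    continue
                component, queue = [], deque([(sy, sx)])
                visited[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    component.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and foreground[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if len(component) >= min_size:
                    components.append(component)
        return components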
5. Projectile identification
Projectile identification examines each connected component produced by image segmentation and decides whether it is a projectile. The criteria include: the size of the connected region, its shape features, and whether its motion follows general projectile motion. Projectiles are recognized according to these criteria, and their three-dimensional coordinates are calculated through the calibration model of the Kinect depth camera.
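The patent does not spell out the calibration model; as one hypothetical realization, the sketch below back-projects the centroid of a candidate component to camera coordinates with a generic pinhole model, where the intrinsics fx, fy, cx and cy stand for assumed Kinect depth-camera calibration values:

    import numpy as np

    def component_centroid_3d(component, depth_mm,
                              fx=580.0, fy=580.0, cx=320.0, cy=240.0):
        # Back-project a component's centroid to camera coordinates with a
        # pinhole model. fx, fy, cx, cy stand for depth-camera intrinsics that
        # would come from calibration; the numbers here are assumptions.
        ys = np.array([p[0] for p in component])
        xs = np.array([p[1] for p in component])
        z = np.median(depth_mm[ys, xs]) / 1000.0   # metres; median suppresses outliers
        u, v = xs.mean(), ys.mean()
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])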
6. Trajectory prediction based on Kalman filtering
The present invention predicts the trajectory of the projectile by Kalman filtering. It should be noted that, given the uncertainty of the projectile's air resistance model and coefficient, the present invention estimates the air resistance coefficient of the projectile together with the other state quantities of the Kalman filter (an existing filtering technique, not elaborated or limited here), which allows the present invention to adapt to projectiles of various types.
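The patent specifies Kalman filtering with the air resistance coefficient carried as an extra state quantity; since quadratic drag makes the motion model nonlinear, the sketch below realizes this as an extended Kalman filter with a numerically linearized transition. This is one possible interpretation rather than the patent's exact formulation, and the noise covariances are illustrative assumptions. Trajectory prediction then amounts to repeatedly calling predict() without updates to roll the state forward in time.

    import numpy as np

    GRAVITY = np.array([0.0, 0.0, -9.81])          # m/s^2, z axis pointing up

    def transition(state, dt):
        # State is [px, py, pz, vx, vy, vz, k]; the drag coefficient k is part
        # of the state so the filter estimates it online, as the patent describes.
        p, v, k = state[:3], state[3:6], state[6]
        a = GRAVITY - k * np.linalg.norm(v) * v    # quadratic air drag
        return np.concatenate([p + v * dt, v + a * dt, [k]])

    def jacobian(state, dt, eps=1e-5):
        # Numerical linearization of the transition for the EKF predict step.
        n = state.size
        J = np.zeros((n, n))
        for i in range(n):
            d = np.zeros(n)
            d[i] = eps
            J[:, i] = (transition(state + d, dt) - transition(state - d, dt)) / (2 * eps)
        return J

    class DragEKF:
        # Minimal extended Kalman filter; the noise covariances are illustrative.
        def __init__(self, state0):
            self.x = np.asarray(state0, dtype=float)
            self.P = np.eye(7)
            self.Q = np.diag([1e-4] * 3 + [1e-2] * 3 + [1e-4])
            self.R = np.eye(3) * 1e-3
            self.H = np.hstack([np.eye(3), np.zeros((3, 4))])   # only position is measured

        def predict(self, dt):
            F = jacobian(self.x, dt)
            self.x = transition(self.x, dt)
            self.P = F @ self.P @ F.T + self.Q
            return self.x[:3]                                    # predicted position

        def update(self, measured_position):
            innovation = np.asarray(measured_position) - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ innovation
            self.P = (np.eye(7) - K @ self.H) @ self.P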
Referring to Fig. 2, to help the skilled person understand the present invention, a preferred embodiment of the projectile detection, three-dimensional localization and trajectory prediction method based on Kinect is given below, but the present invention is not limited to it.
Taking table tennis trajectory prediction as an example, the required equipment comprises: a table tennis table; Kinect devices (two Kinect devices are used here in order to cover the table more fully, and the present invention is not limited in this respect); and a signal processing device connected to the two Kinect devices for signal processing and trajectory prediction.
(1) The main functions of this embodiment include:
1. Table tennis trajectory replay and speed measurement: the system records the three-dimensional trajectory of the ball in real time and can play back the trajectory sequence and real-time speed in slow motion during a match, achieving multi-angle replay of match information;
2. Automatic table tennis scoring: the match is scored automatically by judging the trajectory;
3. Table tennis refereeing support: for situations that are hard to judge by eye, the system provides an accurate decision.
(2) The use flow of this embodiment is as follows:
1. Background modeling: after the table is set up and outside match time, the background is modeled;
2. Air resistance parameter identification: athletes train and compete, and during play the system learns the air resistance parameter of the table tennis ball;
3. Actual use during matches.
In addition, according to the above embodiment, the present invention can also be extended to a table tennis robot, i.e. a robot replaces the athlete in the above embodiment: according to the trajectory prediction from the Kinect devices, the robot plans the trajectory of its mechanical arm to return the ball. One robot can play against a human, and two robots can play against each other, which can be applied to science-museum exhibitions or other fields.
The present invention has two main advantages:
1) Reliability: as a vision system, Kinect uses structured light and is highly robust to changes in illumination and environment. Whereas a stereo vision system computes depth information indirectly, Kinect obtains depth information directly, which greatly reduces the computational cost.
2) Cost: Kinect is a consumer motion-sensing product released by Microsoft, so its cost is very low. If a stereo vision scheme were adopted and required to adapt to some illumination and environmental changes, the cost of both the cameras and the processor would far exceed that of the scheme set forth in the present invention.
What is disclosed above is only a specific embodiment of the present application, but the present application is not limited to it; any variation that a person skilled in the art can conceive shall fall within the scope of protection of the present application.

Claims (7)

1. A projectile detection, three-dimensional localization and trajectory prediction method based on Kinect, characterized by comprising the following steps:
S1, depth background modeling, comprising:
S101, quantizing and normalizing all depth values of the background pixels into the range 0-255;
S102, for each pixel of the background, storing all depth values observed over a period of time x in a Boolean array, thereby obtaining a data volume that serves as the background model;
S2, acquiring the depth information of the image through the Kinect API;
S3, performing foreground extraction on the depth information of said projectile image to obtain a foreground image;
S4, dividing the foreground image into several independent connected components;
S5, judging whether said connected components are projectiles, and calculating the three-dimensional coordinates of said projectiles through the calibration model of the Kinect depth camera;
S6, taking air resistance as one of the state quantities of the filter, performing filtering, and predicting the trajectory of the projectile.
2. The projectile detection, three-dimensional localization and trajectory prediction method based on Kinect according to claim 1, characterized in that the size of said Boolean array is 255 and said data volume is width × height × 255.
3. The projectile detection, three-dimensional localization and trajectory prediction method based on Kinect according to claim 1, characterized in that said foreground extraction comprises: letting a be the value to which the depth value of a pixel at coordinate (x, y) is quantized in the range 0-255, and judging whether any of the entries a-3 to a+3 of the Boolean array of the corresponding pixel (x, y) in the background model is "true"; if so, the pixel is background, otherwise it is foreground.
4. The projectile detection, three-dimensional localization and trajectory prediction method based on Kinect according to claim 1, characterized in that said step S4 divides the foreground image into several independent connected components by seed filling.
5. The projectile detection, three-dimensional localization and trajectory prediction method based on Kinect according to claim 1, characterized in that said filtering adopts Kalman filtering.
6. The projectile detection, three-dimensional localization and trajectory prediction method based on Kinect according to claim 1, characterized in that said depth background modeling adopts codebook background modeling.
7. The projectile detection, three-dimensional localization and trajectory prediction method based on Kinect according to claim 1, characterized in that said x is 10 minutes.
CN201310087780.3A 2013-03-19 2013-03-19 Projectile detection, three-dimensional localization and trajectory prediction method based on Kinect Expired - Fee Related CN103218826B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310087780.3A CN103218826B (en) 2013-03-19 2013-03-19 Projectile detection, three-dimensional localization and trajectory prediction method based on Kinect

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310087780.3A CN103218826B (en) 2013-03-19 2013-03-19 Projectile detection, three-dimensional localization and trajectory prediction method based on Kinect

Publications (2)

Publication Number Publication Date
CN103218826A (en) 2013-07-24
CN103218826B CN103218826B (en) 2016-08-10

Family

ID=48816569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310087780.3A Expired - Fee Related CN103218826B (en) 2013-03-19 2013-03-19 Projectile detection, three-dimensional localization and trajectory prediction method based on Kinect

Country Status (1)

Country Link
CN (1) CN103218826B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886618A (en) * 2014-03-12 2014-06-25 新奥特(北京)视频技术有限公司 Ball detection method and device
CN105319991A (en) * 2015-11-25 2016-02-10 哈尔滨工业大学 Kinect visual information-based robot environment identification and operation control method
CN110553628A (en) * 2019-08-28 2019-12-10 华南理工大学 Depth camera-based flying object capturing method
CN110688965A (en) * 2019-09-30 2020-01-14 北京航空航天大学青岛研究院 IPT (inductive power transfer) simulation training gesture recognition method based on binocular vision

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040090523A1 (en) * 2001-06-27 2004-05-13 Tetsujiro Kondo Image processing apparatus and method and image pickup apparatus
CN102681661A (en) * 2011-01-31 2012-09-19 微软公司 Using a three-dimensional environment model in gameplay

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040090523A1 (en) * 2001-06-27 2004-05-13 Tetsujiro Kondo Image processing apparatus and method and image pickup apparatus
CN102681661A (en) * 2011-01-31 2012-09-19 微软公司 Using a three-dimensional environment model in gameplay

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
K. K. Biswas et al.: "Gesture Recognition using Microsoft Kinect", Proceedings of the 5th International Conference on Automation, Robotics and Applications, 8 December 2011 (2011-12-08), pages 100-103 *
Zhang Yi et al.: "Gesture trajectory recognition and application based on Kinect depth image information" (基于Kinect深度图像信息的手势轨迹识别及应用), Application Research of Computers (计算机应用研究), vol. 29, no. 9, 15 September 2012 (2012-09-15), pages 2547-3550 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886618A (en) * 2014-03-12 2014-06-25 新奥特(北京)视频技术有限公司 Ball detection method and device
CN105319991A (en) * 2015-11-25 2016-02-10 哈尔滨工业大学 Kinect visual information-based robot environment identification and operation control method
CN110553628A (en) * 2019-08-28 2019-12-10 华南理工大学 Depth camera-based flying object capturing method
CN110688965A (en) * 2019-09-30 2020-01-14 北京航空航天大学青岛研究院 IPT (inductive power transfer) simulation training gesture recognition method based on binocular vision
CN110688965B (en) * 2019-09-30 2023-07-21 北京航空航天大学青岛研究院 IPT simulation training gesture recognition method based on binocular vision

Also Published As

Publication number Publication date
CN103218826B (en) 2016-08-10

Similar Documents

Publication Publication Date Title
CA2830499C (en) Virtual golf simulation apparatus and sensing device and method used for the same
Bloom et al. G3D: A gaming action dataset and real time action recognition evaluation framework
US9162132B2 (en) Virtual golf simulation apparatus and sensing device and method used for the same
US9514379B2 (en) Sensing device and method used for virtual golf simulation apparatus
US20170193710A1 (en) System and method for generating a mixed reality environment
CN107543530B (en) Method, system, and non-transitory computer-readable recording medium for measuring rotation of ball
KR101898782B1 (en) Apparatus for tracking object
CN106796453A (en) Projecting apparatus is driven to generate the experience of communal space augmented reality
CN105074776A (en) In situ creation of planar natural feature targets
CN107909889B (en) Gobang man-machine playing experiment teaching system based on visual guidance
CN103218826A (en) Projectile detecting, three-dimensional positioning and trajectory predicting method based on Kinect
CN102944180B (en) Table tennis ball tossing height detecting system and method based on image processing
Moshayedi et al. Kinect based virtual referee for table tennis game: TTV (Table Tennis Var System)
EP0847201A1 (en) Real time tracking system for moving bodies on a sports field
US9333412B2 (en) Virtual golf simulation apparatus and method and sensing device and method used for the same
CN110348370A (en) A kind of augmented reality system and method for human action identification
CN110910489B (en) Monocular vision-based intelligent court sports information acquisition system and method
US20230285832A1 (en) Automatic ball machine apparatus utilizing player identification and player tracking
Winarno et al. Object detection for KRSBI robot soccer using PeleeNet on omnidirectional camera
CN109917907B (en) Card-based dynamic storyboard interaction method
Uchiyama et al. AR display of visual aids for supporting pool games by online markerless tracking
CN109200575A (en) The method and system for reinforcing the movement experience of user scene of view-based access control model identification
Hsu et al. Computer-assisted billiard self-training using intelligent glasses
Reis Robust vision algorithms for quadruped soccer robots
CN111784749A (en) Space positioning and motion analysis system based on binocular vision

Legal Events

  • C06 / PB01: Publication
  • C10 / SE01: Entry into substantive examination (entry into force of request for substantive examination)
  • C14 / GR01: Grant of patent or utility model (patent grant)
  • CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 20160810; termination date: 20170319)