CN110648362A - Binocular stereo vision badminton positioning identification and posture calculation method - Google Patents

Binocular stereo vision badminton positioning identification and posture calculation method

Info

Publication number
CN110648362A
CN110648362A
Authority
CN
China
Prior art keywords
badminton
shuttlecocks
rcnn
fast
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910859889.1A
Other languages
Chinese (zh)
Other versions
CN110648362B (en)
Inventor
刘英杰 (Liu Yingjie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shangqiu Normal University
Original Assignee
Shangqiu Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shangqiu Normal University filed Critical Shangqiu Normal University
Priority to CN201910859889.1A priority Critical patent/CN110648362B/en
Publication of CN110648362A publication Critical patent/CN110648362A/en
Application granted granted Critical
Publication of CN110648362B publication Critical patent/CN110648362B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10021 Stereoscopic video; Stereoscopic image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a binocular stereo vision badminton positioning identification and posture calculation method, which comprises: collecting images of shuttlecocks in different postures as a sample data set; constructing a deep neural network based on the Faster-RCNN algorithm; training the deep neural network on the sample data set to obtain a Faster-RCNN training model; identifying shuttlecocks with the trained model and determining their positions in the images; performing three-dimensional reconstruction of the identified shuttlecocks with binocular stereo vision to determine their spatial positions; and carrying out binarization and filtering on the shuttlecock images with a Canny operator to extract clear edges and calculate the real-time attitude angle of each shuttlecock.

Description

Binocular stereo vision badminton positioning identification and posture calculation method
Technical Field
The invention relates to the technical field of computer vision, in particular to a binocular stereo vision badminton positioning identification and posture calculation method.
Background
In recent years, with the growth of national fitness activities, badminton, a sport suitable for all ages, has become popular with many people. During badminton training, especially multi-shuttle training, large numbers of shuttlecocks end up scattered randomly on the ground, and the traditional method of picking them up manually consumes considerable manpower and is inefficient. Automation has been applied and studied in the badminton field, but automatic shuttlecock pickup remains essentially a blank area. Collecting shuttlecocks fully automatically by intelligent means can greatly reduce human resource consumption and improve training efficiency, and therefore has important application value.
Automatic shuttlecock pickup must solve the problems of recognition, position calculation and attitude estimation. Accurate target identification is the prerequisite for automatic grasping. Unlike balls such as footballs, a shuttlecock's outline changes constantly and its white color differs little from the background, so traditional outline- and color-based methods struggle to identify it accurately.
Spatial positioning is another necessary condition for grasping: to control an agent to pick up shuttlecocks automatically, their spatial coordinates must be known. Position information can generally be obtained in three ways: ultrasonic-based, laser-based and vision-based. As a typical non-contact scheme, ultrasonic sensing offers good real-time performance and high measurement accuracy, but ultrasonic ranging is easily affected by the surface size and material characteristics of the target, and its measurement range is relatively short. Laser ranging equipment achieves high-precision measurement but is costly and has difficulty estimating the distance of a moving object. Compared with these two approaches, machine vision has a larger field of view, needs only instantaneous image information during operation, and can estimate the distance of any target in view regardless of how the target moves.
Accurate pose estimation provides a good basis for subsequent shuttlecock path prediction and tracking computation.
Disclosure of Invention
In view of the above, the invention provides a binocular stereo vision badminton positioning identification and posture calculation method, which has the characteristics of accurate positioning, accurate feature-point matching and accurate attitude-angle calculation.
The invention is realized by the following technical scheme:
a binocular stereoscopic vision badminton positioning identification and posture calculation method comprises the following steps:
s1): collecting images of shuttlecocks in different postures as a sample data set;
s2): constructing a deep neural network based on the Faster-RCNN algorithm, wherein the Faster-RCNN algorithm comprises a candidate-box extraction module and a detection module;
s3): training the deep neural network on the sample data set to obtain a Faster-RCNN training model, wherein the training model comprises a convolutional layer, a region proposal network, a region-of-interest pooling layer and a classifier; the convolutional layer extracts feature maps from the images, the region proposal network generates candidate regions, the region-of-interest pooling layer collects the input feature maps and candidate sets and extracts candidate feature maps, and the classifier uses the candidate feature maps to determine the category of each candidate while bounding-box regression is performed again to obtain the accurate position of the recognition target;
s4): identifying shuttlecocks with the Faster-RCNN training model and determining their positions in the images;
s5): performing three-dimensional reconstruction of the identified shuttlecocks with binocular stereo vision and determining their spatial positions;
s6): carrying out binarization and filtering processing on the shuttlecock image from step S5) with a Canny operator, extracting clear edges, and calculating the real-time attitude angle of the shuttlecock;
In the step S5), the three-dimensional reconstruction of the shuttlecock by binocular stereo vision comprises the following steps:
p1): rotating and translating the world coordinate system to obtain coordinates in the binocular camera coordinate system;
p2): calculating the image physical coordinates corresponding to the shuttlecock by triangulation;
p3): obtaining the pixel coordinates of the shuttlecock in the image by the least squares method, based on the mapping between physical size and pixel units;
The calculation method of the attitude angle in step S6) comprises the following steps:
t1): randomly taking three points on the extracted shuttlecock edge to construct a marking circle;
t2): arbitrarily taking three different points on the marking circle to form three different space vectors;
t3): arbitrarily taking two of the space vectors and computing their cross product to obtain the normal vector of the marking circle;
t4): calculating the included angle between the projection of the normal vector in the horizontal plane and the horizontal axis, which is the attitude angle of the shuttlecock.
The binocular stereo vision badminton positioning identification and posture calculation method disclosed by the invention adopts a Faster-RCNN deep learning network for shuttlecock detection and recognition, binocular stereo vision for spatial positioning, and an attitude estimation algorithm combining Canny-operator edge extraction with space-vector projection. It effectively solves target recognition, greatly improves the positioning precision and attitude-angle accuracy for shuttlecocks, provides a necessary data interface for automatic shuttlecock pickup, and has the characteristics of accurate positioning, accurate feature-point matching and accurate attitude-angle calculation.
Drawings
Fig. 1 is a flow chart of binocular stereo vision badminton positioning identification and posture calculation.
FIG. 2 shows the result of Faster-RCNN target recognition.
Fig. 3 is a binocular vision hardware system.
FIG. 4 is a graph of the results of spatial positioning of shuttlecocks.
FIG. 5 is an edge extraction graph using the Canny operator with region filtering.
FIG. 6 is a schematic diagram of any three points at the edge of the shuttlecock after extraction.
FIG. 7 is a schematic diagram of a space vector projection of a marker circle.
Wherein:
1. binocular stereo camera; 2. binocular vision hardware system.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by those of ordinary skill in the art based on these embodiments without inventive work fall within the scope of the present invention.
As shown in fig. 1, a binocular stereo vision badminton positioning identification and posture calculation method includes the following steps:
s1): collecting images of shuttlecocks in different postures as a sample data set;
s2): constructing a deep neural network based on the Faster-RCNN algorithm, wherein the Faster-RCNN algorithm comprises a candidate-box extraction module and a detection module;
s3): training the deep neural network on the sample data set to obtain a Faster-RCNN training model, wherein the training model comprises a convolutional layer, a region proposal network, a region-of-interest pooling layer and a classifier; the convolutional layer extracts feature maps from the images, the region proposal network generates candidate regions, the region-of-interest pooling layer collects the input feature maps and candidate sets and extracts candidate feature maps, and the classifier uses the candidate feature maps to determine the category of each candidate while bounding-box regression is performed again to obtain the accurate position of the recognition target;
s4): identifying shuttlecocks with the Faster-RCNN training model and determining their positions in the images;
s5): performing three-dimensional reconstruction of the identified shuttlecocks with binocular stereo vision and determining their spatial positions;
s6): carrying out binarization and filtering processing on the shuttlecock image from step S5) with a Canny operator, extracting clear edges, and calculating the real-time attitude angle of the shuttlecock;
the fast-RCNN deep neural network has the advantages of high generation speed and high recognition rate. The fast-RCNN abandons the traditional sliding window, creatively adopts the convolutional network to generate the proposal frames by self, and shares the convolutional network with the target detection network, so that the number of the proposal frames is reduced from about 2000 original proposal frames to 300 proposal frames, the quality of the proposal frames is also substantially improved, and the generation speed of the detection frames is greatly improved.
The convolutional layer is the CNN-based feature extractor of the target detection network: Faster-RCNN first uses a group of basic convolution and pooling layers to extract feature maps from the image, and these feature maps are shared by the subsequent region proposal network and fully connected layers;
the region proposal network judges through a softmax function whether each candidate box contains the target, then refines the candidate boxes by bounding-box regression to obtain an accurate candidate set;
and the region-of-interest pooling layer collects the input feature maps and the candidate set, integrates this information to extract candidate feature maps, and sends them to the subsequent fully connected layers to determine the target category.
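To make the pooling step concrete, here is a minimal sketch (not from the patent) using torchvision's roi_pool to turn one candidate box into a fixed-size candidate feature map; the feature-map dimensions and box coordinates are illustrative assumptions.

```python
# Minimal sketch: pool one candidate box from a conv feature map into a 7 x 7
# candidate feature map, as the region-of-interest pooling layer does.
import torch
from torchvision.ops import roi_pool

feature_map = torch.rand(1, 256, 50, 50)                # (batch, channels, H, W) from the conv layers
boxes = torch.tensor([[0.0, 10.0, 10.0, 30.0, 30.0]])   # (batch_index, x1, y1, x2, y2) candidate box
candidate_features = roi_pool(feature_map, boxes, output_size=(7, 7), spatial_scale=1.0)
print(candidate_features.shape)  # torch.Size([1, 256, 7, 7]), ready for the classifier head
```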
150 shuttlecock samples collected on site, covering different illumination conditions, different degrees of damage and different brands, were input into the Faster-RCNN deep learning network for training; the whole training process took 40 minutes. The training result was then used to identify targets. As shown in figure 2, which gives the shuttlecock recognition results, the Faster-RCNN network quickly and accurately identifies both single and multiple targets in an image, with the recognition process taking 1.2-1.5 s. At short range the network still recognizes partially occluded targets well, mainly because partially occluded samples were introduced during training, so the convolutional layers can better capture the regional features of the visible part; in practice the camera can also be moved to avoid occlusion and further improve recognition accuracy.
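As a hedged illustration of how such a single-class shuttlecock detector could be assembled with today's tooling (this is not the patent's actual training code), the sketch below adapts torchvision's stock Faster R-CNN to two classes, background plus shuttlecock; torchvision >= 0.13 and the input size are assumptions.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_shuttlecock_detector(num_classes=2):
    """Two classes: background + shuttlecock. Assumes torchvision >= 0.13."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Swap the generic box predictor for a single-object-class head.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_shuttlecock_detector()
model.eval()
image = torch.rand(3, 600, 800)          # stand-in for one 800 x 600 camera frame, values in [0, 1]
with torch.no_grad():
    prediction = model([image])[0]       # dict with "boxes", "labels", "scores"
print(prediction["boxes"].shape, prediction["scores"].shape)
```

Fine-tuning on the 150 on-site samples would then proceed with the usual torchvision detection training loop.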
In the step S5), the three-dimensional reconstruction of the shuttlecock by binocular stereo vision comprises the following steps:
p1): rotating and translating the world coordinate system to obtain coordinates in the binocular camera coordinate system;
p2): calculating the image physical coordinates corresponding to the shuttlecock by triangulation;
p3): obtaining the pixel coordinates of the shuttlecock in the image by the least squares method, based on the mapping between physical size and pixel units;
The basic principle of binocular spatial positioning is the mutual conversion among image pixel coordinates, camera coordinates and world coordinates: first, the world coordinates (X, Y, Z) are rotated and translated to obtain the camera coordinates (Xc, Yc, Zc); then the corresponding image physical coordinates (u, v) are calculated by triangulation; finally, the pixel coordinates in the image are obtained from the mapping between physical size and pixel units.
The above process can be described by the following mathematical expressions. The conversion between the world coordinate system (X, Y, Z) and the image physical coordinates (u, v) is shown in equation (1):

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & s & x_0 \\ 0 & f_y & y_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{1}$$

where:
f_x, f_y: equivalent focal lengths
s: tilt factor
(x_0, y_0): optical centre coordinates
R: rotation matrix
T: translation matrix

Letting M denote the combined 3 × 4 projection matrix,

$$M = \begin{bmatrix} f_x & s & x_0 \\ 0 & f_y & y_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix}$$

we obtain:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \tag{2}$$

Finally, solving this system for both cameras by the least squares method relates the image physical coordinates (u, v) to the world coordinates (X, Y, Z), and the spatial positioning of the target is realized.
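A minimal sketch of the least-squares solution just described, under the standard homogeneous (DLT) formulation: given the 3 × 4 projection matrices of the two calibrated cameras and the matched pixel coordinates of the shuttlecock, the world coordinates (X, Y, Z) are recovered. Function and variable names are illustrative, not from the patent.

```python
import numpy as np

def triangulate(P_left, P_right, uv_left, uv_right):
    """Solve Zc * [u, v, 1]^T = P * [X, Y, Z, 1]^T for both views by least squares."""
    u1, v1 = uv_left
    u2, v2 = uv_right
    # Each view contributes two linear equations in the homogeneous point.
    A = np.vstack([
        u1 * P_left[2] - P_left[0],
        v1 * P_left[2] - P_left[1],
        u2 * P_right[2] - P_right[0],
        v2 * P_right[2] - P_right[1],
    ])
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to (X, Y, Z)
```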
As shown in fig. 3, the binocular vision hardware system 2 uses a Manifold device. The Manifold offers graphics capability on a par with a discrete PC graphics card, supports DirectX and OpenGL, runs the Ubuntu operating system so that Linux software can be conveniently installed and run, supports CUDA, OpenCV, ROS and the like, and can carry out complex image processing. The Manifold setup includes the binocular stereo camera 1.
The binocular stereo camera 1 is mounted with the aid of a three-axis gyroscope so that the left and right cameras lie in the same horizontal plane as nearly as possible, bringing the setup closer to the convergent-baseline model and theoretically reducing measurement error. In addition, the software adopts a recalibration technique: the image is corrected using the first calibration result, the corrected image is calibrated again, and that result is used for spatial positioning, which reduces error interference and effectively improves the precision of the calibration result.
The binocular stereo camera 1 has a physical focal length of 8 mm and a single-frame resolution of 800 × 600 pixels. The binocular camera is calibrated to determine its intrinsic and extrinsic parameters: the intrinsic parameters mainly comprise the optical centre coordinates, equivalent focal lengths and distortion coefficients, while the extrinsic parameters comprise the rotation and translation matrices between the two cameras. The experiment adopts Zhang Zhengyou's calibration method with an 8 × 7 checkerboard whose single square measures 100 × 100 mm.
The camera intrinsic calibration results are as follows:
the optical centre coordinates of the left camera are (430, 315), f_x = 5769.6, f_y = 5769.6;
the optical centre coordinates of the right camera are (428, 319), f_x = 5769.6, f_y = 5769.6.
The camera extrinsic calibration results are as follows:
(The numerical R and T matrices appear only as an image in the original document and are not reproduced here.)
where R is the rotation matrix and T is the spatial translation matrix of the right camera relative to the left camera.
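For reference, here is a hedged OpenCV sketch of the calibration procedure described above: Zhang's method on chessboard views for each camera, followed by stereo calibration for R and T. The caller-supplied image pairs are a placeholder, and reading the 8 × 7 specification as interior-corner counts is an assumption.

```python
import cv2
import numpy as np

def stereo_calibrate(image_pairs, pattern=(8, 7), square=100.0):
    """Zhang's method: per-camera intrinsics, then extrinsics R, T (right w.r.t. left).

    image_pairs: list of (left_gray, right_gray) chessboard views, caller supplied.
    pattern: interior-corner count (interpreting the 8 x 7 spec this way is assumed).
    square: chessboard square size in mm (100 x 100 mm per the text).
    """
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, left_pts, right_pts, size = [], [], [], None
    for left, right in image_pairs:
        ok_l, corners_l = cv2.findChessboardCorners(left, pattern)
        ok_r, corners_r = cv2.findChessboardCorners(right, pattern)
        if ok_l and ok_r:
            obj_pts.append(objp)
            left_pts.append(corners_l)
            right_pts.append(corners_r)
            size = left.shape[::-1]  # (width, height)

    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, d1, K2, d2, R, T
```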
As shown in fig. 4, the shuttlecock is spatially positioned using the calibration results. Six points in the shuttlecock's space are selected, and the binocular stereo camera performs spatial positioning calculations on them. In the figure, the vertical axis represents the spatial position of the shuttlecock and the horizontal axis the index of the selected point; squares denote the true spatial positions and circles the measured values. Graph (a) relates the true value of the shuttlecock's spatial position to the x-axis component of the measured value; graph (b) the y-axis component; graph (c) the z-axis component; and graph (d) compares the true and measured spatial positions overall. The measured and true values in the x-axis and z-axis directions approximately coincide, the measured values in the y-axis direction are evenly distributed on both sides of the true values, and when the three spatial directions x, y and z are combined the measured and true positions of the six points also approximately coincide, indicating that the shuttlecocks are spatially positioned with good accuracy.
The calculation method of the attitude angle in step S6) comprises the following steps:
t1): randomly taking three points on the extracted shuttlecock edge to construct a marking circle;
t2): arbitrarily taking three different points on the marking circle to form three different space vectors;
t3): arbitrarily taking two of the space vectors and computing their cross product to obtain the normal vector of the marking circle;
t4): calculating the included angle between the projection of the normal vector in the horizontal plane and the horizontal axis, which is the attitude angle of the shuttlecock.
As shown in figs. 5-7, the Canny operator extracts the edge map of the shuttlecock. The Canny operator uses two different thresholds to detect strong and weak edges respectively, and a weak edge is included in the output image only when it is connected to a strong edge; the Canny method is therefore not easily disturbed by noise and can detect true weak edges, and the shuttlecock edges are extracted well. Fig. 5 (c) shows the shuttlecock profile after edge extraction with the Canny operator, and (d) the profile after edge filtering; fig. 6 is a schematic diagram of three arbitrary points on the extracted shuttlecock profile edge.
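A short sketch of the binarization, filtering and two-threshold Canny extraction described above; the file path, blur kernel and threshold values are illustrative choices, not values from the patent.

```python
import cv2

img = cv2.imread("shuttlecock.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image path
blurred = cv2.GaussianBlur(img, (5, 5), 0)                 # filtering to suppress noise
_, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarization
# Low/high thresholds: weak edges are kept only where connected to strong edges.
edges = cv2.Canny(blurred, 50, 150)
```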
As shown in FIG. 7, three points are selected on the shuttlecock edge according to FIG. 6 to construct a marking circle, and three different points are selected on the marking circle to form three space vectors P1, P2, P3.
Two of these space vectors are chosen arbitrarily and their cross product gives the normal vector of the marking circle.
The included angle between the projection of the normal vector in the horizontal plane and the x-axis unit vector e is the attitude angle of the shuttlecock. The calculation method has high precision: when the actually measured attitude angle of the shuttlecock is 90 degrees, the calculated angle is 87.8 degrees with an error of 2.4 degrees; when the measured angle is 63 degrees, the calculated angle is 59.3 degrees with an error of 5.8 degrees; and when the measured angle is 12.5 degrees, the calculated angle is 9.4 degrees with an error of 2.4 degrees. The attitude-angle calculation accuracy is therefore relatively high.
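Steps T1) to T4) can be sketched numerically as follows; treating z as the vertical axis, and the sample points themselves, are assumptions for illustration.

```python
import numpy as np

def attitude_angle(p1, p2, p3):
    """Attitude angle from three 3-D points on the marking circle (circle not horizontal)."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)             # normal vector of the marking circle
    proj = np.array([normal[0], normal[1], 0.0])    # projection onto the horizontal (xy) plane
    e = np.array([1.0, 0.0, 0.0])                   # x-axis unit vector, the "e" in the text
    cos_a = np.dot(proj, e) / np.linalg.norm(proj)
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

print(attitude_angle((0, 0, 1), (1, 0, 0.5), (0, 1, 0.5)))  # 45.0 for these sample points
```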
The technical means disclosed by the scheme of the present invention are not limited to those disclosed in the above embodiments, and also include technical schemes formed by any combination of the above technical features. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also considered to be within the scope of the present invention.

Claims (3)

1. A binocular stereoscopic vision badminton positioning identification and posture calculation method is characterized by comprising the following steps:
s1): collecting images of shuttlecocks in different postures as a sample data set;
s2): constructing a deep neural network based on the Faster-RCNN algorithm, wherein the Faster-RCNN algorithm comprises a candidate-box extraction module and a detection module;
s3): training the deep neural network on the sample data set to obtain a Faster-RCNN training model, wherein the training model comprises a convolutional layer, a region proposal network, a region-of-interest pooling layer and a classifier; the convolutional layer extracts feature maps from the images, the region proposal network generates candidate regions, the region-of-interest pooling layer collects the input feature maps and candidate sets and extracts candidate feature maps, and the classifier uses the candidate feature maps to determine the category of each candidate while bounding-box regression is performed again to obtain the accurate position of the recognition target;
s4): identifying shuttlecocks with the Faster-RCNN training model and determining their positions in the images;
s5): performing three-dimensional reconstruction of the identified shuttlecocks with binocular stereo vision and determining their spatial positions;
s6): carrying out binarization and filtering processing on the shuttlecock image from step S5) with a Canny operator, extracting clear edges, and calculating the real-time attitude angle of the shuttlecock.
2. The binocular stereoscopic vision badminton positioning, identifying and posture calculating method according to claim 1, characterized in that: in the step S5), the three-dimensional reconstruction of the shuttlecock by the binocular stereoscopic vision comprises the following steps:
p1): rotating and translating the world coordinate system to obtain coordinates in the binocular camera coordinate system;
p2): calculating the image physical coordinates corresponding to the shuttlecock by triangulation;
p3): obtaining the pixel coordinates of the shuttlecock in the image by the least squares method, based on the mapping between physical size and pixel units.
3. The binocular stereoscopic vision badminton positioning, identifying and posture calculating method according to claim 1, characterized in that: the calculation method of the attitude angle in step S6) includes the steps of:
t1): randomly taking three points on the extracted shuttlecock edge to construct a marking circle;
t2): arbitrarily taking three different points on the marking circle to form three different space vectors;
t3): arbitrarily taking two of the space vectors and computing their cross product to obtain the normal vector of the marking circle;
t4): calculating the included angle between the projection of the normal vector in the horizontal plane and the horizontal axis, which is the attitude angle of the shuttlecock.
CN201910859889.1A 2019-09-11 2019-09-11 Binocular stereo vision badminton positioning identification and posture calculation method Active CN110648362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910859889.1A CN110648362B (en) 2019-09-11 2019-09-11 Binocular stereo vision badminton positioning identification and posture calculation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910859889.1A CN110648362B (en) 2019-09-11 2019-09-11 Binocular stereo vision badminton positioning identification and posture calculation method

Publications (2)

Publication Number Publication Date
CN110648362A (en) 2020-01-03
CN110648362B CN110648362B (en) 2022-09-23

Family

ID=68991249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910859889.1A Active CN110648362B (en) 2019-09-11 2019-09-11 Binocular stereo vision badminton positioning identification and posture calculation method

Country Status (1)

Country Link
CN (1) CN110648362B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085770A (en) * 2020-09-10 2020-12-15 上海庞勃特科技有限公司 Binocular multi-target matching and screening method for table tennis track capture
CN112494915A (en) * 2020-12-14 2021-03-16 清华大学深圳国际研究生院 Badminton robot and system and control method thereof
CN113696178A (en) * 2021-07-29 2021-11-26 大箴(杭州)科技有限公司 Control method and system, medium and equipment for intelligent robot grabbing
CN117689717A (en) * 2024-02-01 2024-03-12 青岛科技大学 Ground badminton pose detection method for robot pickup

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017107494A1 (en) * 2015-12-25 2017-06-29 深圳市酷浪云计算有限公司 Method and device for recognizing badminton racket swinging motion
CN108171112A (en) * 2017-12-01 2018-06-15 西安电子科技大学 Vehicle identification and tracking based on convolutional neural networks
CN109345568A (en) * 2018-09-19 2019-02-15 深圳市赢世体育科技有限公司 Sports ground intelligent implementing method and system based on computer vision algorithms
CN109344882A (en) * 2018-09-12 2019-02-15 浙江科技学院 Robot based on convolutional neural networks controls object pose recognition methods

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017107494A1 (en) * 2015-12-25 2017-06-29 深圳市酷浪云计算有限公司 Method and device for recognizing badminton racket swinging motion
CN108171112A (en) * 2017-12-01 2018-06-15 西安电子科技大学 Vehicle identification and tracking based on convolutional neural networks
CN109344882A (en) * 2018-09-12 2019-02-15 浙江科技学院 Robot based on convolutional neural networks controls object pose recognition methods
CN109345568A (en) * 2018-09-19 2019-02-15 深圳市赢世体育科技有限公司 Sports ground intelligent implementing method and system based on computer vision algorithms

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG Qi et al.: "Binocular vision vehicle detection method based on improved Fast-RCNN", Journal of Applied Optics (应用光学) *
JIANG Qiangwei et al.: "Research on target recognition and localization based on CNN binocular feature point matching", Radio Engineering (无线电工程) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085770A (en) * 2020-09-10 2020-12-15 上海庞勃特科技有限公司 Binocular multi-target matching and screening method for table tennis track capture
CN112494915A (en) * 2020-12-14 2021-03-16 清华大学深圳国际研究生院 Badminton robot and system and control method thereof
CN113696178A (en) * 2021-07-29 2021-11-26 大箴(杭州)科技有限公司 Control method and system, medium and equipment for intelligent robot grabbing
CN117689717A (en) * 2024-02-01 2024-03-12 青岛科技大学 Ground badminton pose detection method for robot pickup
CN117689717B (en) * 2024-02-01 2024-05-28 青岛科技大学 Ground badminton pose detection method for robot pickup

Also Published As

Publication number Publication date
CN110648362B (en) 2022-09-23

Similar Documents

Publication Publication Date Title
CN110648362B (en) Binocular stereo vision badminton positioning identification and posture calculation method
CN108416791B (en) Binocular vision-based parallel mechanism moving platform pose monitoring and tracking method
CN109255813B (en) Man-machine cooperation oriented hand-held object pose real-time detection method
CN107301654B (en) Multi-sensor high-precision instant positioning and mapping method
JP6236600B1 (en) Flight parameter measuring apparatus and flight parameter measuring method
CN111210477B (en) Method and system for positioning moving object
US20170287166A1 (en) Camera calibration method using a calibration target
CN111553252B (en) Road pedestrian automatic identification and positioning method based on deep learning and U-V parallax algorithm
CN107481284A (en) Method, apparatus, terminal and the system of target tracking path accuracy measurement
CN112233177B (en) Unmanned aerial vehicle pose estimation method and system
CN110675453B (en) Self-positioning method for moving target in known scene
CN113850865A (en) Human body posture positioning method and system based on binocular vision and storage medium
Yan et al. Joint camera intrinsic and lidar-camera extrinsic calibration
Ding et al. Research on computer vision enhancement in intelligent robot based on machine learning and deep learning
CN112396656A (en) Outdoor mobile robot pose estimation method based on fusion of vision and laser radar
CN110065075A (en) A kind of spatial cell robot external status cognitive method of view-based access control model
CN115371665A (en) Mobile robot positioning method based on depth camera and inertia fusion
CN113487726B (en) Motion capture system and method
CN115147344A (en) Three-dimensional detection and tracking method for parts in augmented reality assisted automobile maintenance
CN110349209A (en) Vibrating spear localization method based on binocular vision
CN113221953A (en) Target attitude identification system and method based on example segmentation and binocular depth estimation
Song et al. Calibration of event-based camera and 3d lidar
CN108180825A (en) A kind of identification of cuboid object dimensional and localization method based on line-structured light
CN116021519A (en) TOF camera-based picking robot hand-eye calibration method and device
CN102542563A (en) Modeling method of forward direction monocular vision of mobile robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant